PVE & CEPH errors

root@pve1:~# ceph-disk prepare --zap-disk /dev/sdc
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.

Warning! One or more CRCs don't match. You should repair the disk!

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
Jan 03 10:08:07 pve1 kernel: Alternate GPT is invalid, using primary GPT.
Jan 03 10:08:07 pve1 kernel: sdc: sdc1 sdc2
Jan 03 10:08:07 pve1 kernel: Alternate GPT is invalid, using primary GPT.
Jan 03 10:08:07 pve1 kernel: sdc: sdc1 sdc2
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
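
The GPT/MBR complaints above are sgdisk (called by ceph-disk prepare) tripping over stale partition data left on /dev/sdc from its previous use; --zap-disk rewrites the table anyway, so they are mostly cosmetic. To avoid them on a retry, a rough pre-clean sketch (assumes /dev/sdc really is the disk to be wiped, and it destroys everything on it):

    sgdisk --zap-all /dev/sdc                     # wipe main + backup GPT and the protective MBR
    dd if=/dev/zero of=/dev/sdc bs=1M count=10    # clear leftover signatures at the start of the disk
    partprobe /dev/sdc                            # have the kernel re-read the now-empty table
    udevadm settle                                # wait for udev to finish processing the change
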
Jan 03 10:08:09 pve1 kernel: sdc:
Setting name!
partNum is 1
REALLY setting name!
Jan 03 10:08:09 pve1 kernel: sdc:
The operation has completed successfully.
Jan 03 10:08:10 pve1 kernel: sdc: sdc2
Setting name!
partNum is 0
REALLY setting name!
Jan 03 10:08:10 pve1 kernel: sdc: sdc2
The operation has completed successfully.
Jan 03 10:08:12 pve1 kernel: sdc: sdc1 sdc2
meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=17982802 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=71931207, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=35122, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Jan 03 10:08:14 pve1 kernel: XFS (sdc1): Mounting V4 Filesystem
Jan 03 10:08:14 pve1 kernel: XFS (sdc1): Ending clean mount
Jan 03 10:08:14 pve1 kernel: XFS (sdc1): Unmounting Filesystem
Jan 03 10:08:14 pve1 kernel: sdc: sdc1 sdc2
Jan 03 10:08:14 pve1 systemd[1]: Starting Ceph disk activation: /dev/sdc1...
Jan 03 10:08:15 pve1 sh[9624]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdc1', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7f2059f556e0>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
Jan 03 10:08:15 pve1 sh[9624]: command: Running command: /sbin/init --version
Jan 03 10:08:15 pve1 sh[9624]: command: Running command: /sbin/blkid -o udev -p /dev/sdc1
Jan 03 10:08:15 pve1 sh[9624]: command: Running command: /sbin/blkid -o udev -p /dev/sdc1
Jan 03 10:08:15 pve1 sh[9624]: main_trigger: trigger /dev/sdc1 parttype 4fbd7e29-9d25-41b8-afd0-062c0ceff05d uuid d0b69139-b419-4244-a4f5-2bf3f936156b
Jan 03 10:08:15 pve1 sh[9624]: command: Running command: /usr/sbin/ceph-disk --verbose activate /dev/sdc1
Jan 03 10:08:15 pve1 kernel: XFS (sdc1): Mounting V4 Filesystem
Jan 03 10:08:15 pve1 kernel: XFS (sdc1): Ending clean mount
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
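
The "kernel is still using the old partition table" warning is the tail end of the prepare step finishing while the activation above is already running; until the kernel re-reads the table, udev may not yet have processed the freshly created journal partition. A nudge, assuming the same /dev/sdc layout:

    partprobe /dev/sdc       # or: blockdev --rereadpt /dev/sdc
    udevadm settle
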
Jan 03 10:08:16 pve1 kernel: XFS (sdc1): Unmounting Filesystem
Jan 03 10:08:16 pve1 sh[9624]: main_trigger:
Jan 03 10:08:16 pve1 sh[9624]: main_trigger: main_activate: path = /dev/sdc1
Jan 03 10:08:16 pve1 sh[9624]: get_dm_uuid: get_dm_uuid /dev/sdc1 uuid path is /sys/dev/block/8:33/dm/uuid
Jan 03 10:08:16 pve1 sh[9624]: command: Running command: /sbin/blkid -o udev -p /dev/sdc1
Jan 03 10:08:16 pve1 sh[9624]: command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdc1
Jan 03 10:08:16 pve1 sh[9624]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
Jan 03 10:08:16 pve1 sh[9624]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
Jan 03 10:08:16 pve1 sh[9624]: mount: Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.NpSVEx with options noatime,inode64
Jan 03 10:08:16 pve1 sh[9624]: command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.NpSVEx
Jan 03 10:08:16 pve1 sh[9624]: activate: Cluster uuid is ef3ee364-6a34-47ae-955e-9c393939c4d8
Jan 03 10:08:16 pve1 sh[9624]: command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
Jan 03 10:08:16 pve1 sh[9624]: activate: Cluster name is ceph
Jan 03 10:08:16 pve1 sh[9624]: activate: OSD uuid is d0b69139-b419-4244-a4f5-2bf3f936156b
Jan 03 10:08:16 pve1 sh[9624]: allocate_osd_id: Allocating OSD id...
Jan 03 10:08:16 pve1 sh[9624]: command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise d0b69139-b419-4244-a4f5-2bf3f936156b
Jan 03 10:08:16 pve1 sh[9624]: command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NpSVEx/whoami.9634.tmp
Jan 03 10:08:16 pve1 sh[9624]: activate: OSD id is 1
Jan 03 10:08:16 pve1 sh[9624]: activate: Initializing OSD...
Jan 03 10:08:16 pve1 sh[9624]: command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.NpSVEx/activate.monmap
Jan 03 10:08:16 pve1 sh[9624]: got monmap epoch 1
Jan 03 10:08:16 pve1 sh[9624]: command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap /var/lib/ceph/tmp/mnt.NpSVEx/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.NpSVEx --osd-journal /var/lib/ceph/tmp/mnt.NpSVEx/journal --osd-uuid d0b69139-b419-4244-a4f5-2bf3f936156b --keyring /var/lib/ceph/tmp/mnt.NpSVEx/keyring --setuser ceph --setgroup ceph
Jan 03 10:08:16 pve1 sh[9624]: mount_activate: Failed to activate
Jan 03 10:08:16 pve1 sh[9624]: unmount: Unmounting /var/lib/ceph/tmp/mnt.NpSVEx
Jan 03 10:08:16 pve1 sh[9624]: command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.NpSVEx
Jan 03 10:08:16 pve1 sh[9624]: Traceback (most recent call last):
Jan 03 10:08:16 pve1 sh[9624]: File "/usr/sbin/ceph-disk", line 9, in <module>
Jan 03 10:08:16 pve1 sh[9624]: load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
Jan 03 10:08:16 pve1 sh[9624]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5009, in run
Jan 03 10:08:16 pve1 sh[9624]: main(sys.argv[1:])
Jan 03 10:08:16 pve1 sh[9624]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4960, in main
Jan 03 10:08:16 pve1 sh[9624]: args.func(args)
Jan 03 10:08:16 pve1 sh[9624]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3321, in main_activate
Jan 03 10:08:16 pve1 sh[9624]: reactivate=args.reactivate,
Jan 03 10:08:16 pve1 sh[9624]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3078, in mount_activate
Jan 03 10:08:16 pve1 sh[9624]: (osd_id, cluster) = activate(path, activate_key_template, init)
Jan 03 10:08:16 pve1 sh[9624]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3254, in activate
Jan 03 10:08:16 pve1 sh[9624]: keyring=keyring,
Jan 03 10:08:16 pve1 sh[9624]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2747, in mkfs
Jan 03 10:08:16 pve1 sh[9624]: '--setgroup', get_ceph_group(),
Jan 03 10:08:16 pve1 sh[9624]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2694, in ceph_osd_mkfs
Jan 03 10:08:16 pve1 sh[9624]: raise Error('%s failed : %s' % (str(arguments), error))
Jan 03 10:08:16 pve1 sh[9624]: ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', u'1', '--monmap', '/var/lib/ceph/tmp/mnt.NpSVEx/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.NpSVEx', '--osd-journal', '/var/lib/ceph/tmp/mnt.NpSVEx/journal', '--osd-uuid', u'd0b69139-b419-4244-a4f5-2bf3f936156b', '--keyring', '/var/lib/ceph/tmp/mnt.NpSVEx/keyring', '--setuser', 'ceph', '--setgroup', 'ceph'] failed : 2017-01-03 10:08:16.025868 7f142aeb1800 -1 filestore(/var/lib/ceph/tmp/mnt.NpSVEx) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.NpSVEx/journal: (13) Permission denied
Jan 03 10:08:16 pve1 sh[9624]: 2017-01-03 10:08:16.025901 7f142aeb1800 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13
Jan 03 10:08:16 pve1 sh[9624]: 2017-01-03 10:08:16.025967 7f142aeb1800 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.NpSVEx: (13) Permission denied
Jan 03 10:08:16 pve1 sh[9624]: Traceback (most recent call last):
Jan 03 10:08:16 pve1 sh[9624]: File "/usr/sbin/ceph-disk", line 9, in <module>
Jan 03 10:08:16 pve1 sh[9624]: load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
Jan 03 10:08:16 pve1 sh[9624]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5009, in run
Jan 03 10:08:16 pve1 sh[9624]: main(sys.argv[1:])
Jan 03 10:08:16 pve1 sh[9624]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4960, in main
Jan 03 10:08:16 pve1 sh[9624]: args.func(args)
Jan 03 10:08:16 pve1 sh[9624]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4397, in main_trigger
Jan 03 10:08:16 pve1 sh[9624]: raise Error('return code ' + str(ret))
Jan 03 10:08:16 pve1 sh[9624]: ceph_disk.main.Error: Error: return code 1
Jan 03 10:08:16 pve1 systemd[1]: ceph-disk@dev-sdc1.service: main process exited, code=exited, status=1/FAILURE
Jan 03 10:08:16 pve1 systemd[1]: Failed to start Ceph disk activation: /dev/sdc1.
Jan 03 10:08:16 pve1 systemd[1]: Unit ceph-disk@dev-sdc1.service entered failed state.
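
The real failure is the "(13) Permission denied" from ceph-osd --mkfs: it cannot create the journal, which for a ceph-disk prepared OSD is a symlink on the data partition pointing at the journal partition (sdc2 here). The mkfs runs with --setuser ceph --setgroup ceph (see the command above), so if the /dev/sdc2 device node is still owned by root (for example because udev never re-applied the Ceph ownership rules after the re-partitioning), it fails exactly like this. A diagnostic/workaround sketch, assuming the journal really is partition 2 of /dev/sdc:

    ls -l /dev/sdc2                      # should be ceph:ceph, not root:disk
    sgdisk -i 2 /dev/sdc                 # type GUID should be the Ceph journal type,
                                         # 45b0969e-9b03-4f30-b4c6-b4b80ceff106
    chown ceph:ceph /dev/sdc2            # quick, non-persistent workaround
    ceph-disk activate /dev/sdc1         # retry the activation by hand
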
root@pve1:~# Jan 03 10:08:16 pve1 systemd[1]: Starting Ceph disk activation: /dev/sdc1...
Jan 03 10:08:16 pve1 sh[9751]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdc1', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7f01f6eeb6e0>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
Jan 03 10:08:16 pve1 sh[9751]: command: Running command: /sbin/init --version
Jan 03 10:08:16 pve1 sh[9751]: command: Running command: /sbin/blkid -o udev -p /dev/sdc1
Jan 03 10:08:16 pve1 sh[9751]: command: Running command: /sbin/blkid -o udev -p /dev/sdc1
Jan 03 10:08:16 pve1 sh[9751]: main_trigger: trigger /dev/sdc1 parttype 4fbd7e29-9d25-41b8-afd0-062c0ceff05d uuid d0b69139-b419-4244-a4f5-2bf3f936156b
Jan 03 10:08:16 pve1 sh[9751]: command: Running command: /usr/sbin/ceph-disk --verbose activate /dev/sdc1
Jan 03 10:08:16 pve1 kernel: XFS (sdc1): Mounting V4 Filesystem
Jan 03 10:08:16 pve1 kernel: XFS (sdc1): Ending clean mount
Jan 03 10:08:17 pve1 kernel: XFS (sdc1): Unmounting Filesystem

Jan 03 10:08:17 pve1 sh[9751]: main_trigger:
Jan 03 10:08:17 pve1 sh[9751]: main_trigger: main_activate: path = /dev/sdc1
Jan 03 10:08:17 pve1 sh[9751]: get_dm_uuid: get_dm_uuid /dev/sdc1 uuid path is /sys/dev/block/8:33/dm/uuid
Jan 03 10:08:17 pve1 sh[9751]: command: Running command: /sbin/blkid -o udev -p /dev/sdc1
Jan 03 10:08:17 pve1 sh[9751]: command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdc1
Jan 03 10:08:17 pve1 sh[9751]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
Jan 03 10:08:17 pve1 sh[9751]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
Jan 03 10:08:17 pve1 sh[9751]: mount: Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.amH6fE with options noatime,inode64
Jan 03 10:08:17 pve1 sh[9751]: command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.amH6fE
Jan 03 10:08:17 pve1 sh[9751]: activate: Cluster uuid is ef3ee364-6a34-47ae-955e-9c393939c4d8
Jan 03 10:08:17 pve1 sh[9751]: command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
Jan 03 10:08:17 pve1 sh[9751]: activate: Cluster name is ceph
Jan 03 10:08:17 pve1 sh[9751]: activate: OSD uuid is d0b69139-b419-4244-a4f5-2bf3f936156b
Jan 03 10:08:17 pve1 sh[9751]: activate: OSD id is 1
Jan 03 10:08:17 pve1 sh[9751]: activate: Initializing OSD...
Jan 03 10:08:17 pve1 sh[9751]: command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.amH6fE/activate.monmap
Jan 03 10:08:17 pve1 sh[9751]: got monmap epoch 1
Jan 03 10:08:17 pve1 sh[9751]: command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap /var/lib/ceph/tmp/mnt.amH6fE/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.amH6fE --osd-journal /var/lib/ceph/tmp/mnt.amH6fE/journal --osd-uuid d0b69139-b419-4244-a4f5-2bf3f936156b --keyring /var/lib/ceph/tmp/mnt.amH6fE/keyring --setuser ceph --setgroup ceph
Jan 03 10:08:17 pve1 sh[9751]: mount_activate: Failed to activate
Jan 03 10:08:17 pve1 sh[9751]: unmount: Unmounting /var/lib/ceph/tmp/mnt.amH6fE
Jan 03 10:08:17 pve1 sh[9751]: command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.amH6fE
Jan 03 10:08:17 pve1 sh[9751]: Traceback (most recent call last):
Jan 03 10:08:17 pve1 sh[9751]: File "/usr/sbin/ceph-disk", line 9, in <module>
Jan 03 10:08:17 pve1 sh[9751]: load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
Jan 03 10:08:17 pve1 sh[9751]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5009, in run
Jan 03 10:08:17 pve1 sh[9751]: main(sys.argv[1:])
Jan 03 10:08:17 pve1 sh[9751]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4960, in main
Jan 03 10:08:17 pve1 sh[9751]: args.func(args)
Jan 03 10:08:17 pve1 sh[9751]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3321, in main_activate
Jan 03 10:08:17 pve1 sh[9751]: reactivate=args.reactivate,
Jan 03 10:08:17 pve1 sh[9751]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3078, in mount_activate
Jan 03 10:08:17 pve1 sh[9751]: (osd_id, cluster) = activate(path, activate_key_template, init)
Jan 03 10:08:17 pve1 sh[9751]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3254, in activate
Jan 03 10:08:17 pve1 sh[9751]: keyring=keyring,
Jan 03 10:08:17 pve1 sh[9751]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2747, in mkfs
Jan 03 10:08:17 pve1 sh[9751]: '--setgroup', get_ceph_group(),
Jan 03 10:08:17 pve1 sh[9751]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2694, in ceph_osd_mkfs
Jan 03 10:08:17 pve1 sh[9751]: raise Error('%s failed : %s' % (str(arguments), error))
Jan 03 10:08:17 pve1 sh[9751]: ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', u'1', '--monmap', '/var/lib/ceph/tmp/mnt.amH6fE/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.amH6fE', '--osd-journal', '/var/lib/ceph/tmp/mnt.amH6fE/journal', '--osd-uuid', u'd0b69139-b419-4244-a4f5-2bf3f936156b', '--keyring', '/var/lib/ceph/tmp/mnt.amH6fE/keyring', '--setuser', 'ceph', '--setgroup', 'ceph'] failed : 2017-01-03 10:08:17.229200 7fb12075c800 -1 filestore(/var/lib/ceph/tmp/mnt.amH6fE) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.amH6fE/journal: (13) Permission denied
Jan 03 10:08:17 pve1 sh[9751]: 2017-01-03 10:08:17.229221 7fb12075c800 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13
Jan 03 10:08:17 pve1 sh[9751]: 2017-01-03 10:08:17.229270 7fb12075c800 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.amH6fE: (13) Permission denied
Jan 03 10:08:17 pve1 sh[9751]: Traceback (most recent call last):
Jan 03 10:08:17 pve1 sh[9751]: File "/usr/sbin/ceph-disk", line 9, in <module>
Jan 03 10:08:17 pve1 sh[9751]: load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
Jan 03 10:08:17 pve1 sh[9751]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5009, in run
Jan 03 10:08:17 pve1 sh[9751]: main(sys.argv[1:])
Jan 03 10:08:17 pve1 sh[9751]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4960, in main
Jan 03 10:08:17 pve1 sh[9751]: args.func(args)
Jan 03 10:08:17 pve1 sh[9751]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4397, in main_trigger
Jan 03 10:08:17 pve1 sh[9751]: raise Error('return code ' + str(ret))
Jan 03 10:08:17 pve1 systemd[1]: ceph-disk@dev-sdc1.service: main process exited, code=exited, status=1/FAILURE
Jan 03 10:08:17 pve1 systemd[1]: Failed to start Ceph disk activation: /dev/sdc1.
Jan 03 10:08:17 pve1 systemd[1]: Unit ceph-disk@dev-sdc1.service entered failed state.
Jan 03 10:08:17 pve1 sh[9751]: ceph_disk.main.Error: Error: return code 1
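
The retriggered activation (sh[9751]) dies on the same permission error, so nothing here is specific to the first attempt. For a fix that survives reboots, the journal partition needs to carry the Ceph journal type GUID so udev and the ceph-disk trigger machinery re-apply the ownership at every boot; one possible sequence, again assuming sdc2 is the journal partition of /dev/sdc:

    sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdc
    partprobe /dev/sdc
    udevadm trigger --subsystem-match=block --action=add
    systemctl start ceph-disk@dev-sdc1.service
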