[root@cephcluster2 ~]# sudo systemctl status ceph\*.service ceph\*.target
● ceph-radosgw.target - ceph target allowing to start/stop all ceph-radosgw@.service instances at once
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw.target; enabled; vendor preset: enabled)
Active: active since Tue 2017-01-10 00:40:15 AEDT; 25min ago

● ceph-mon@cephcluster2.service - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2017-01-10 00:51:05 AEDT; 14min ago
Main PID: 7098 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@cephcluster2.service
└─7098 /usr/bin/ceph-mon -f --cluster ceph --id cephcluster2 --setuser ceph --setgroup ceph

Jan 10 00:57:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 00:57:10.655812 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
Jan 10 00:58:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 00:58:10.656169 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
Jan 10 00:59:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 00:59:10.656557 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
Jan 10 01:00:00 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:00:00.000308 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
Jan 10 01:00:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:00:10.656893 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
Jan 10 01:01:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:01:10.657309 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
Jan 10 01:02:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:02:10.657660 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
Jan 10 01:03:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:03:10.657998 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
Jan 10 01:04:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:04:10.658387 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
Jan 10 01:05:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:05:10.658747 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254

● ceph-mds.target - ceph target allowing to start/stop all ceph-mds@.service instances at once
Loaded: loaded (/usr/lib/systemd/system/ceph-mds.target; enabled; vendor preset: enabled)
Active: active since Tue 2017-01-10 00:40:15 AEDT; 25min ago

● ceph-disk@dev-nvme3n1p1.service - Ceph disk activation: /dev/nvme3n1p1
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:17 AEDT; 23min ago
Main PID: 1846 (code=exited, status=124)

Jan 10 00:40:17 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme3n1p1...
Jan 10 00:42:17 cephcluster2.local systemd[1]: ceph-disk@dev-nvme3n1p1.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:17 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme3n1p1.
Jan 10 00:42:17 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme3n1p1.service entered failed state.
Jan 10 00:42:17 cephcluster2.local systemd[1]: ceph-disk@dev-nvme3n1p1.service failed.

● ceph-disk@dev-nvme5n1p1.service - Ceph disk activation: /dev/nvme5n1p1
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1824 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme5n1p1...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme5n1p1.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme5n1p1.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme5n1p1.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme5n1p1.service failed.

● ceph-osd@17.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2017-01-10 00:40:35 AEDT; 24min ago
Main PID: 5016 (code=exited, status=1/FAILURE)

Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@17.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@17.service failed.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@17.service holdoff time over, scheduling restart.
Jan 10 00:40:35 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@17.service
Jan 10 00:40:35 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@17.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@17.service failed.
Jan 10 00:42:04 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:42:17 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'

● ceph-disk@dev-nvme5n1p2.service - Ceph disk activation: /dev/nvme5n1p2
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1827 (code=exited, status=124)

Jan 10 00:42:16 cephcluster2.local sh[1827]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/nvme5n1p2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x2883d70>, log_stdo...
Jan 10 00:42:16 cephcluster2.local sh[1827]: command: Running command: /usr/sbin/init --version
Jan 10 00:42:16 cephcluster2.local sh[1827]: command: Running command: /usr/sbin/blkid -o udev -p /dev/nvme5n1p2
Jan 10 00:42:16 cephcluster2.local sh[1827]: command: Running command: /usr/sbin/blkid -o udev -p /dev/nvme5n1p2
Jan 10 00:42:16 cephcluster2.local sh[1827]: main_trigger: trigger /dev/nvme5n1p2 parttype 45b0969e-9b03-4f30-b4c6-b4b80ceff106 uuid 058b145b-2bb1-424c-89fb-34603fdfc9da
Jan 10 00:42:16 cephcluster2.local sh[1827]: command: Running command: /usr/sbin/ceph-disk --verbose activate-journal /dev/nvme5n1p2
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme5n1p2.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme5n1p2.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme5n1p2.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme5n1p2.service failed.

● ceph-disk@dev-nvme4n1p1.service - Ceph disk activation: /dev/nvme4n1p1
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1660 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme4n1p1...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme4n1p1.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme4n1p1.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme4n1p1.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme4n1p1.service failed.

● ceph-disk@dev-nvme3n1p2.service - Ceph disk activation: /dev/nvme3n1p2
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:17 AEDT; 23min ago
Main PID: 1845 (code=exited, status=124)

Jan 10 00:40:17 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme3n1p2...
Jan 10 00:42:17 cephcluster2.local systemd[1]: ceph-disk@dev-nvme3n1p2.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:17 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme3n1p2.
Jan 10 00:42:17 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme3n1p2.service entered failed state.
Jan 10 00:42:17 cephcluster2.local systemd[1]: ceph-disk@dev-nvme3n1p2.service failed.

● ceph-disk@dev-nvme1n1p2.service - Ceph disk activation: /dev/nvme1n1p2
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1773 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme1n1p2...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme1n1p2.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme1n1p2.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme1n1p2.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme1n1p2.service failed.

● ceph-disk@dev-nvme7n1p2.service - Ceph disk activation: /dev/nvme7n1p2
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1578 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme7n1p2...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme7n1p2.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme7n1p2.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme7n1p2.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme7n1p2.service failed.

● ceph-osd@18.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2017-01-10 00:46:48 AEDT; 18min ago
Main PID: 5021 (code=exited, status=1/FAILURE)

Jan 10 00:46:48 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:46:48 cephcluster2.local systemd[1]: Unit ceph-osd@18.service entered failed state.
Jan 10 00:46:48 cephcluster2.local systemd[1]: ceph-osd@18.service failed.
Jan 10 00:46:48 cephcluster2.local systemd[1]: ceph-osd@18.service holdoff time over, scheduling restart.
Jan 10 00:46:48 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@18.service
Jan 10 00:46:48 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:46:48 cephcluster2.local systemd[1]: Unit ceph-osd@18.service entered failed state.
Jan 10 00:46:48 cephcluster2.local systemd[1]: ceph-osd@18.service failed.
Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'

● ceph-disk@dev-nvme6n1p2.service - Ceph disk activation: /dev/nvme6n1p2
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1705 (code=exited, status=124)

Jan 10 00:42:04 cephcluster2.local sh[1705]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/nvme6n1p2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x11f9d70>, log_stdo...
Jan 10 00:42:04 cephcluster2.local sh[1705]: command: Running command: /usr/sbin/init --version
Jan 10 00:42:04 cephcluster2.local sh[1705]: command: Running command: /usr/sbin/blkid -o udev -p /dev/nvme6n1p2
Jan 10 00:42:04 cephcluster2.local sh[1705]: command: Running command: /usr/sbin/blkid -o udev -p /dev/nvme6n1p2
Jan 10 00:42:04 cephcluster2.local sh[1705]: main_trigger: trigger /dev/nvme6n1p2 parttype 45b0969e-9b03-4f30-b4c6-b4b80ceff106 uuid 0eb4e43f-b515-46aa-9675-13ef16752d43
Jan 10 00:42:04 cephcluster2.local sh[1705]: command: Running command: /usr/sbin/ceph-disk --verbose activate-journal /dev/nvme6n1p2
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme6n1p2.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme6n1p2.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme6n1p2.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme6n1p2.service failed.

● ceph-osd@11.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2017-01-10 00:48:19 AEDT; 16min ago

Jan 10 00:48:19 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:48:19 cephcluster2.local systemd[1]: Unit ceph-osd@11.service entered failed state.
Jan 10 00:48:19 cephcluster2.local systemd[1]: ceph-osd@11.service failed.
Jan 10 00:48:19 cephcluster2.local systemd[1]: ceph-osd@11.service holdoff time over, scheduling restart.
Jan 10 00:48:19 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@11.service
Jan 10 00:48:19 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:48:19 cephcluster2.local systemd[1]: Unit ceph-osd@11.service entered failed state.
Jan 10 00:48:19 cephcluster2.local systemd[1]: ceph-osd@11.service failed.
Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'

● ceph-disk@dev-nvme0n1p2.service - Ceph disk activation: /dev/nvme0n1p2
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1756 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme0n1p2...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme0n1p2.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme0n1p2.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme0n1p2.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme0n1p2.service failed.

● ceph-osd@10.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2017-01-10 00:52:09 AEDT; 13min ago
Process: 7617 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Process: 7575 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 7617 (code=exited, status=1/FAILURE)

Jan 10 00:52:08 cephcluster2.local systemd[1]: Unit ceph-osd@10.service entered failed state.
Jan 10 00:52:08 cephcluster2.local systemd[1]: ceph-osd@10.service failed.
Jan 10 00:52:09 cephcluster2.local systemd[1]: ceph-osd@10.service holdoff time over, scheduling restart.
Jan 10 00:52:09 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@10.service
Jan 10 00:52:09 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:52:09 cephcluster2.local systemd[1]: Unit ceph-osd@10.service entered failed state.
Jan 10 00:52:09 cephcluster2.local systemd[1]: ceph-osd@10.service failed.

● ceph-osd@12.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2017-01-10 00:40:35 AEDT; 24min ago
Main PID: 5058 (code=exited, status=1/FAILURE)

Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@12.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@12.service failed.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@12.service holdoff time over, scheduling restart.
Jan 10 00:40:35 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@12.service
Jan 10 00:40:35 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@12.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@12.service failed.
Jan 10 00:42:04 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:42:17 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'

● ceph.target - ceph target allowing to start/stop all ceph*@.service instances at once
Loaded: loaded (/usr/lib/systemd/system/ceph.target; enabled; vendor preset: enabled)
Active: active since Tue 2017-01-10 00:42:04 AEDT; 23min ago

Jan 10 00:42:04 cephcluster2.local systemd[1]: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.
Jan 10 00:42:04 cephcluster2.local systemd[1]: Starting ceph target allowing to start/stop all ceph*@.service instances at once.

● ceph-disk@dev-nvme0n1p1.service - Ceph disk activation: /dev/nvme0n1p1
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:50:13 AEDT; 15min ago
Main PID: 6838 (code=exited, status=1/FAILURE)

Jan 10 00:50:13 cephcluster2.local sh[6838]: main(sys.argv[1:])
Jan 10 00:50:13 cephcluster2.local sh[6838]: File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4960, in main
Jan 10 00:50:13 cephcluster2.local sh[6838]: args.func(args)
Jan 10 00:50:13 cephcluster2.local sh[6838]: File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4397, in main_trigger
Jan 10 00:50:13 cephcluster2.local sh[6838]: raise Error('return code ' + str(ret))
Jan 10 00:50:13 cephcluster2.local sh[6838]: ceph_disk.main.Error: Error: return code 1
Jan 10 00:50:13 cephcluster2.local systemd[1]: ceph-disk@dev-nvme0n1p1.service: main process exited, code=exited, status=1/FAILURE
Jan 10 00:50:13 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme0n1p1.
Jan 10 00:50:13 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme0n1p1.service entered failed state.
Jan 10 00:50:13 cephcluster2.local systemd[1]: ceph-disk@dev-nvme0n1p1.service failed.

● ceph-mon.target - ceph target allowing to start/stop all ceph-mon@.service instances at once
Loaded: loaded (/usr/lib/systemd/system/ceph-mon.target; enabled; vendor preset: enabled)
Active: active since Tue 2017-01-10 00:40:15 AEDT; 25min ago

● ceph-disk@dev-nvme2n1p1.service - Ceph disk activation: /dev/nvme2n1p1
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:04 AEDT; 23min ago
Main PID: 1553 (code=exited, status=1/FAILURE)

Jan 10 00:42:04 cephcluster2.local sh[1553]: main(sys.argv[1:])
Jan 10 00:42:04 cephcluster2.local sh[1553]: File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4960, in main
Jan 10 00:42:04 cephcluster2.local sh[1553]: args.func(args)
Jan 10 00:42:04 cephcluster2.local sh[1553]: File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4397, in main_trigger
Jan 10 00:42:04 cephcluster2.local sh[1553]: raise Error('return code ' + str(ret))
Jan 10 00:42:04 cephcluster2.local sh[1553]: ceph_disk.main.Error: Error: return code 1
Jan 10 00:42:04 cephcluster2.local systemd[1]: ceph-disk@dev-nvme2n1p1.service: main process exited, code=exited, status=1/FAILURE
Jan 10 00:42:04 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme2n1p1.
Jan 10 00:42:04 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme2n1p1.service entered failed state.
Jan 10 00:42:04 cephcluster2.local systemd[1]: ceph-disk@dev-nvme2n1p1.service failed.

● ceph-disk@dev-nvme8n1p2.service - Ceph disk activation: /dev/nvme8n1p2
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1818 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme8n1p2...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme8n1p2.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme8n1p2.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme8n1p2.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme8n1p2.service failed.

● ceph-osd@14.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2017-01-10 00:40:35 AEDT; 24min ago
Main PID: 5030 (code=exited, status=1/FAILURE)

Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@14.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@14.service failed.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@14.service holdoff time over, scheduling restart.
Jan 10 00:40:35 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@14.service
Jan 10 00:40:35 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@14.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@14.service failed.
Jan 10 00:42:04 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:42:17 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'

● ceph-disk@dev-nvme4n1p2.service - Ceph disk activation: /dev/nvme4n1p2
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1661 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme4n1p2...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme4n1p2.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme4n1p2.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme4n1p2.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme4n1p2.service failed.

● ceph-osd.target - ceph target allowing to start/stop all ceph-osd@.service instances at once
Loaded: loaded (/usr/lib/systemd/system/ceph-osd.target; enabled; vendor preset: enabled)
Active: active since Tue 2017-01-10 00:42:04 AEDT; 23min ago

Jan 10 00:42:04 cephcluster2.local systemd[1]: Reached target ceph target allowing to start/stop all ceph-osd@.service instances at once.
Jan 10 00:42:04 cephcluster2.local systemd[1]: Starting ceph target allowing to start/stop all ceph-osd@.service instances at once.

● ceph-disk@dev-nvme1n1p1.service - Ceph disk activation: /dev/nvme1n1p1
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1770 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme1n1p1...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme1n1p1.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme1n1p1.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme1n1p1.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme1n1p1.service failed.

● ceph-osd@15.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2017-01-10 00:48:19 AEDT; 16min ago
Main PID: 5008 (code=exited, status=1/FAILURE)

Jan 10 00:48:19 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:48:19 cephcluster2.local systemd[1]: Unit ceph-osd@15.service entered failed state.
Jan 10 00:48:19 cephcluster2.local systemd[1]: ceph-osd@15.service failed.
Jan 10 00:48:19 cephcluster2.local systemd[1]: ceph-osd@15.service holdoff time over, scheduling restart.
Jan 10 00:48:19 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@15.service
Jan 10 00:48:19 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:48:19 cephcluster2.local systemd[1]: Unit ceph-osd@15.service entered failed state.
Jan 10 00:48:19 cephcluster2.local systemd[1]: ceph-osd@15.service failed.
Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'

● ceph-disk@dev-nvme8n1p1.service - Ceph disk activation: /dev/nvme8n1p1
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1819 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme8n1p1...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme8n1p1.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme8n1p1.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme8n1p1.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme8n1p1.service failed.

● ceph-disk@dev-nvme9n1p1.service - Ceph disk activation: /dev/nvme9n1p1
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:18 AEDT; 22min ago
Main PID: 1894 (code=exited, status=124)

Jan 10 00:40:18 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme9n1p1...
Jan 10 00:42:18 cephcluster2.local systemd[1]: ceph-disk@dev-nvme9n1p1.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:18 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme9n1p1.
Jan 10 00:42:18 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme9n1p1.service entered failed state.
Jan 10 00:42:18 cephcluster2.local systemd[1]: ceph-disk@dev-nvme9n1p1.service failed.

● ceph-disk@dev-nvme9n1p2.service - Ceph disk activation: /dev/nvme9n1p2
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:18 AEDT; 22min ago
Main PID: 1897 (code=exited, status=124)

Jan 10 00:42:17 cephcluster2.local sh[1897]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/nvme9n1p2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x202cd70>, log_stdo...
Jan 10 00:42:17 cephcluster2.local sh[1897]: command: Running command: /usr/sbin/init --version
Jan 10 00:42:17 cephcluster2.local sh[1897]: command: Running command: /usr/sbin/blkid -o udev -p /dev/nvme9n1p2
Jan 10 00:42:17 cephcluster2.local sh[1897]: command: Running command: /usr/sbin/blkid -o udev -p /dev/nvme9n1p2
Jan 10 00:42:17 cephcluster2.local sh[1897]: main_trigger: trigger /dev/nvme9n1p2 parttype 45b0969e-9b03-4f30-b4c6-b4b80ceff106 uuid 7bf65df4-b2cd-4515-8e9c-f48f98c005e8
Jan 10 00:42:17 cephcluster2.local sh[1897]: command: Running command: /usr/sbin/ceph-disk --verbose activate-journal /dev/nvme9n1p2
Jan 10 00:42:18 cephcluster2.local systemd[1]: ceph-disk@dev-nvme9n1p2.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:18 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme9n1p2.
Jan 10 00:42:18 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme9n1p2.service entered failed state.
Jan 10 00:42:18 cephcluster2.local systemd[1]: ceph-disk@dev-nvme9n1p2.service failed.

● ceph-disk@dev-nvme2n1p2.service - Ceph disk activation: /dev/nvme2n1p2
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1554 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme2n1p2...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme2n1p2.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme2n1p2.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme2n1p2.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme2n1p2.service failed.

● ceph-osd@13.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2017-01-10 00:40:35 AEDT; 24min ago
Main PID: 5024 (code=exited, status=1/FAILURE)

Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@13.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@13.service failed.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@13.service holdoff time over, scheduling restart.
Jan 10 00:40:35 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@13.service
Jan 10 00:40:35 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@13.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@13.service failed.
Jan 10 00:42:04 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:42:17 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'

● ceph-disk@dev-nvme6n1p1.service - Ceph disk activation: /dev/nvme6n1p1
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1699 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme6n1p1...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme6n1p1.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme6n1p1.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme6n1p1.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme6n1p1.service failed.

● ceph-disk@dev-nvme7n1p1.service - Ceph disk activation: /dev/nvme7n1p1
Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
Main PID: 1576 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme7n1p1...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme7n1p1.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme7n1p1.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme7n1p1.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme7n1p1.service failed.

● ceph-osd@16.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2017-01-10 00:40:35 AEDT; 24min ago
Main PID: 5013 (code=exited, status=1/FAILURE)

Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@16.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@16.service failed.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@16.service holdoff time over, scheduling restart.
Jan 10 00:40:35 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@16.service
Jan 10 00:40:35 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@16.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@16.service failed.
Jan 10 00:42:04 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:42:17 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'

● ceph-osd@9.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2017-01-10 00:51:10 AEDT; 14min ago
Process: 7014 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 7184 (ceph-osd)
CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@9.service
└─7184 /usr/bin/ceph-osd -f --cluster ceph --id 9 --setuser ceph --setgroup ceph

Jan 10 01:05:10 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:10.143626 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:50.143624)
Jan 10 01:05:11 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:11.143758 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:51.143756)
Jan 10 01:05:12 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:12.143891 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:52.143889)
Jan 10 01:05:13 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:13.143970 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:53.143968)
Jan 10 01:05:13 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:13.383400 7fc8d12e2700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:53.383398)
Jan 10 01:05:14 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:14.144082 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:54.144077)
Jan 10 01:05:15 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:15.083916 7fc8d12e2700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:55.083915)
Jan 10 01:05:15 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:15.144227 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:55.144225)
Jan 10 01:05:16 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:16.144377 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:56.144375)
Jan 10 01:05:17 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:17.144510 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:57.144508)
Hint: Some lines were ellipsized, use -l to show in full.