● ceph-disk@dev-sdb1.service - Ceph disk activation: /dev/sdb1
   Loaded: loaded (/lib/systemd/system/ceph-disk@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2018-09-30 22:55:18 UTC; 6s ago
  Process: 3211 ExecStart=/bin/sh -c timeout $CEPH_DISK_TIMEOUT flock /var/lock/ceph-disk-$(basename %f) /usr/sbin/ceph-disk --verbose --log-stdout trigger --sync %f (code=exited, status=1/FAILURE)
 Main PID: 3211 (code=exited, status=1/FAILURE)

Sep 30 22:55:17 ssd-node2 sh[3211]: main(sys.argv[1:])
Sep 30 22:55:17 ssd-node2 sh[3211]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5323, in main
Sep 30 22:55:17 ssd-node2 sh[3211]: args.func(args)
Sep 30 22:55:17 ssd-node2 sh[3211]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4531, in main_trigger
Sep 30 22:55:17 ssd-node2 sh[3211]: raise Error('return code ' + str(ret))
Sep 30 22:55:17 ssd-node2 sh[3211]: ceph_disk.main.Error: Error: return code 1
Sep 30 22:55:18 ssd-node2 systemd[1]: ceph-disk@dev-sdb1.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 22:55:18 ssd-node2 systemd[1]: Failed to start Ceph disk activation: /dev/sdb1.
Sep 30 22:55:18 ssd-node2 systemd[1]: ceph-disk@dev-sdb1.service: Unit entered failed state.
Sep 30 22:55:18 ssd-node2 systemd[1]: ceph-disk@dev-sdb1.service: Failed with result 'exit-code'.
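
The journal excerpt above ends at the generic "return code 1", so the actual cause of the activation failure is cut off. One way to surface the full verbose output (a minimal sketch, assuming shell access to ssd-node2; for this unit instance systemd expands %f to /dev/sdb1 and $(basename %f) to sdb1) is to re-run the trigger command from the unit's ExecStart by hand:

    flock /var/lock/ceph-disk-sdb1 /usr/sbin/ceph-disk --verbose --log-stdout trigger --sync /dev/sdb1

This drops the timeout wrapper (CEPH_DISK_TIMEOUT is set in the unit's environment, not in an interactive shell) but otherwise matches what systemd ran.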

● ceph-mon.target - ceph target allowing to start/stop all ceph-mon@.service instances at once
   Loaded: loaded (/lib/systemd/system/ceph-mon.target; enabled; vendor preset: enabled)
   Active: active since Sun 2018-09-30 22:15:25 UTC; 39min ago

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

● ceph-osd.target - ceph target allowing to start/stop all ceph-osd@.service instances at once
   Loaded: loaded (/lib/systemd/system/ceph-osd.target; enabled; vendor preset: enabled)
   Active: active since Sun 2018-09-30 22:15:29 UTC; 39min ago

Sep 30 22:15:29 ssd-node2 systemd[1]: Reached target ceph target allowing to start/stop all ceph-osd@.service instances at once.

● ceph-disk@dev-sdb2.service - Ceph disk activation: /dev/sdb2
   Loaded: loaded (/lib/systemd/system/ceph-disk@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2018-09-30 22:15:30 UTC; 39min ago
  Process: 953 ExecStart=/bin/sh -c timeout $CEPH_DISK_TIMEOUT flock /var/lock/ceph-disk-$(basename %f) /usr/sbin/ceph-disk --verbose --log-stdout trigger --sync %f (code=exited, status=1/FAILURE)
 Main PID: 953 (code=exited, status=1/FAILURE)

Sep 30 22:15:30 ssd-node2 sh[953]: main(sys.argv[1:])
Sep 30 22:15:30 ssd-node2 sh[953]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5323, in main
Sep 30 22:15:30 ssd-node2 sh[953]: args.func(args)
Sep 30 22:15:30 ssd-node2 sh[953]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4531, in main_trigger
Sep 30 22:15:30 ssd-node2 sh[953]: raise Error('return code ' + str(ret))
Sep 30 22:15:30 ssd-node2 sh[953]: ceph_disk.main.Error: Error: return code 1
Sep 30 22:15:30 ssd-node2 systemd[1]: ceph-disk@dev-sdb2.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 22:15:30 ssd-node2 systemd[1]: Failed to start Ceph disk activation: /dev/sdb2.
Sep 30 22:15:30 ssd-node2 systemd[1]: ceph-disk@dev-sdb2.service: Unit entered failed state.
Sep 30 22:15:30 ssd-node2 systemd[1]: ceph-disk@dev-sdb2.service: Failed with result 'exit-code'.

● ceph-mds.target - ceph target allowing to start/stop all ceph-mds@.service instances at once
   Loaded: loaded (/lib/systemd/system/ceph-mds.target; enabled; vendor preset: enabled)
   Active: active since Sun 2018-09-30 22:15:25 UTC; 39min ago

● ceph.target - ceph target allowing to start/stop all ceph*@.service instances at once
   Loaded: loaded (/lib/systemd/system/ceph.target; enabled; vendor preset: enabled)
   Active: active since Sun 2018-09-30 22:15:29 UTC; 39min ago

Sep 30 22:15:29 ssd-node2 systemd[1]: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.

● ceph-radosgw.target - ceph target allowing to start/stop all ceph-radosgw@.service instances at once
   Loaded: loaded (/lib/systemd/system/ceph-radosgw.target; enabled; vendor preset: enabled)
   Active: active since Sun 2018-09-30 22:15:29 UTC; 39min ago

Sep 30 22:15:29 ssd-node2 systemd[1]: Reached target ceph target allowing to start/stop all ceph-radosgw@.service instances at once.

● ceph.service - LSB: Start Ceph distributed file system daemons at boot time
   Loaded: loaded (/etc/init.d/ceph; bad; vendor preset: enabled)
   Active: active (exited) since Sun 2018-09-30 22:15:31 UTC; 39min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 1094 ExecStart=/etc/init.d/ceph start (code=exited, status=0/SUCCESS)
    Tasks: 0
   Memory: 0B
      CPU: 0

Sep 30 22:15:29 ssd-node2 systemd[1]: Starting LSB: Start Ceph distributed file system daemons at boot time...
Sep 30 22:15:31 ssd-node2 systemd[1]: Started LSB: Start Ceph distributed file system daemons at boot time.
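
The warning under ceph-mon.target notes that the journal was rotated, so the per-unit excerpts above are incomplete. Assuming the node retains journal history (persistent journald storage is an assumption, not shown in this output), the full tracebacks for the failed activation units can usually be pulled back with journalctl:

    journalctl -u ceph-disk@dev-sdb1.service --no-pager
    journalctl -u ceph-disk@dev-sdb2.service --since "2018-09-30 22:15" --no-pager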