unable to add OSD-1

Feb 18th, 2014
[root@ceph-admin ceph]# ceph-deploy -v osd prepare ceph-node1:/dev/vdb
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy -v osd prepare ceph-node1:/dev/vdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node1:/dev/vdb:
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node1
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node1][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node1 disk /dev/vdb journal None activate False
[ceph-node1][INFO ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/vdb
[ceph-node1][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/vdb
[ceph-node1][WARNIN] Warning: WARNING: the kernel failed to re-read the partition table on /dev/vdb (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
[ceph-node1][WARNIN] Warning: WARNING: the kernel failed to re-read the partition table on /dev/vdb (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph-node1][DEBUG ] meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=28771775 blks
[ceph-node1][DEBUG ] = sectsz=512 attr=2, projid32bit=0
[ceph-node1][DEBUG ] data = bsize=4096 blocks=115087099, imaxpct=25
[ceph-node1][DEBUG ] = sunit=0 swidth=0 blks
[ceph-node1][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0
[ceph-node1][DEBUG ] log =internal log bsize=4096 blocks=56194, version=2
[ceph-node1][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[ceph-node1][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.
Unhandled exception in thread started by
Error in sys.excepthook:

Original exception was:
[root@ceph-admin ceph]#
[root@ceph-admin ceph]#
[root@ceph-admin ceph]#
[root@ceph-admin ceph]#
[root@ceph-admin ceph]#
[root@ceph-admin ceph]#
[root@ceph-admin ceph]#
[root@ceph-admin ceph]#
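The prepare step itself finishes ("Host ceph-node1 is now ready for osd use"), but the repeated "kernel failed to re-read the partition table" warnings mean the running kernel may still be using a stale partition table for /dev/vdb. A possible way to refresh it without a reboot, assuming the parted package is installed on ceph-node1, is:

    partprobe /dev/vdb    # run on ceph-node1; asks the kernel to re-read the partition table
    ceph-disk list        # should then show /dev/vdb1 (data) and /dev/vdb2 (journal)

If partprobe also reports the device as busy, rebooting ceph-node1 is the fallback the warning itself suggests.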
[root@ceph-admin ceph]# ceph-deploy -v osd activate ceph-node1:/dev/vdb
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy -v osd activate ceph-node1:/dev/vdb
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-node1:/dev/vdb:
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-node1 disk /dev/vdb
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-node1][INFO ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/vdb
[ceph-node1][WARNIN] ceph-disk: Cannot discover filesystem type: device /dev/vdb: Line is truncated:
[ceph-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init sysvinit --mount /dev/vdb

[root@ceph-admin ceph]#
[root@ceph-admin ceph]#
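Here the activate call hands the whole device (/dev/vdb) to ceph-disk-activate, which then cannot read a filesystem type from it: the raw device carries only the partition table, while the XFS data filesystem lives on /dev/vdb1. A guess at a workaround, not verified against this cluster: point activate at the prepared data partition rather than the whole disk, e.g.

    ceph-deploy -v osd activate ceph-node1:/dev/vdb1

or run ceph-disk-activate /dev/vdb1 directly on ceph-node1.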
[root@ceph-admin ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ] /dev/vda1 swap, swap
[ceph-node1][DEBUG ] /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb :
[ceph-node1][DEBUG ] /dev/vdb1 ceph data, prepared, cluster ceph, journal /dev/vdb2
[ceph-node1][DEBUG ] /dev/vdb2 ceph journal, for /dev/vdb1
Unhandled exception in thread started by
Error in sys.excepthook:

Original exception was:
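The disk listing shows /dev/vdb1 as "prepared" rather than "active", confirming that preparation succeeded but the OSD never activated or registered with the cluster. One way to check whether any OSD has actually joined, assuming the admin keyring is available on ceph-admin, is:

    ceph osd tree    # lists registered OSDs and their up/in state

The trailing "Unhandled exception in thread started by / Error in sys.excepthook" lines with no message appear to be noise from ceph-deploy's own exit rather than part of the OSD failure.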