unable to add OSD-2

[root@ceph-admin ceph]#
[root@ceph-admin ceph]# ceph-deploy disk zap ceph-node1:vdb
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk zap ceph-node1:vdb
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][INFO ] Running command: sgdisk --zap-all --clear --mbrtogpt -- /dev/vdb
[ceph-node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph-node1][DEBUG ] other utilities.
[ceph-node1][DEBUG ] The operation has completed successfully.
Unhandled exception in thread started by
Error in sys.excepthook:

Original exception was:
[root@ceph-admin ceph]#
[root@ceph-admin ceph]#
[root@ceph-admin ceph]#
[root@ceph-admin ceph]#
[root@ceph-admin ceph]#
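The zap itself looks fine: sgdisk reports success, and the "Unhandled exception in thread started by" / "Error in sys.excepthook" lines only appear while the ceph-deploy Python process is shutting down, after the remote work has finished. As a minimal sanity check, assuming SSH access from ceph-admin to ceph-node1 (sgdisk is clearly installed there, since ceph-deploy just ran it), one could confirm the disk now carries a fresh, empty GPT label before moving on:

# Run from ceph-admin: print the (now empty) partition table on /dev/vdb.
ssh ceph-node1 "sgdisk --print /dev/vdb"
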
[root@ceph-admin ceph]# ceph-deploy osd create ceph-node1:vdb
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy osd create ceph-node1:vdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node1:/dev/vdb:
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node1
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node1][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node1 disk /dev/vdb journal None activate True
[ceph-node1][INFO ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/vdb
[ceph-node1][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/vdb
[ceph-node1][WARNIN] Warning: WARNING: the kernel failed to re-read the partition table on /dev/vdb (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
[ceph-node1][WARNIN] Warning: WARNING: the kernel failed to re-read the partition table on /dev/vdb (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph-node1][DEBUG ] meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=28771775 blks
[ceph-node1][DEBUG ]          =          sectsz=512 attr=2, projid32bit=0
[ceph-node1][DEBUG ] data     =          bsize=4096 blocks=115087099, imaxpct=25
[ceph-node1][DEBUG ]          =          sunit=0 swidth=0 blks
[ceph-node1][DEBUG ] naming   =version 2 bsize=4096 ascii-ci=0
[ceph-node1][DEBUG ] log      =internal log bsize=4096 blocks=56194, version=2
[ceph-node1][DEBUG ]          =          sectsz=512 sunit=0 blks, lazy-count=1
[ceph-node1][DEBUG ] realtime =none      extsz=4096 blocks=0, rtextents=0
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph-node1][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.
Unhandled exception in thread started by <function run_and_release at 0x1769c80>
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 245, in run_and_release
    with self._running_lock:
  File "/usr/lib64/python2.6/threading.py", line 117, in acquire
    me = _get_ident()
TypeError: 'NoneType' object is not callable
[root@ceph-admin ceph]#
[root@ceph-admin ceph]#

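ceph-deploy reports "Host ceph-node1 is now ready for osd use", and the TypeError is raised inside execnet's gateway_base.py during interpreter teardown, so the prepare step most likely completed; what never seems to have happened is activation, as the empty "service ceph status" and the df output on the node below suggest. A sketch of how one might check and activate the OSD by hand, assuming the data partition came up as /dev/vdb1 (with the colocated journal on /dev/vdb2):

# On ceph-node1: list the partitions ceph-disk knows about and whether
# they are "prepared" or "active".
ceph-disk list

# If /dev/vdb1 is prepared but not active, activate it on the node ...
ceph-disk activate /dev/vdb1
# ... or do the same from ceph-admin via ceph-deploy.
ceph-deploy osd activate ceph-node1:/dev/vdb1
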
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# service ceph status
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda2             8.9G  1.8G  6.7G  21% /
tmpfs                  15G     0   15G   0% /dev/shm
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
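Nothing in df points at the new OSD: an activated OSD would normally appear as an XFS filesystem mounted under /var/lib/ceph/osd/ (e.g. /var/lib/ceph/osd/ceph-0), so the data partition was prepared but never mounted. Two quick checks on ceph-node1 to confirm that:

# An activated OSD shows up as a directory and a mount under /var/lib/ceph/osd/.
ls /var/lib/ceph/osd/
mount | grep -i ceph
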
[root@ceph-node1 ceph]# ceph status
  cluster 44ab50ea-6393-45d2-bef6-a56235716cf5
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds; clock skew detected on mon.ceph-mon2
   monmap e7: 3 mons at {ceph-mon1=192.168.1.38:6789/0,ceph-mon2=192.168.1.33:6789/0,ceph-mon3=192.168.1.31:6789/0}, election epoch 496, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
   osdmap e1: 0 osds: 0 up, 0 in
    pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
          0 kB used, 0 kB / 0 kB avail
               192 creating
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph osd tree
# id    weight  type name       up/down reweight
-1      0       root default
[root@ceph-node1 ceph]#
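ceph status confirms it: the osdmap has 0 OSDs and the CRUSH tree shows only an empty default root, plus a clock-skew warning on mon.ceph-mon2. A plausible way forward, assuming NTP is installed on the monitor hosts and the data partition is /dev/vdb1 as above, is to sync the clocks and retry the activation from the admin node, then watch for osd.0 to register:

# On each monitor host (CentOS 6), assuming ntpd is configured:
service ntpd restart
# From ceph-admin: retry activation, then watch the OSD come up and in.
ceph-deploy osd activate ceph-node1:/dev/vdb1
ceph osd tree
ceph -s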