unable to add OSD
alohamora
Feb 18th, 2014
[root@ceph-admin ceph]# ceph-deploy -v osd activate ceph-node1:/dev/vdb:/dev/vdb1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy -v osd activate ceph-node1:/dev/vdb:/dev/vdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-node1:/dev/vdb:/dev/vdb1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-node1 disk /dev/vdb
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-node1][INFO ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/vdb
[ceph-node1][WARNIN] ceph-disk: Cannot discover filesystem type: device /dev/vdb: Line is truncated:
[ceph-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init sysvinit --mount /dev/vdb

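[note] The first failure looks like an argument problem rather than a cluster problem: "activate" was handed the whole disk /dev/vdb as the data device, but ceph-disk-activate expects a prepared data partition carrying a filesystem, which is why it cannot discover a filesystem type on /dev/vdb. A minimal check on ceph-node1, assuming the layout above:

    ceph-disk list        # shows how ceph-disk classifies each device/partition
    blkid /dev/vdb1       # a prepared data partition should report a filesystem (typically xfs)

If /dev/vdb was never prepared, "ceph-deploy osd prepare ceph-node1:/dev/vdb" has to run first; activation then targets the resulting data partition (e.g. /dev/vdb1), not the raw disk.
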
[root@ceph-admin ceph]# ceph-deploy -v osd activate ceph-node1:/dev/vdb1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy -v osd activate ceph-node1:/dev/vdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-node1:/dev/vdb1:
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-node1 disk /dev/vdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-node1][INFO ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/vdb1
[ceph-node1][WARNIN] 2014-02-18 10:49:14.971328 7fe911f56700 0 librados: client.bootstrap-osd authentication error (1) Operation not permitted
[ceph-node1][WARNIN] Error connecting to cluster: PermissionError
[ceph-node1][WARNIN] ERROR:ceph-disk:Failed to activate
[ceph-node1][WARNIN] ceph-disk: Error: ceph osd create failed: Command '/usr/bin/ceph' returned non-zero exit status 1:
[ceph-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init sysvinit --mount /dev/vdb1

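[note] The second attempt gets past device discovery and dies on authentication: the monitors reject client.bootstrap-osd ("Operation not permitted"), so "ceph osd create" can never register the new OSD. That usually means the bootstrap-osd keyring on ceph-node1 is missing or holds a key that does not match what the monitors have. One way to compare the two, assuming default paths:

    # on ceph-node1: the key activation will authenticate with
    cat /var/lib/ceph/bootstrap-osd/ceph.keyring
    # on any node with an admin keyring: the key the monitors expect
    ceph auth get client.bootstrap-osd

If the file is absent or the keys differ, re-fetching keys from a monitor with "ceph-deploy gatherkeys ceph-mon1" and pushing the bootstrap-osd keyring back into /var/lib/ceph/bootstrap-osd/ is the usual fix.
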
[root@ceph-admin ceph]# ll

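[note] The output of "ll" was not captured in the paste; presumably it was checking the working directory on ceph-admin for the keyrings ceph-deploy gathers (ceph.bootstrap-osd.keyring among them). The equivalent check on the OSD node, assuming default locations:

    ls -l /etc/ceph /var/lib/ceph/bootstrap-osd
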
[root@ceph-node1 ceph]# ceph -c ./ceph.conf -k ceph.client.admin.keyring status
  cluster 44ab50ea-6393-45d2-bef6-a56235716cf5
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds; clock skew detected on mon.ceph-mon2
   monmap e7: 3 mons at {ceph-mon1=192.168.1.38:6789/0,ceph-mon2=192.168.1.33:6789/0,ceph-mon3=192.168.1.31:6789/0}, election epoch 496, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
   osdmap e1: 0 osds: 0 up, 0 in
    pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
          0 kB used, 0 kB / 0 kB avail
               192 creating
[root@ceph-node1 ceph]#

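[note] "osdmap e1: 0 osds" confirms no OSD was ever registered, which matches the failed activations above; all 192 placement groups sit in "creating" because there is nowhere to place them. HEALTH_ERR also flags clock skew on mon.ceph-mon2, and monitors are sensitive to drift, so it is worth syncing the monitor clocks before retrying. On CentOS 6, for example (the NTP server is a placeholder):

    service ntpd stop
    ntpdate pool.ntp.org
    service ntpd start
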
[root@ceph-node1 ceph]# service ceph status
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda2       8.9G  1.8G  6.7G  21% /
tmpfs            15G     0   15G   0% /dev/shm
[root@ceph-node1 ceph]#
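
[note] An empty "service ceph status" and a df with no /var/lib/ceph/osd/* mount confirm that ceph-node1 is running no OSD daemon and has no mounted data partition. A plausible recovery sequence from the admin node, assuming /dev/vdb holds no data worth keeping (a sketch, not a guaranteed fix):

    ceph-deploy gatherkeys ceph-mon1           # refresh bootstrap keyrings from a monitor
    ceph-deploy disk zap ceph-node1:/dev/vdb   # destroys everything on /dev/vdb
    ceph-deploy osd prepare ceph-node1:/dev/vdb
    ceph-deploy osd activate ceph-node1:/dev/vdb1

Once activation succeeds, "ceph status" should show the OSD up and in, and the placement groups should leave the creating state.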