[root@ceph-admin ceph]# ceph-deploy disk zap ceph-node1:vdb
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk zap ceph-node1:vdb
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][INFO ] Running command: sgdisk --zap-all --clear --mbrtogpt -- /dev/vdb
[ceph-node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph-node1][DEBUG ] other utilities.
[ceph-node1][DEBUG ] The operation has completed successfully.
Unhandled exception in thread started by
Error in sys.excepthook:
Original exception was:
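The truncated exception lines above come from ceph-deploy's own teardown, not from sgdisk; the zap itself reported success. If in doubt, the wiped partition table can be confirmed with a quick check, sketched here under the assumption of ceph-deploy 1.3.x on the admin node and sgdisk on ceph-node1 (both already in use above):

# From the admin node: list the disks/partitions ceph-deploy can see
ceph-deploy disk list ceph-node1

# Or directly on ceph-node1: print the (now empty) GPT on the zapped device
sgdisk -p /dev/vdb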
[root@ceph-admin ceph]#
[root@ceph-admin ceph]# ceph-deploy osd create ceph-node1:vdb
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy osd create ceph-node1:vdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node1:/dev/vdb:
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node1
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node1][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node1 disk /dev/vdb journal None activate True
[ceph-node1][INFO ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/vdb
[ceph-node1][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/vdb
[ceph-node1][WARNIN] Warning: WARNING: the kernel failed to re-read the partition table on /dev/vdb (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
[ceph-node1][WARNIN] Warning: WARNING: the kernel failed to re-read the partition table on /dev/vdb (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph-node1][DEBUG ] meta-data=/dev/vdb1              isize=2048   agcount=4, agsize=28771775 blks
[ceph-node1][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
[ceph-node1][DEBUG ] data     =                       bsize=4096   blocks=115087099, imaxpct=25
[ceph-node1][DEBUG ]          =                       sunit=0      swidth=0 blks
[ceph-node1][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[ceph-node1][DEBUG ] log      =internal log           bsize=4096   blocks=56194, version=2
[ceph-node1][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[ceph-node1][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph-node1][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.
Unhandled exception in thread started by <function run_and_release at 0x1769c80>
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 245, in run_and_release
    with self._running_lock:
  File "/usr/lib64/python2.6/threading.py", line 117, in acquire
    me = _get_ident()
TypeError: 'NoneType' object is not callable
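The `TypeError: 'NoneType' object is not callable` fires inside execnet's worker thread while the Python 2.6 interpreter is already shutting down, after ceph-deploy has printed "Host ceph-node1 is now ready for osd use.", so it is likely cosmetic rather than the cause of the missing OSD seen further down. To see what ceph-disk-prepare actually left on the disk, a minimal sketch to run on ceph-node1 (sgdisk and blkid are assumed available, as on a stock CentOS 6.5 Ceph node; the vdb1/vdb2 numbering is an assumption):

# Show the partitions ceph-disk-prepare created (a data and a colocated journal partition are expected)
sgdisk -p /dev/vdb

# Show filesystem types/labels; the data partition should report xfs
blkid /dev/vdb1 /dev/vdb2   # adjust partition numbers to what sgdisk reports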
[root@ceph-admin ceph]#
[root@ceph-node1 ceph]# service ceph status
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda2       8.9G  1.8G  6.7G  21% /
tmpfs            15G     0   15G   0% /dev/shm
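`service ceph status` prints nothing and `df -h` shows no mount under /var/lib/ceph/osd, which is what a successfully activated OSD would normally add. A minimal check on ceph-node1 (paths assume the default cluster name "ceph"):

# An activated OSD normally shows up as an xfs mount under /var/lib/ceph/osd/
mount | grep /var/lib/ceph/osd
ls /var/lib/ceph/osd/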
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph status
    cluster 44ab50ea-6393-45d2-bef6-a56235716cf5
     health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds; clock skew detected on mon.ceph-mon2
     monmap e7: 3 mons at {ceph-mon1=192.168.1.38:6789/0,ceph-mon2=192.168.1.33:6789/0,ceph-mon3=192.168.1.31:6789/0}, election epoch 496, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 192 creating
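Besides the missing OSDs, the health line reports clock skew on mon.ceph-mon2, which is worth fixing independently, since the monitors may not settle even after OSDs join. A rough sketch for CentOS 6 using ntpd/ntpdate (the NTP server below is a placeholder; substitute your site's time source):

# On ceph-mon2 (and ideally every monitor): one-shot time sync, then keep ntpd running
service ntpd stop
ntpdate pool.ntp.org   # placeholder NTP server
service ntpd start

# Re-check the health detail from any node
ceph health detail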
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph osd tree
# id    weight  type name       up/down reweight
-1      0       root default
[root@ceph-node1 ceph]#
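Putting the pieces together: `ceph-deploy osd create` prepared /dev/vdb, but the kernel could not re-read the partition table ("Device or resource busy" warnings above), so udev never saw the new partitions and the OSD was never activated. That matches the empty `service ceph status`, the missing mount in `df -h`, and the empty osd tree. A hedged recovery sketch; partition numbers are assumptions (/dev/vdb1 should be the data partition ceph-disk-prepare created), and one of the two activation commands is enough:

# On ceph-node1: ask the kernel to re-read the partition table (or simply reboot the node)
partprobe /dev/vdb

# Activate the prepared OSD from the admin node...
ceph-deploy osd activate ceph-node1:/dev/vdb1

# ...or directly on ceph-node1 (older packages ship this as ceph-disk-activate)
ceph-disk activate /dev/vdb1

# Verify the OSD registers
ceph osd tree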