[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ]  /dev/vda1 swap, swap
[ceph-node1][DEBUG ]  /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc other, unknown
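
In the ceph-disk list output, "other, unknown" just means no recognized partition table or filesystem: /dev/vdb and /dev/vdc are the two blank virtio disks about to become OSDs. Since disk zap is irreversible, it is worth double-checking the device names first; a minimal sanity check (blkid ships with util-linux-ng on CentOS 6):

    # Confirm the targets before zapping; blank disks should print nothing.
    ls -l /dev/vdb /dev/vdc
    blkid /dev/vdb /dev/vdc
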
[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node1:/dev/vdb
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk zap ceph-node1:/dev/vdb
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][INFO ] Running command: sgdisk --zap-all --clear --mbrtogpt -- /dev/vdb
[ceph-node1][DEBUG ] Creating new GPT entries.
[ceph-node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph-node1][DEBUG ] other utilities.
[ceph-node1][DEBUG ] The operation has completed successfully.
Unhandled exception in thread started by
Error in sys.excepthook:
Original exception was:
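
The three truncated lines above are the same execnet shutdown race that shows up with a full traceback later in this session; the zap itself finished (see "The operation has completed successfully"). To confirm the wipe independently, assuming gdisk and parted are installed:

    # Both should now report an empty GPT with no partitions on /dev/vdb.
    sgdisk --print /dev/vdb
    parted -s /dev/vdb print
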
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ]  /dev/vda1 swap, swap
[ceph-node1][DEBUG ]  /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc other, unknown
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node1:/dev/vdb
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk zap ceph-node1:/dev/vdb
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][INFO ] Running command: sgdisk --zap-all --clear --mbrtogpt -- /dev/vdb
[ceph-node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph-node1][DEBUG ] other utilities.
[ceph-node1][DEBUG ] The operation has completed successfully.
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ]  /dev/vda1 swap, swap
[ceph-node1][DEBUG ]  /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc other, unknown
Unhandled exception in thread started by <function run_and_release at 0x9b3758>
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 245, in run_and_release
    with self._running_lock:
  File "/usr/lib64/python2.6/threading.py", line 117, in acquire
    me = _get_ident()
TypeError: 'NoneType' object is not callable
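
This traceback is teardown noise, not a failure of the command: ceph-deploy 1.3.x bundles execnet, and a leftover gateway thread can still be acquiring a lock while the Python 2.6 interpreter is clearing module globals to None, at which point threading's _get_ident() is no longer callable. The listing above completed before the crash. The usual remedy is simply a newer ceph-deploy; a sketch, assuming the Ceph repo is already configured on this node:

    # Cosmetic bug in the bundled execnet; newer ceph-deploy releases avoid it.
    yum -y update ceph-deploy
    ceph-deploy --version
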
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ]  /dev/vda1 swap, swap
[ceph-node1][DEBUG ]  /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc other, unknown
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node1:/dev/vdc
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk zap ceph-node1:/dev/vdc
[ceph_deploy.osd][DEBUG ] zapping /dev/vdc on ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][INFO ] Running command: sgdisk --zap-all --clear --mbrtogpt -- /dev/vdc
[ceph-node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph-node1][DEBUG ] other utilities.
[ceph-node1][DEBUG ] The operation has completed successfully.
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ]  /dev/vda1 swap, swap
[ceph-node1][DEBUG ]  /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc other, unknown
Unhandled exception in thread started by <function run_and_release at 0x24db758>
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 245, in run_and_release
    with self._running_lock:
  File "/usr/lib64/python2.6/threading.py", line 117, in acquire
    me = _get_ident()
TypeError: 'NoneType' object is not callable
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ]  /dev/vda1 swap, swap
[ceph-node1][DEBUG ]  /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc other, unknown
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# parted /dev/vdb
GNU Parted 2.1
Using /dev/vdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 472GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

(parted) q
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# fdisk -l /dev/vdb

WARNING: GPT (GUID Partition Table) detected on '/dev/vdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/vdb: 472.4 GB, 472446402560 bytes
256 heads, 63 sectors/track, 57213 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1               1       57214   461373439+  ee  GPT
[root@ceph-node1 ceph]# fdisk -l /dev/vdc

WARNING: GPT (GUID Partition Table) detected on '/dev/vdc'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/vdc: 483.2 GB, 483183820800 bytes
255 heads, 63 sectors/track, 58743 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vdc1               1       58744   471859199+  ee  GPT
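
The fdisk warnings are expected on CentOS 6: this fdisk predates GPT, so all it can see is the protective MBR, a single dummy partition of type 0xee spanning the whole disk. The real (currently empty) GPT is only visible to GPT-aware tools, e.g. sgdisk from the gdisk package, assuming it is installed:

    # GPT-aware view of the same disks; the 0xee protective MBR is hidden.
    sgdisk --print /dev/vdb
    sgdisk --print /dev/vdc
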
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# parted -s /dev/vdc print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 483GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

[root@ceph-node1 ceph]# parted -s /dev/vdb print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 472GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags
[root@ceph-node1 ceph]#
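
The next step, ceph-disk prepare, partitions the disk, creates a journal partition, and formats the data partition with XFS. With a single device argument the journal is colocated on the same disk; a second device argument would put it on a separate device. Illustrative forms only (/dev/vdd is a hypothetical journal device, not present on this host):

    ceph-disk prepare --fs-type xfs --cluster ceph /dev/vdc            # colocated journal, as run below
    ceph-disk prepare --fs-type xfs --cluster ceph /dev/vdc /dev/vdd   # external journal (hypothetical)
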
[root@ceph-node1 ceph]# ceph-disk prepare --fs-type xfs --cluster ceph /dev/vdc
INFO:ceph-disk:Will colocate journal with data on /dev/vdc
The operation has completed successfully.
The operation has completed successfully.
mkfs.xfs: /dev/vdc1 contains a mounted filesystem
Usage: mkfs.xfs
/* blocksize */         [-b log=n|size=num]
/* data subvol */       [-d agcount=n,agsize=n,file,name=xxx,size=num,
                            (sunit=value,swidth=value|su=num,sw=num),
                            sectlog=n|sectsize=num
/* inode size */        [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
                            projid32bit=0|1]
/* log subvol */        [-l agnum=n,internal,size=num,logdev=xxx,version=n
                            sunit=value|su=num,sectlog=n|sectsize=num,
                            lazy-count=0|1]
/* label */             [-L label (maximum 12 characters)]
/* naming */            [-n log=n|size=num,version=2|ci]
/* prototype file */    [-p fname]
/* quiet */             [-q]
/* realtime subvol */   [-r extsize=num,size=num,rtdev=xxx]
/* sectorsize */        [-s log=n|size=num]
/* version */           [-V]
        devicename
<devicename> is required unless -d name=xxx is given.
<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
      xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
<value> is xxx (512 byte blocks).
ceph-disk: Error: Command '['mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/vdc1']' returned non-zero exit status 1
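
The partitioning itself succeeded (both "completed successfully" lines come from sgdisk), but mkfs.xfs then refused to format /dev/vdc1 because something already had it mounted, most plausibly a udev-triggered activation of a filesystem left over from an earlier attempt. A recovery sketch; the exact mountpoint is an assumption to be read off the mount output:

    # Find and release the stale mount, then retry the prepare.
    mount | grep vdc
    umount /dev/vdc1            # or umount the mountpoint shown above
    partprobe /dev/vdc          # re-read the partition table
    ceph-disk prepare --fs-type xfs --cluster ceph /dev/vdc
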
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ]  /dev/vda1 swap, swap
[ceph-node1][DEBUG ]  /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc :
[ceph-node1][DEBUG ]  /dev/vdc1 ceph data, active, cluster ceph, osd.3
[ceph-node1][DEBUG ]  /dev/vdc2 ceph journal
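
Despite the mkfs failure above, vdc now reports a ceph data partition that is "active" as osd.3. That is more consistent with stale state from an earlier prepare being auto-mounted than with the failed run, and it is exactly what derails the activate below. Some ways to inspect the mismatch (paths follow the default OSD layout):

    mount | grep ceph                       # where is vdc1 mounted?
    cat /var/lib/ceph/osd/ceph-3/whoami     # OSD id recorded on disk
    ceph osd dump | grep osd.3              # prints nothing if the cluster has no osd.3
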
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# fdisk -l /dev/vdc

WARNING: GPT (GUID Partition Table) detected on '/dev/vdc'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/vdc: 483.2 GB, 483183820800 bytes
255 heads, 63 sectors/track, 58743 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vdc1               1       58744   471859199+  ee  GPT
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# parted -s /dev/vdc print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 483GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End    Size   File system  Name          Flags
 2      1049kB  105MB  104MB               ceph journal
 1      106MB   483GB  483GB  xfs          ceph data
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-disk activate --activate-key /var/lib/ceph/bootstrap-osd/ceph.keyring /dev/vdc1
INFO:ceph-disk:ceph osd.3 already mounted in position; unmounting ours.
=== osd.3 ===
Error ENOENT: osd.3 does not exist. create it before updating the crush map
failed: 'timeout 10 /usr/bin/ceph --name=osd.3 --keyring=/var/lib/ceph/osd/ceph-3/keyring osd crush create-or-move -- 3 0.44 root=default host=ceph-node1 '
[root@ceph-node1 ceph]# ceph osd tree
# id    weight  type name       up/down reweight
-1      0       root default
-2      0       host ceph-node1
[root@ceph-node1 ceph]#
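
The empty tree confirms the diagnosis: the on-disk data directory claims osd.3, but no such OSD exists in the cluster map, so "osd crush create-or-move" has nothing to move and the activate fails with ENOENT. Assuming the data on /dev/vdc is disposable, the simplest recovery is to wipe the disk and provision it from scratch; one possible sequence:

    # Destructive: wipes /dev/vdc and creates a fresh OSD on it.
    umount /var/lib/ceph/osd/ceph-3                  # if still mounted
    ceph-deploy disk zap ceph-node1:/dev/vdc
    ceph-deploy osd create ceph-node1:/dev/vdc       # prepare + activate in one step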