- [root@node-211 ~]# cat /root/ceph.log
- 2014-07-29 17:06:55,527 [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy --overwrite-conf config pull node-218
- 2014-07-29 17:06:55,527 [ceph_deploy.config][DEBUG ] Checking node-218 for /etc/ceph/ceph.conf
- 2014-07-29 17:06:55,527 [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
- 2014-07-29 17:06:55,768 [ceph_deploy.config][DEBUG ] Got /etc/ceph/ceph.conf from node-218
- 2014-07-29 17:06:56,075 [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy gatherkeys node-218
- 2014-07-29 17:06:56,075 [ceph_deploy.gatherkeys][DEBUG ] Checking node-218 for /etc/ceph/ceph.client.admin.keyring
- 2014-07-29 17:06:56,076 [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
- 2014-07-29 17:06:56,305 [ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from node-218.
- 2014-07-29 17:06:56,306 [ceph_deploy.gatherkeys][DEBUG ] Checking node-218 for /var/lib/ceph/mon/ceph-{hostname}/keyring
- 2014-07-29 17:06:56,306 [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
- 2014-07-29 17:06:56,525 [ceph_deploy.gatherkeys][DEBUG ] Got ceph.mon.keyring key from node-218.
- 2014-07-29 17:06:56,525 [ceph_deploy.gatherkeys][DEBUG ] Checking node-218 for /var/lib/ceph/bootstrap-osd/ceph.keyring
- 2014-07-29 17:06:56,525 [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
- 2014-07-29 17:06:56,730 [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from node-218.
- 2014-07-29 17:06:56,730 [ceph_deploy.gatherkeys][DEBUG ] Checking node-218 for /var/lib/ceph/bootstrap-mds/ceph.keyring
- 2014-07-29 17:06:56,731 [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
- 2014-07-29 17:06:56,933 [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from node-218.
- 2014-07-29 17:06:57,302 [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy --overwrite-conf config push node-211
- 2014-07-29 17:06:57,302 [ceph_deploy.config][DEBUG ] Pushing config to node-211
- 2014-07-29 17:06:57,303 [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection without sudo
- 2014-07-29 17:07:32,788 [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy osd prepare node-211:/dev/sdb4
- 2014-07-29 17:07:32,788 [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node-211:/dev/sdb4:
- 2014-07-29 17:07:32,788 [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection without sudo
- 2014-07-29 17:07:32,950 [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
- 2014-07-29 17:07:32,951 [ceph_deploy.osd][DEBUG ] Deploying osd to node-211
- 2014-07-29 17:07:32,959 [node-211][INFO ] write cluster configuration to /etc/ceph/{cluster}.conf
- 2014-07-29 17:07:33,023 [node-211][INFO ] keyring file does not exist, creating one at: /var/lib/ceph/bootstrap-osd/ceph.keyring
- 2014-07-29 17:07:33,033 [node-211][INFO ] create mon keyring file
- 2014-07-29 17:07:33,068 [node-211][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
- 2014-07-29 17:07:33,122 [ceph_deploy.osd][DEBUG ] Preparing host node-211 disk /dev/sdb4 journal None activate False
- 2014-07-29 17:07:33,122 [node-211][INFO ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb4
- 2014-07-29 17:07:33,458 [node-211][ERROR ] Traceback (most recent call last):
- 2014-07-29 17:07:33,459 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/osd.py", line 126, in prepare_disk
- 2014-07-29 17:07:33,462 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 10, in inner
- 2014-07-29 17:07:33,463 [node-211][ERROR ] def inner(*args, **kwargs):
- 2014-07-29 17:07:33,464 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/util/wrappers.py", line 6, in remote_call
- 2014-07-29 17:07:33,465 [node-211][ERROR ] This allows us to only remote-execute the actual calls, not whole functions.
- 2014-07-29 17:07:33,467 [node-211][ERROR ] File "/usr/lib64/python2.6/subprocess.py", line 505, in check_call
- 2014-07-29 17:07:33,468 [node-211][ERROR ] raise CalledProcessError(retcode, cmd)
- 2014-07-29 17:07:33,469 [node-211][ERROR ] CalledProcessError: Command '['ceph-disk-prepare', '--fs-type', 'xfs', '--cluster', 'ceph', '--', '/dev/sdb4']' returned non-zero exit status 1
- 2014-07-29 17:07:33,474 [node-211][ERROR ] mkfs.xfs: /dev/sdb4 contains a mounted filesystem
- 2014-07-29 17:07:33,475 [node-211][ERROR ] Usage: mkfs.xfs
- 2014-07-29 17:07:33,475 [node-211][ERROR ] /* blocksize */ [-b log=n|size=num]
- 2014-07-29 17:07:33,475 [node-211][ERROR ] /* data subvol */ [-d agcount=n,agsize=n,file,name=xxx,size=num,
- 2014-07-29 17:07:33,475 [node-211][ERROR ] (sunit=value,swidth=value|su=num,sw=num),
- 2014-07-29 17:07:33,475 [node-211][ERROR ] sectlog=n|sectsize=num
- 2014-07-29 17:07:33,475 [node-211][ERROR ] /* inode size */ [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
- 2014-07-29 17:07:33,475 [node-211][ERROR ] projid32bit=0|1]
- 2014-07-29 17:07:33,475 [node-211][ERROR ] /* log subvol */ [-l agnum=n,internal,size=num,logdev=xxx,version=n
- 2014-07-29 17:07:33,475 [node-211][ERROR ] sunit=value|su=num,sectlog=n|sectsize=num,
- 2014-07-29 17:07:33,475 [node-211][ERROR ] lazy-count=0|1]
- 2014-07-29 17:07:33,476 [node-211][ERROR ] /* label */ [-L label (maximum 12 characters)]
- 2014-07-29 17:07:33,476 [node-211][ERROR ] /* naming */ [-n log=n|size=num,version=2|ci]
- 2014-07-29 17:07:33,476 [node-211][ERROR ] /* prototype file */ [-p fname]
- 2014-07-29 17:07:33,476 [node-211][ERROR ] /* quiet */ [-q]
- 2014-07-29 17:07:33,476 [node-211][ERROR ] /* realtime subvol */ [-r extsize=num,size=num,rtdev=xxx]
- 2014-07-29 17:07:33,476 [node-211][ERROR ] /* sectorsize */ [-s log=n|size=num]
- 2014-07-29 17:07:33,476 [node-211][ERROR ] /* version */ [-V]
- 2014-07-29 17:07:33,476 [node-211][ERROR ] devicename
- 2014-07-29 17:07:33,476 [node-211][ERROR ] <devicename> is required unless -d name=xxx is given.
- 2014-07-29 17:07:33,476 [node-211][ERROR ] <num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
- 2014-07-29 17:07:33,477 [node-211][ERROR ] xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
- 2014-07-29 17:07:33,477 [node-211][ERROR ] <value> is xxx (512 byte blocks).
- 2014-07-29 17:07:33,477 [node-211][ERROR ] ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/sdb4']' returned non-zero exit status 1
- 2014-07-29 17:07:33,480 [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb4
- 2014-07-29 17:07:33,481 [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
- 2014-07-29 17:07:34,725 [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy osd prepare node-211:/dev/sdb4
- 2014-07-29 17:07:34,725 [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node-211:/dev/sdb4:
- 2014-07-29 17:07:34,726 [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection without sudo
- 2014-07-29 17:07:34,886 [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
- 2014-07-29 17:07:34,887 [ceph_deploy.osd][DEBUG ] Deploying osd to node-211
- 2014-07-29 17:07:34,895 [node-211][INFO ] write cluster configuration to /etc/ceph/{cluster}.conf
- 2014-07-29 17:07:34,982 [node-211][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
- 2014-07-29 17:07:35,044 [ceph_deploy.osd][DEBUG ] Preparing host node-211 disk /dev/sdb4 journal None activate False
- 2014-07-29 17:07:35,044 [node-211][INFO ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb4
- 2014-07-29 17:07:35,313 [node-211][ERROR ] Traceback (most recent call last):
- 2014-07-29 17:07:35,313 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/osd.py", line 126, in prepare_disk
- 2014-07-29 17:07:35,316 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 10, in inner
- 2014-07-29 17:07:35,317 [node-211][ERROR ] def inner(*args, **kwargs):
- 2014-07-29 17:07:35,318 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/util/wrappers.py", line 6, in remote_call
- 2014-07-29 17:07:35,319 [node-211][ERROR ] This allows us to only remote-execute the actual calls, not whole functions.
- 2014-07-29 17:07:35,320 [node-211][ERROR ] File "/usr/lib64/python2.6/subprocess.py", line 505, in check_call
- 2014-07-29 17:07:35,321 [node-211][ERROR ] raise CalledProcessError(retcode, cmd)
- 2014-07-29 17:07:35,322 [node-211][ERROR ] CalledProcessError: Command '['ceph-disk-prepare', '--fs-type', 'xfs', '--cluster', 'ceph', '--', '/dev/sdb4']' returned non-zero exit status 1
- 2014-07-29 17:07:35,328 [node-211][ERROR ] mkfs.xfs: /dev/sdb4 contains a mounted filesystem
- 2014-07-29 17:07:35,328 [node-211][ERROR ] Usage: mkfs.xfs
- 2014-07-29 17:07:35,328 [node-211][ERROR ] /* blocksize */ [-b log=n|size=num]
- 2014-07-29 17:07:35,328 [node-211][ERROR ] /* data subvol */ [-d agcount=n,agsize=n,file,name=xxx,size=num,
- 2014-07-29 17:07:35,329 [node-211][ERROR ] (sunit=value,swidth=value|su=num,sw=num),
- 2014-07-29 17:07:35,329 [node-211][ERROR ] sectlog=n|sectsize=num
- 2014-07-29 17:07:35,329 [node-211][ERROR ] /* inode size */ [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
- 2014-07-29 17:07:35,329 [node-211][ERROR ] projid32bit=0|1]
- 2014-07-29 17:07:35,329 [node-211][ERROR ] /* log subvol */ [-l agnum=n,internal,size=num,logdev=xxx,version=n
- 2014-07-29 17:07:35,329 [node-211][ERROR ] sunit=value|su=num,sectlog=n|sectsize=num,
- 2014-07-29 17:07:35,329 [node-211][ERROR ] lazy-count=0|1]
- 2014-07-29 17:07:35,329 [node-211][ERROR ] /* label */ [-L label (maximum 12 characters)]
- 2014-07-29 17:07:35,329 [node-211][ERROR ] /* naming */ [-n log=n|size=num,version=2|ci]
- 2014-07-29 17:07:35,329 [node-211][ERROR ] /* prototype file */ [-p fname]
- 2014-07-29 17:07:35,329 [node-211][ERROR ] /* quiet */ [-q]
- 2014-07-29 17:07:35,330 [node-211][ERROR ] /* realtime subvol */ [-r extsize=num,size=num,rtdev=xxx]
- 2014-07-29 17:07:35,330 [node-211][ERROR ] /* sectorsize */ [-s log=n|size=num]
- 2014-07-29 17:07:35,330 [node-211][ERROR ] /* version */ [-V]
- 2014-07-29 17:07:35,330 [node-211][ERROR ] devicename
- 2014-07-29 17:07:35,330 [node-211][ERROR ] <devicename> is required unless -d name=xxx is given.
- 2014-07-29 17:07:35,330 [node-211][ERROR ] <num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
- 2014-07-29 17:07:35,330 [node-211][ERROR ] xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
- 2014-07-29 17:07:35,330 [node-211][ERROR ] <value> is xxx (512 byte blocks).
- 2014-07-29 17:07:35,330 [node-211][ERROR ] ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/sdb4']' returned non-zero exit status 1
- 2014-07-29 17:07:35,334 [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb4
- 2014-07-29 17:07:35,334 [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
- 2014-07-29 17:08:10,063 [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy osd prepare node-211:/dev/sdb4
- 2014-07-29 17:08:10,063 [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node-211:/dev/sdb4:
- 2014-07-29 17:08:10,063 [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection without sudo
- 2014-07-29 17:08:10,223 [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
- 2014-07-29 17:08:10,224 [ceph_deploy.osd][DEBUG ] Deploying osd to node-211
- 2014-07-29 17:08:10,232 [node-211][INFO ] write cluster configuration to /etc/ceph/{cluster}.conf
- 2014-07-29 17:08:10,325 [node-211][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
- 2014-07-29 17:08:10,377 [ceph_deploy.osd][DEBUG ] Preparing host node-211 disk /dev/sdb4 journal None activate False
- 2014-07-29 17:08:10,378 [node-211][INFO ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb4
- 2014-07-29 17:08:10,669 [node-211][ERROR ] Traceback (most recent call last):
- 2014-07-29 17:08:10,670 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/osd.py", line 126, in prepare_disk
- 2014-07-29 17:08:10,673 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 10, in inner
- 2014-07-29 17:08:10,674 [node-211][ERROR ] def inner(*args, **kwargs):
- 2014-07-29 17:08:10,674 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/util/wrappers.py", line 6, in remote_call
- 2014-07-29 17:08:10,675 [node-211][ERROR ] This allows us to only remote-execute the actual calls, not whole functions.
- 2014-07-29 17:08:10,676 [node-211][ERROR ] File "/usr/lib64/python2.6/subprocess.py", line 505, in check_call
- 2014-07-29 17:08:10,677 [node-211][ERROR ] raise CalledProcessError(retcode, cmd)
- 2014-07-29 17:08:10,678 [node-211][ERROR ] CalledProcessError: Command '['ceph-disk-prepare', '--fs-type', 'xfs', '--cluster', 'ceph', '--', '/dev/sdb4']' returned non-zero exit status 1
- 2014-07-29 17:08:10,684 [node-211][ERROR ] mkfs.xfs: /dev/sdb4 contains a mounted filesystem
- 2014-07-29 17:08:10,685 [node-211][ERROR ] Usage: mkfs.xfs
- 2014-07-29 17:08:10,685 [node-211][ERROR ] /* blocksize */ [-b log=n|size=num]
- 2014-07-29 17:08:10,685 [node-211][ERROR ] /* data subvol */ [-d agcount=n,agsize=n,file,name=xxx,size=num,
- 2014-07-29 17:08:10,685 [node-211][ERROR ] (sunit=value,swidth=value|su=num,sw=num),
- 2014-07-29 17:08:10,685 [node-211][ERROR ] sectlog=n|sectsize=num
- 2014-07-29 17:08:10,685 [node-211][ERROR ] /* inode size */ [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
- 2014-07-29 17:08:10,685 [node-211][ERROR ] projid32bit=0|1]
- 2014-07-29 17:08:10,685 [node-211][ERROR ] /* log subvol */ [-l agnum=n,internal,size=num,logdev=xxx,version=n
- 2014-07-29 17:08:10,685 [node-211][ERROR ] sunit=value|su=num,sectlog=n|sectsize=num,
- 2014-07-29 17:08:10,685 [node-211][ERROR ] lazy-count=0|1]
- 2014-07-29 17:08:10,686 [node-211][ERROR ] /* label */ [-L label (maximum 12 characters)]
- 2014-07-29 17:08:10,686 [node-211][ERROR ] /* naming */ [-n log=n|size=num,version=2|ci]
- 2014-07-29 17:08:10,686 [node-211][ERROR ] /* prototype file */ [-p fname]
- 2014-07-29 17:08:10,686 [node-211][ERROR ] /* quiet */ [-q]
- 2014-07-29 17:08:10,686 [node-211][ERROR ] /* realtime subvol */ [-r extsize=num,size=num,rtdev=xxx]
- 2014-07-29 17:08:10,686 [node-211][ERROR ] /* sectorsize */ [-s log=n|size=num]
- 2014-07-29 17:08:10,686 [node-211][ERROR ] /* version */ [-V]
- 2014-07-29 17:08:10,686 [node-211][ERROR ] devicename
- 2014-07-29 17:08:10,686 [node-211][ERROR ] <devicename> is required unless -d name=xxx is given.
- 2014-07-29 17:08:10,686 [node-211][ERROR ] <num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
- 2014-07-29 17:08:10,686 [node-211][ERROR ] xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
- 2014-07-29 17:08:10,687 [node-211][ERROR ] <value> is xxx (512 byte blocks).
- 2014-07-29 17:08:10,687 [node-211][ERROR ] ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/sdb4']' returned non-zero exit status 1
- 2014-07-29 17:08:10,690 [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb4
- 2014-07-29 17:08:10,691 [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
- 2014-07-29 17:08:11,942 [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy osd prepare node-211:/dev/sdb4
- 2014-07-29 17:08:11,942 [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node-211:/dev/sdb4:
- 2014-07-29 17:08:11,942 [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection without sudo
- 2014-07-29 17:08:12,103 [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
- 2014-07-29 17:08:12,104 [ceph_deploy.osd][DEBUG ] Deploying osd to node-211
- 2014-07-29 17:08:12,111 [node-211][INFO ] write cluster configuration to /etc/ceph/{cluster}.conf
- 2014-07-29 17:08:12,200 [node-211][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
- 2014-07-29 17:08:12,253 [ceph_deploy.osd][DEBUG ] Preparing host node-211 disk /dev/sdb4 journal None activate False
- 2014-07-29 17:08:12,253 [node-211][INFO ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb4
- 2014-07-29 17:08:12,529 [node-211][ERROR ] Traceback (most recent call last):
- 2014-07-29 17:08:12,530 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/osd.py", line 126, in prepare_disk
- 2014-07-29 17:08:12,533 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 10, in inner
- 2014-07-29 17:08:12,534 [node-211][ERROR ] def inner(*args, **kwargs):
- 2014-07-29 17:08:12,535 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/util/wrappers.py", line 6, in remote_call
- 2014-07-29 17:08:12,536 [node-211][ERROR ] This allows us to only remote-execute the actual calls, not whole functions.
- 2014-07-29 17:08:12,537 [node-211][ERROR ] File "/usr/lib64/python2.6/subprocess.py", line 505, in check_call
- 2014-07-29 17:08:12,539 [node-211][ERROR ] raise CalledProcessError(retcode, cmd)
- 2014-07-29 17:08:12,540 [node-211][ERROR ] CalledProcessError: Command '['ceph-disk-prepare', '--fs-type', 'xfs', '--cluster', 'ceph', '--', '/dev/sdb4']' returned non-zero exit status 1
- 2014-07-29 17:08:12,545 [node-211][ERROR ] mkfs.xfs: /dev/sdb4 contains a mounted filesystem
- 2014-07-29 17:08:12,545 [node-211][ERROR ] Usage: mkfs.xfs
- 2014-07-29 17:08:12,546 [node-211][ERROR ] /* blocksize */ [-b log=n|size=num]
- 2014-07-29 17:08:12,546 [node-211][ERROR ] /* data subvol */ [-d agcount=n,agsize=n,file,name=xxx,size=num,
- 2014-07-29 17:08:12,546 [node-211][ERROR ] (sunit=value,swidth=value|su=num,sw=num),
- 2014-07-29 17:08:12,546 [node-211][ERROR ] sectlog=n|sectsize=num
- 2014-07-29 17:08:12,546 [node-211][ERROR ] /* inode size */ [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
- 2014-07-29 17:08:12,546 [node-211][ERROR ] projid32bit=0|1]
- 2014-07-29 17:08:12,546 [node-211][ERROR ] /* log subvol */ [-l agnum=n,internal,size=num,logdev=xxx,version=n
- 2014-07-29 17:08:12,546 [node-211][ERROR ] sunit=value|su=num,sectlog=n|sectsize=num,
- 2014-07-29 17:08:12,546 [node-211][ERROR ] lazy-count=0|1]
- 2014-07-29 17:08:12,546 [node-211][ERROR ] /* label */ [-L label (maximum 12 characters)]
- 2014-07-29 17:08:12,546 [node-211][ERROR ] /* naming */ [-n log=n|size=num,version=2|ci]
- 2014-07-29 17:08:12,547 [node-211][ERROR ] /* prototype file */ [-p fname]
- 2014-07-29 17:08:12,547 [node-211][ERROR ] /* quiet */ [-q]
- 2014-07-29 17:08:12,547 [node-211][ERROR ] /* realtime subvol */ [-r extsize=num,size=num,rtdev=xxx]
- 2014-07-29 17:08:12,547 [node-211][ERROR ] /* sectorsize */ [-s log=n|size=num]
- 2014-07-29 17:08:12,547 [node-211][ERROR ] /* version */ [-V]
- 2014-07-29 17:08:12,547 [node-211][ERROR ] devicename
- 2014-07-29 17:08:12,547 [node-211][ERROR ] <devicename> is required unless -d name=xxx is given.
- 2014-07-29 17:08:12,547 [node-211][ERROR ] <num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
- 2014-07-29 17:08:12,547 [node-211][ERROR ] xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
- 2014-07-29 17:08:12,547 [node-211][ERROR ] <value> is xxx (512 byte blocks).
- 2014-07-29 17:08:12,547 [node-211][ERROR ] ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/sdb4']' returned non-zero exit status 1
- 2014-07-29 17:08:12,551 [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb4
- 2014-07-29 17:08:12,551 [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
- 2014-07-29 17:08:46,037 [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy osd prepare node-211:/dev/sdb4
- 2014-07-29 17:08:46,037 [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node-211:/dev/sdb4:
- 2014-07-29 17:08:46,037 [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection without sudo
- 2014-07-29 17:08:46,198 [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
- 2014-07-29 17:08:46,198 [ceph_deploy.osd][DEBUG ] Deploying osd to node-211
- 2014-07-29 17:08:46,206 [node-211][INFO ] write cluster configuration to /etc/ceph/{cluster}.conf
- 2014-07-29 17:08:46,294 [node-211][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
- 2014-07-29 17:08:46,349 [ceph_deploy.osd][DEBUG ] Preparing host node-211 disk /dev/sdb4 journal None activate False
- 2014-07-29 17:08:46,350 [node-211][INFO ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb4
- 2014-07-29 17:08:46,616 [node-211][ERROR ] Traceback (most recent call last):
- 2014-07-29 17:08:46,616 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/osd.py", line 126, in prepare_disk
- 2014-07-29 17:08:46,620 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 10, in inner
- 2014-07-29 17:08:46,621 [node-211][ERROR ] def inner(*args, **kwargs):
- 2014-07-29 17:08:46,622 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/util/wrappers.py", line 6, in remote_call
- 2014-07-29 17:08:46,623 [node-211][ERROR ] This allows us to only remote-execute the actual calls, not whole functions.
- 2014-07-29 17:08:46,625 [node-211][ERROR ] File "/usr/lib64/python2.6/subprocess.py", line 505, in check_call
- 2014-07-29 17:08:46,626 [node-211][ERROR ] raise CalledProcessError(retcode, cmd)
- 2014-07-29 17:08:46,627 [node-211][ERROR ] CalledProcessError: Command '['ceph-disk-prepare', '--fs-type', 'xfs', '--cluster', 'ceph', '--', '/dev/sdb4']' returned non-zero exit status 1
- 2014-07-29 17:08:46,632 [node-211][ERROR ] mkfs.xfs: /dev/sdb4 contains a mounted filesystem
- 2014-07-29 17:08:46,633 [node-211][ERROR ] Usage: mkfs.xfs
- 2014-07-29 17:08:46,633 [node-211][ERROR ] /* blocksize */ [-b log=n|size=num]
- 2014-07-29 17:08:46,633 [node-211][ERROR ] /* data subvol */ [-d agcount=n,agsize=n,file,name=xxx,size=num,
- 2014-07-29 17:08:46,633 [node-211][ERROR ] (sunit=value,swidth=value|su=num,sw=num),
- 2014-07-29 17:08:46,633 [node-211][ERROR ] sectlog=n|sectsize=num
- 2014-07-29 17:08:46,633 [node-211][ERROR ] /* inode size */ [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
- 2014-07-29 17:08:46,633 [node-211][ERROR ] projid32bit=0|1]
- 2014-07-29 17:08:46,633 [node-211][ERROR ] /* log subvol */ [-l agnum=n,internal,size=num,logdev=xxx,version=n
- 2014-07-29 17:08:46,633 [node-211][ERROR ] sunit=value|su=num,sectlog=n|sectsize=num,
- 2014-07-29 17:08:46,633 [node-211][ERROR ] lazy-count=0|1]
- 2014-07-29 17:08:46,634 [node-211][ERROR ] /* label */ [-L label (maximum 12 characters)]
- 2014-07-29 17:08:46,634 [node-211][ERROR ] /* naming */ [-n log=n|size=num,version=2|ci]
- 2014-07-29 17:08:46,634 [node-211][ERROR ] /* prototype file */ [-p fname]
- 2014-07-29 17:08:46,634 [node-211][ERROR ] /* quiet */ [-q]
- 2014-07-29 17:08:46,634 [node-211][ERROR ] /* realtime subvol */ [-r extsize=num,size=num,rtdev=xxx]
- 2014-07-29 17:08:46,634 [node-211][ERROR ] /* sectorsize */ [-s log=n|size=num]
- 2014-07-29 17:08:46,634 [node-211][ERROR ] /* version */ [-V]
- 2014-07-29 17:08:46,634 [node-211][ERROR ] devicename
- 2014-07-29 17:08:46,634 [node-211][ERROR ] <devicename> is required unless -d name=xxx is given.
- 2014-07-29 17:08:46,634 [node-211][ERROR ] <num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
- 2014-07-29 17:08:46,634 [node-211][ERROR ] xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
- 2014-07-29 17:08:46,635 [node-211][ERROR ] <value> is xxx (512 byte blocks).
- 2014-07-29 17:08:46,635 [node-211][ERROR ] ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/sdb4']' returned non-zero exit status 1
- 2014-07-29 17:08:46,638 [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb4
- 2014-07-29 17:08:46,639 [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
- 2014-07-29 17:08:47,893 [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy osd prepare node-211:/dev/sdb4
- 2014-07-29 17:08:47,894 [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node-211:/dev/sdb4:
- 2014-07-29 17:08:47,894 [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection without sudo
- 2014-07-29 17:08:48,055 [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
- 2014-07-29 17:08:48,055 [ceph_deploy.osd][DEBUG ] Deploying osd to node-211
- 2014-07-29 17:08:48,064 [node-211][INFO ] write cluster configuration to /etc/ceph/{cluster}.conf
- 2014-07-29 17:08:48,145 [node-211][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
- 2014-07-29 17:08:48,199 [ceph_deploy.osd][DEBUG ] Preparing host node-211 disk /dev/sdb4 journal None activate False
- 2014-07-29 17:08:48,199 [node-211][INFO ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb4
- 2014-07-29 17:08:48,485 [node-211][ERROR ] Traceback (most recent call last):
- 2014-07-29 17:08:48,485 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/osd.py", line 126, in prepare_disk
- 2014-07-29 17:08:48,489 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 10, in inner
- 2014-07-29 17:08:48,490 [node-211][ERROR ] def inner(*args, **kwargs):
- 2014-07-29 17:08:48,491 [node-211][ERROR ] File "/usr/lib/python2.6/site-packages/ceph_deploy/util/wrappers.py", line 6, in remote_call
- 2014-07-29 17:08:48,492 [node-211][ERROR ] This allows us to only remote-execute the actual calls, not whole functions.
- 2014-07-29 17:08:48,493 [node-211][ERROR ] File "/usr/lib64/python2.6/subprocess.py", line 505, in check_call
- 2014-07-29 17:08:48,494 [node-211][ERROR ] raise CalledProcessError(retcode, cmd)
- 2014-07-29 17:08:48,495 [node-211][ERROR ] CalledProcessError: Command '['ceph-disk-prepare', '--fs-type', 'xfs', '--cluster', 'ceph', '--', '/dev/sdb4']' returned non-zero exit status 1
- 2014-07-29 17:08:48,501 [node-211][ERROR ] mkfs.xfs: /dev/sdb4 contains a mounted filesystem
- 2014-07-29 17:08:48,501 [node-211][ERROR ] Usage: mkfs.xfs
- 2014-07-29 17:08:48,501 [node-211][ERROR ] /* blocksize */ [-b log=n|size=num]
- 2014-07-29 17:08:48,501 [node-211][ERROR ] /* data subvol */ [-d agcount=n,agsize=n,file,name=xxx,size=num,
- 2014-07-29 17:08:48,501 [node-211][ERROR ] (sunit=value,swidth=value|su=num,sw=num),
- 2014-07-29 17:08:48,501 [node-211][ERROR ] sectlog=n|sectsize=num
- 2014-07-29 17:08:48,501 [node-211][ERROR ] /* inode size */ [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
- 2014-07-29 17:08:48,502 [node-211][ERROR ] projid32bit=0|1]
- 2014-07-29 17:08:48,502 [node-211][ERROR ] /* log subvol */ [-l agnum=n,internal,size=num,logdev=xxx,version=n
- 2014-07-29 17:08:48,502 [node-211][ERROR ] sunit=value|su=num,sectlog=n|sectsize=num,
- 2014-07-29 17:08:48,502 [node-211][ERROR ] lazy-count=0|1]
- 2014-07-29 17:08:48,502 [node-211][ERROR ] /* label */ [-L label (maximum 12 characters)]
- 2014-07-29 17:08:48,502 [node-211][ERROR ] /* naming */ [-n log=n|size=num,version=2|ci]
- 2014-07-29 17:08:48,502 [node-211][ERROR ] /* prototype file */ [-p fname]
- 2014-07-29 17:08:48,502 [node-211][ERROR ] /* quiet */ [-q]
- 2014-07-29 17:08:48,502 [node-211][ERROR ] /* realtime subvol */ [-r extsize=num,size=num,rtdev=xxx]
- 2014-07-29 17:08:48,502 [node-211][ERROR ] /* sectorsize */ [-s log=n|size=num]
- 2014-07-29 17:08:48,502 [node-211][ERROR ] /* version */ [-V]
- 2014-07-29 17:08:48,503 [node-211][ERROR ] devicename
- 2014-07-29 17:08:48,503 [node-211][ERROR ] <devicename> is required unless -d name=xxx is given.
- 2014-07-29 17:08:48,503 [node-211][ERROR ] <num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
- 2014-07-29 17:08:48,503 [node-211][ERROR ] xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
- 2014-07-29 17:08:48,503 [node-211][ERROR ] <value> is xxx (512 byte blocks).
- 2014-07-29 17:08:48,503 [node-211][ERROR ] ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/sdb4']' returned non-zero exit status 1
- 2014-07-29 17:08:48,507 [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb4
- 2014-07-29 17:08:48,507 [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
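Every attempt in this log fails for the same root cause, stated just above each usage dump: `mkfs.xfs: /dev/sdb4 contains a mounted filesystem`. `ceph-disk-prepare` shells out to `/sbin/mkfs -t xfs`, and mkfs.xfs refuses to format a partition that is currently mounted, so retrying `ceph-deploy osd prepare` without unmounting the device first can never succeed. A minimal sketch of the pre-flight check (the sample mount table below is made up for illustration; in practice you would read the live `/proc/mounts`):

```python
def is_mounted(device, mounts_text):
    """Return True if `device` appears as a mount source in
    /proc/mounts-style text (whitespace-separated fields,
    first field is the source device)."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if fields and fields[0] == device:
            return True
    return False

# In practice: mounts_text = open("/proc/mounts").read()
sample = (
    "/dev/sda1 / ext4 rw,relatime 0 0\n"
    "/dev/sdb4 /mnt/data xfs rw,noatime 0 0\n"
)
print(is_mounted("/dev/sdb4", sample))   # mounted: mkfs.xfs will refuse it
print(is_mounted("/dev/sdb3", sample))   # not mounted: safe to prepare
```

Once `/dev/sdb4` no longer appears in `/proc/mounts` (i.e. after `umount /dev/sdb4` on node-211), rerunning `ceph-deploy osd prepare node-211:/dev/sdb4` should get past the mkfs step rather than repeating the failure seen six times above.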