[ceph@ceph-admin01 openstack_eqsy3]$ ceph-deploy disk prepare kvsrv03:/dev/sdw
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.30): /usr/bin/ceph-deploy disk prepare kvsrv03:/dev/sdw
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('kvsrv03', '/dev/sdw', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fe99b14c7a0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7fe99b13dc08>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks kvsrv03:/dev/sdw:
[kvsrv03][DEBUG ] connection detected need for sudo
[kvsrv03][DEBUG ] connected to host: kvsrv03
[kvsrv03][DEBUG ] detect platform information from remote host
[kvsrv03][DEBUG ] detect machine type
[kvsrv03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to kvsrv03
[kvsrv03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host kvsrv03 disk /dev/sdw journal None activate False
[kvsrv03][INFO  ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdw
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[kvsrv03][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdw uuid path is /sys/dev/block/65:96/dm/uuid
[kvsrv03][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdw uuid path is /sys/dev/block/65:96/dm/uuid
[kvsrv03][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdw uuid path is /sys/dev/block/65:96/dm/uuid
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[kvsrv03][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdw uuid path is /sys/dev/block/65:96/dm/uuid
[kvsrv03][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdw
[kvsrv03][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdw uuid path is /sys/dev/block/65:96/dm/uuid
[kvsrv03][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdw uuid path is /sys/dev/block/65:96/dm/uuid
[kvsrv03][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdw
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:af5ece30-28c8-4edf-8f01-d7a0a7a892fa --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdw
[kvsrv03][DEBUG ] The operation has completed successfully.
[kvsrv03][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdw
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[kvsrv03][WARNIN] DEBUG:ceph-disk:Sleeping for 40 seconds!
[kvsrv03][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdw
[kvsrv03][WARNIN] Error: Error informing the kernel about modifications to partition /dev/sdw2 -- Device or resource busy. This means Linux won't know about any changes you made to /dev/sdw2 until you reboot -- so you shouldn't mount it or use it in any way before rebooting.
[kvsrv03][WARNIN] Error: Failed to add partition 2 (Device or resource busy)
[kvsrv03][WARNIN] Traceback (most recent call last):
[kvsrv03][WARNIN]   File "/sbin/ceph-disk", line 3578, in <module>
[kvsrv03][WARNIN]     main(sys.argv[1:])
[kvsrv03][WARNIN]   File "/sbin/ceph-disk", line 3532, in main
[kvsrv03][WARNIN]     args.func(args)
[kvsrv03][WARNIN]   File "/sbin/ceph-disk", line 1865, in main_prepare
[kvsrv03][WARNIN]     luks=luks
[kvsrv03][WARNIN]   File "/sbin/ceph-disk", line 1467, in prepare_journal
[kvsrv03][WARNIN]     return prepare_journal_dev(data, journal, journal_size, journal_uuid, journal_dm_keypath, cryptsetup_parameters, luks)
[kvsrv03][WARNIN]   File "/sbin/ceph-disk", line 1421, in prepare_journal_dev
[kvsrv03][WARNIN]     raise Error(e)
[kvsrv03][WARNIN] __main__.Error: Error: Command '['/sbin/partprobe', '/dev/sdw']' returned non-zero exit status 1
[kvsrv03][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdw
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
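The failure above is the partprobe step: sgdisk wrote the journal partition, but the kernel refused to reread the partition table because something still holds a reference to /dev/sdw (commonly a leftover mount, a stale device-mapper mapping, or remnants of a previous partition layout). Since ceph-disk already ran `udevadm settle` and slept 40 seconds, a persistent "Device or resource busy" usually points to an external holder rather than a timing issue. The sketch below shows one common way to inspect and clear the disk before retrying; it assumes /dev/sdw carries no data you need, because `--zap-all` and `disk zap` are destructive:

```shell
# On kvsrv03: check what still holds /dev/sdw (mounts, dm mappings, holders).
lsblk /dev/sdw
grep sdw /proc/mounts || true
ls /sys/block/sdw/holders/

# If a stale device-mapper mapping shows up as a holder, remove it
# ("mapping-name" is a placeholder, not taken from this log):
#   dmsetup ls
#   dmsetup remove mapping-name

# DESTRUCTIVE: wipe GPT/MBR structures so the disk starts clean, then
# ask the kernel to reread the (now empty) partition table.
sgdisk --zap-all /dev/sdw
partprobe /dev/sdw

# Then retry from the admin node:
#   ceph-deploy disk zap kvsrv03:/dev/sdw
#   ceph-deploy disk prepare kvsrv03:/dev/sdw
```

If the busy error survives even with no visible mounts or holders, a reboot of kvsrv03 (as the parted error message itself suggests) forces the kernel to drop the stale partition state before retrying.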