[root@ceph-admin ceph]# ceph-deploy -v osd activate ceph-node1:/dev/vdb:/dev/vdb1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy -v osd activate ceph-node1:/dev/vdb:/dev/vdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-node1:/dev/vdb:/dev/vdb1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-node1 disk /dev/vdb
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-node1][INFO ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/vdb
[ceph-node1][WARNIN] ceph-disk: Cannot discover filesystem type: device /dev/vdb: Line is truncated:
[ceph-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init sysvinit --mount /dev/vdb
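The first attempt fails because `ceph-disk-activate` ends up mounting the whole disk `/dev/vdb` rather than a prepared data partition, so it cannot discover a filesystem. A hedged sketch of the usual prepare-then-activate sequence (host and device names taken from the session above; `ceph-deploy` 1.3.x syntax assumed):

```shell
# Zap the disk first only if it carries stale partition data
# (WARNING: this destroys everything on /dev/vdb)
ceph-deploy disk zap ceph-node1:/dev/vdb

# "prepare" partitions the disk, creates the filesystem, and writes OSD
# metadata; with no journal argument the journal is co-located on the disk
ceph-deploy -v osd prepare ceph-node1:/dev/vdb

# "activate" then takes the data *partition* that prepare created,
# not the raw disk
ceph-deploy -v osd activate ceph-node1:/dev/vdb1
```

This matches the second attempt below, which passes `/dev/vdb1` and gets past the filesystem check before failing for a different reason.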
[root@ceph-admin ceph]# ceph-deploy -v osd activate ceph-node1:/dev/vdb1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy -v osd activate ceph-node1:/dev/vdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-node1:/dev/vdb1:
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-node1 disk /dev/vdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-node1][INFO ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/vdb1
[ceph-node1][WARNIN] 2014-02-18 10:49:14.971328 7fe911f56700 0 librados: client.bootstrap-osd authentication error (1) Operation not permitted
[ceph-node1][WARNIN] Error connecting to cluster: PermissionError
[ceph-node1][WARNIN] ERROR:ceph-disk:Failed to activate
[ceph-node1][WARNIN] ceph-disk: Error: ceph osd create failed: Command '/usr/bin/ceph' returned non-zero exit status 1:
[ceph-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init sysvinit --mount /dev/vdb1
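The `client.bootstrap-osd authentication error (1) Operation not permitted` suggests the bootstrap-osd keyring on ceph-node1 no longer matches the key the monitors hold (common after monitors are re-created). A hedged sketch of re-syncing the keys (monitor hostname taken from the quorum listed below; default keyring paths assumed):

```shell
# On the admin node: re-fetch the current cluster keys from a monitor.
# This overwrites any stale ceph.bootstrap-osd.keyring in the working dir.
ceph-deploy gatherkeys ceph-mon1

# On ceph-node1: the bootstrap key is read from this default path
cat /var/lib/ceph/bootstrap-osd/ceph.keyring

# Compare it with the monitors' view (needs the admin keyring); the two
# key values must match for ceph-disk-activate to authenticate
ceph auth get client.bootstrap-osd
```

If the keys differ, copying the freshly gathered `ceph.bootstrap-osd.keyring` into `/var/lib/ceph/bootstrap-osd/ceph.keyring` on the OSD node and re-running the activate step is the usual fix.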
[root@ceph-admin ceph]# ll
[root@ceph-node1 ceph]# ceph -c ./ceph.conf -k ceph.client.admin.keyring status
cluster 44ab50ea-6393-45d2-bef6-a56235716cf5
health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds; clock skew detected on mon.ceph-mon2
monmap e7: 3 mons at {ceph-mon1=192.168.1.38:6789/0,ceph-mon2=192.168.1.33:6789/0,ceph-mon3=192.168.1.31:6789/0}, election epoch 496, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
osdmap e1: 0 osds: 0 up, 0 in
pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
192 creating
[root@ceph-node1 ceph]#
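The health line also reports `clock skew detected on mon.ceph-mon2`: monitors refuse to stay healthy when their clocks drift apart beyond the allowed threshold. A hedged sketch of re-syncing time on each monitor host under CentOS 6 / sysvinit (the NTP server name is a placeholder for your own time source):

```shell
# Run on each monitor host (ceph-mon1, ceph-mon2, ceph-mon3):
service ntpd stop
ntpdate pool.ntp.org   # one-shot sync; substitute a local NTP server
service ntpd start
chkconfig ntpd on      # keep ntpd enabled across reboots
```

Once the clocks converge, the skew warning should clear from `ceph status` on its own.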
[root@ceph-node1 ceph]# service ceph status
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda2 8.9G 1.8G 6.7G 21% /
tmpfs 15G 0 15G 0% /dev/shm
[root@ceph-node1 ceph]#