root@paris3:~# /etc/init.d/ceph -a start
=== mon.0 ===
Starting Ceph mon.0 on ceph3...
=== mon.1 ===
Starting Ceph mon.1 on ceph4...
=== mon.2 ===
Starting Ceph mon.2 on ceph5...
=== mds.0 ===
Starting Ceph mds.0 on ceph3...
starting mds.0 at :/0
=== mds.1 ===
Starting Ceph mds.1 on ceph4...
starting mds.1 at :/0
=== mds.2 ===
Starting Ceph mds.2 on ceph5...
starting mds.2 at :/0
=== osd.0 ===
Mounting xfs on ceph3:/srv/ceph/osd0
create-or-move updated item id 0 name 'osd.0' weight 2.73 at location {host=ceph3,root=default} to crush map
Starting Ceph osd.0 on ceph3...
starting osd.0 at :/0 osd_data /srv/ceph/osd0 /srv/ceph/journals/osd0/journal
=== osd.1 ===
Mounting xfs on ceph4:/srv/ceph/osd1
df: `/srv/ceph/osd1/.': No such file or directory
df: no file systems processed
create-or-move updating item id 1 name 'osd.1' weight 1 at location {host=ceph3,root=default} to crush map
Starting Ceph osd.1 on ceph4...
starting osd.1 at :/0 osd_data /srv/ceph/osd1 /srv/ceph/journals/osd1/journal
=== osd.2 ===
Mounting xfs on ceph5:/srv/ceph/osd2
df: `/srv/ceph/osd2/.': No such file or directory
df: no file systems processed
create-or-move updating item id 2 name 'osd.2' weight 1 at location {host=ceph3,root=default} to crush map
Starting Ceph osd.2 on ceph5...
starting osd.2 at :/0 osd_data /srv/ceph/osd2 /srv/ceph/journals/osd2/journal
=== osd.3 ===
Mounting xfs on ceph3:/srv/ceph/osd3
create-or-move updated item id 3 name 'osd.3' weight 0.07 at location {host=ceph3,root=default} to crush map
Starting Ceph osd.3 on ceph3...
starting osd.3 at :/0 osd_data /srv/ceph/osd3 /srv/ceph/osd3/journal
=== osd.4 ===
Mounting xfs on ceph4:/srv/ceph/osd4
df: `/srv/ceph/osd4/.': No such file or directory
df: no file systems processed
create-or-move updating item id 4 name 'osd.4' weight 1 at location {host=ceph3,root=default} to crush map
Starting Ceph osd.4 on ceph4...
starting osd.4 at :/0 osd_data /srv/ceph/osd4 /srv/ceph/osd4/journal
=== osd.5 ===
Mounting xfs on ceph5:/srv/ceph/osd5
df: `/srv/ceph/osd5/.': No such file or directory
df: no file systems processed
create-or-move updating item id 5 name 'osd.5' weight 1 at location {host=ceph3,root=default} to crush map
Starting Ceph osd.5 on ceph5...
starting osd.5 at :/0 osd_data /srv/ceph/osd5 /srv/ceph/osd5/journal
=== osd.6 ===
Mounting xfs on ceph3:/srv/ceph/osd6
create-or-move updated item id 6 name 'osd.6' weight 2.73 at location {host=ceph3,root=default} to crush map
Starting Ceph osd.6 on ceph3...
starting osd.6 at :/0 osd_data /srv/ceph/osd6 /srv/ceph/journals/osd6/journal
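Note that only the OSDs hosted on ceph4 and ceph5 (osd.1, osd.2, osd.4, osd.5) hit the `df: ... No such file or directory` errors, which usually means their data directories were not mounted when the init script ran. A minimal sketch, using standard text tools on lines copied verbatim from the output above, to list the affected OSDs:

```shell
# List the OSDs whose data directory df could not find.
# The four input lines are copied verbatim from the startup output above.
printf '%s\n' \
  "df: \`/srv/ceph/osd1/.': No such file or directory" \
  "df: \`/srv/ceph/osd2/.': No such file or directory" \
  "df: \`/srv/ceph/osd4/.': No such file or directory" \
  "df: \`/srv/ceph/osd5/.': No such file or directory" \
  | grep -o 'osd[0-9]*' | sort -u
# Prints osd1, osd2, osd4, osd5 (one per line)
```

On the cluster itself, checking `mount | grep /srv/ceph` on ceph4 and ceph5 before starting would confirm whether the xfs filesystems are actually mounted.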
root@paris3:~# ceph -s
health HEALTH_WARN 41 pgs peering; 727 pgs stale; 41 pgs stuck inactive; 640 pgs stuck stale; 640 pgs stuck unclean; recovery recovering 16 o/s, 67025KB/s; mds cluster is degraded
monmap e1: 3 mons at {0=10.123.123.3:6789/0,1=10.123.123.4:6789/0,2=10.123.123.5:6789/0}, election epoch 56, quorum 0,1,2 0,1,2
osdmap e389: 7 osds: 7 up, 7 in
pgmap v37440: 1280 pgs: 640 stale+active+clean, 308 active+remapped, 87 stale+active+remapped, 204 active+replay+remapped, 41 remapped+peering; 39222 MB data, 122 GB used, 11259 GB / 11382 GB avail; recovering 16 o/s, 67025KB/s
mdsmap e107: 1/1/1 up {0=1=up:replay}, 2 up:standby
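The health line packs several problems into one string: peering and stale PGs, stuck PGs, ongoing recovery, and a degraded MDS cluster (the mdsmap shows rank 0 still in up:replay). A minimal sketch that splits the summary into its individual items, assuming the health string has been saved to a shell variable:

```shell
# Health summary copied from the `ceph -s` output above.
health='HEALTH_WARN 41 pgs peering; 727 pgs stale; 41 pgs stuck inactive; 640 pgs stuck stale; 640 pgs stuck unclean; recovery recovering 16 o/s, 67025KB/s; mds cluster is degraded'

# One warning per line; keep only the "stuck" items.
echo "$health" | tr ';' '\n' | sed 's/^ *//' | grep 'stuck'
```

On a live cluster, `ceph health detail` expands the same summary, and `ceph pg dump_stuck stale` (likewise `inactive` / `unclean`) lists the stuck PGs directly.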
root@paris3:~# ceph osd tree
# id	weight	type name	up/down	reweight
-6	0.06999	root ssd
-7	0.06999	rack ssd_rack_01
3	0.06999	osd.3	up	1
-5	0	host ceph5
-4	0	host ceph4
-2	12.21	host ceph3
0	3	osd.0	up	1
6	3	osd.6	up	1
3	0.06999	osd.3	up	1
1	3	osd.1	up	1
2	3	osd.2	up	1
4	0.06999	osd.4	up	1
5	0.06999	osd.5	up	1
-1	6	root default
-3	6	rack hdd_rack_01
0	3	osd.0	up	1
6	3	osd.6	up	1
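The tree is consistent with the create-or-move lines above, which placed every OSD at {host=ceph3}: all seven OSDs sit under host ceph3 (weight 12.21), while ceph4 and ceph5 carry weight 0. As a sanity check, the ceph3 bucket weight should equal the sum of its OSD weights; a minimal sketch with the weight column copied from the tree above:

```shell
# OSD weights listed under host ceph3, copied from the tree output above:
# osd.0, osd.6, osd.3, osd.1, osd.2, osd.4, osd.5.
printf '%s\n' 3 3 0.06999 3 3 0.06999 0.06999 \
  | awk '{sum += $1} END {print sum}'
# Prints 12.21, matching the bucket weight shown for host ceph3.
```

If the intent was to spread OSDs across hosts, the CRUSH locations would need correcting (e.g. with `ceph osd crush create-or-move` per OSD, or correct `crush location` settings) so osd.1/osd.2 land under ceph4/ceph5.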