root@csn1:/mnt/nfs-primary1# ceph osd tree
dumped osdmap tree epoch 161
# id    weight  type name       up/down reweight
-1      9       pool default
-3      2               rack rack1313
-2      2                       host csn1
101     1                               osd.101 up      1
3       1                               osd.3   up      1
-9      2               rack rack1314
-7      2                       host csn3
1       1                               osd.1   up      1
5       1                               osd.5   up      1
-6      4               rack rack1315
-4      2                       host csn2
201     1                               osd.201 up      1
4       1                               osd.4   up      1
-8      2                       host csn5
2       1                               osd.2   up      1
7       1                               osd.7   up      1
-10     1               rack rack1316
-5      1                       host csn4
6       1                               osd.6   up      1
0       0                               osd.0   up      1
root@csn1:/mnt/nfs-primary1# ceph -s
   health HEALTH_OK
   monmap e1: 5 mons at {1=172.21.1.1:6789/0,2=172.21.1.2:6789/0,3=172.21.1.3:6789/0,4=172.21.1.4:6789/0,5=172.21.1.5:6789/0}, election epoch 46, quorum 0,1,2,3,4 1,2,3,4,5
   osdmap e161: 10 osds: 10 up, 10 in
   pgmap v173336: 38784 pgs: 38784 active+clean; 40059 MB data, 81240 MB used, 26746 GB / 27945 GB avail
   mdsmap e33: 1/1/1 up {0=3=up:active}, 4 up:standby
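The pgmap line gives a rough view of replication overhead: 81240 MB raw used against 40059 MB of logical data is about a 2x ratio, which is plausible for this mix of 2x and 3x pools plus journal and filesystem overhead. A minimal integer-arithmetic check on those two figures:

```shell
# Raw-to-logical ratio, in whole percent, from the pgmap line above.
data_mb=40059
used_mb=81240
ratio_pct=$((used_mb * 100 / data_mb))
echo "used/data: ${ratio_pct}%"    # roughly 200%, i.e. about 2x
```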
root@csn1:/mnt/nfs-primary1# ceph osd dump | grep 'rep size'
pool 0 'data' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 12928 pgp_num 12928 last_change 160 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 crush_ruleset 1 object_hash rjenkins pg_num 12928 pgp_num 12928 last_change 1 owner 0
pool 2 'rbd' rep size 2 crush_ruleset 2 object_hash rjenkins pg_num 12928 pgp_num 12928 last_change 1 owner 0
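The pool dump is consistent with the PG total reported by "ceph -s" above: three pools, each with pg_num 12928. A one-line check:

```shell
# Three pools at 12928 pg_num each should match the 38784 PGs in "ceph -s".
total=$((3 * 12928))
echo "total PGs: $total"
```

If the replication factor needed changing, this Ceph vintage would accept something like "ceph osd pool set data size 3" (pool name and value here are illustrative, taken from the dump above).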