cephdeploy@services1:~/os-cluster$ ceph -s
    cluster 3175dc2e-bd5b-4cd7-91ce-1ab9454b4142
     health HEALTH_WARN
            140 pgs backfill
            5 pgs backfilling
            92 pgs degraded
            1710 pgs down
            1710 pgs peering
            7 pgs recovering
            77 pgs recovery_wait
            76 pgs stuck degraded
            1710 pgs stuck inactive
            1939 pgs stuck unclean
            8 pgs undersized
            315 requests are blocked > 32 sec
            recovery 3546/434187 objects degraded (0.817%)
            recovery 12815/434187 objects misplaced (2.951%)
            too many PGs per OSD (367 > max 300)
     monmap e1: 3 mons at {storage1=192.168.0.15:6789/0,storage2=192.168.0.16:6789/0,storage3=192.168.0.17:6789/0}
            election epoch 224, quorum 0,1,2 storage1,storage2,storage3
     osdmap e13209: 66 osds: 44 up, 44 in; 144 remapped pgs
            flags sortbitwise
      pgmap v6006371: 8080 pgs, 17 pools, 1282 GB data, 208 kobjects
            2578 GB used, 38167 GB / 40746 GB avail
            3546/434187 objects degraded (0.817%)
            12815/434187 objects misplaced (2.951%)
                6141 active+clean
                1710 down+peering
                 132 active+remapped+wait_backfill
                  77 active+recovery_wait+degraded
                   8 active+undersized+degraded+remapped+wait_backfill
                   7 active+recovering+degraded
                   5 active+remapped+backfilling
recovery io 39446 kB/s, 9 objects/s