gluster_volume_status_info

a guest
Dec 12th, 2018
gluster volume status
Status of volume: shared
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster11:/gluster/bricksda1/shared      49152     0          Y       2073
Brick gluster12:/gluster/bricksda1/shared      49152     0          Y       12756
Brick gluster13:/gluster/bricksda1/shared      49152     0          Y       2176
Brick gluster11:/gluster/bricksdb1/shared      49153     0          Y       2081
Brick gluster12:/gluster/bricksdb1/shared      49153     0          Y       12764
Brick gluster13:/gluster/bricksdb1/shared      49153     0          Y       2177
Brick gluster11:/gluster/bricksdc1/shared      49154     0          Y       2091
Brick gluster12:/gluster/bricksdc1/shared      49154     0          Y       12772
Brick gluster13:/gluster/bricksdc1/shared      49154     0          Y       2178
Brick gluster11:/gluster/bricksdd1/shared      49156     0          Y       2271
Brick gluster12:/gluster/bricksdd1_new/shared  49155     0          Y       12780
Brick gluster13:/gluster/bricksdd1_new/shared  49158     0          Y       4818
Self-heal Daemon on localhost                  N/A       N/A        Y       20673
Self-heal Daemon on gluster13                  N/A       N/A        Y       4848
Self-heal Daemon on gluster12                  N/A       N/A        Y       16815

Task Status of Volume shared
------------------------------------------------------------------------------
There are no active volume tasks

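A quick way to verify from this kind of output that every brick is up is to check the Online column. A minimal sketch using awk; the sample heredoc stands in for live output (its third line is deliberately altered to show an offline brick — the paste above reports all bricks as Y), and on a real cluster you would pipe `gluster volume status` directly:

```shell
# Count bricks reported offline (Online column "N") in status output.
# The sample below is illustrative; the last line is a made-up offline brick.
status_sample='Brick gluster11:/gluster/bricksda1/shared 49152 0 Y 2073
Brick gluster12:/gluster/bricksda1/shared 49152 0 Y 12756
Brick gluster13:/gluster/bricksda1/shared N/A N/A N -'

# Online is the second-to-last field on each Brick line.
offline=$(printf '%s\n' "$status_sample" |
  awk '$1 == "Brick" && $(NF-1) == "N" { n++ } END { print n + 0 }')
echo "offline bricks: $offline"   # offline bricks: 1
```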
gluster volume info

Volume Name: shared
Type: Distributed-Replicate
Volume ID: e879d208-1d8c-4089-85f3-ef1b3aa45d36
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: gluster11:/gluster/bricksda1/shared
Brick2: gluster12:/gluster/bricksda1/shared
Brick3: gluster13:/gluster/bricksda1/shared
Brick4: gluster11:/gluster/bricksdb1/shared
Brick5: gluster12:/gluster/bricksdb1/shared
Brick6: gluster13:/gluster/bricksdb1/shared
Brick7: gluster11:/gluster/bricksdc1/shared
Brick8: gluster12:/gluster/bricksdc1/shared
Brick9: gluster13:/gluster/bricksdc1/shared
Brick10: gluster11:/gluster/bricksdd1/shared
Brick11: gluster12:/gluster/bricksdd1_new/shared
Brick12: gluster13:/gluster/bricksdd1_new/shared
Options Reconfigured:
cluster.lookup-unhashed: on
storage.build-pgfid: off
performance.nl-cache: on
cluster.heal-timeout: 600
cluster.self-heal-daemon: enable
performance.md-cache-timeout: 600
cluster.lookup-optimize: on
cluster.readdir-optimize: on
performance.cache-refresh-timeout: 4
performance.parallel-readdir: on
server.event-threads: 4
client.event-threads: 4
performance.cache-max-file-size: 128MB
performance.write-behind-window-size: 16MB
performance.io-thread-count: 8
cluster.min-free-disk: 1%
performance.cache-size: 6GB
nfs.disable: on
transport.address-family: inet
performance.high-prio-threads: 8
performance.normal-prio-threads: 8
performance.low-prio-threads: 8
performance.least-prio-threads: 8
performance.io-cache: on
server.allow-insecure: on
performance.strict-o-direct: off
transport.listen-backlog: 100
server.outstanding-rpc-limit: 128
cluster.shd-max-threads: 1
network.inode-lru-limit: 50000
cluster.background-self-heal-count: 256
cluster.data-self-heal-algorithm: full
cluster.heal-wait-queue-length: 10000
cluster.self-heal-window-size: 16
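Since this is a 4 x 3 Distributed-Replicate volume, each file lands on exactly one replica set of three bricks, and in `gluster volume info` order consecutive groups of three bricks (1-3, 4-6, 7-9, 10-12) form the four sets. A small sketch labelling the brick list above accordingly:

```shell
# Group the 12 bricks (in volume-info order) into replica-3 sets.
# Brick paths are copied verbatim from the volume info output above.
bricks='gluster11:/gluster/bricksda1/shared
gluster12:/gluster/bricksda1/shared
gluster13:/gluster/bricksda1/shared
gluster11:/gluster/bricksdb1/shared
gluster12:/gluster/bricksdb1/shared
gluster13:/gluster/bricksdb1/shared
gluster11:/gluster/bricksdc1/shared
gluster12:/gluster/bricksdc1/shared
gluster13:/gluster/bricksdc1/shared
gluster11:/gluster/bricksdd1/shared
gluster12:/gluster/bricksdd1_new/shared
gluster13:/gluster/bricksdd1_new/shared'

# Every block of three consecutive lines is one replica set.
printf '%s\n' "$bricks" |
  awk '{ printf "replica set %d: %s\n", int((NR - 1) / 3) + 1, $0 }'
```

Note how each set spans all three hosts (gluster11/12/13), so losing any single host leaves two live copies of every file.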