  1. kubectl logs deploy-heketi-859478d448-x2zn2
  2. Setting up heketi database
  3. No database file found
  4. stat: cannot stat '/var/lib/heketi/heketi.db': No such file or directory
  5. Heketi v7.0.0-134-g29a2af8
  6. [heketi] INFO 2018/09/03 09:03:51 Loaded kubernetes executor
  7. [heketi] INFO 2018/09/03 09:03:52 GlusterFS Application Loaded
  8. [heketi] INFO 2018/09/03 09:03:52 Started Node Health Cache Monitor
  9. Authorization loaded
  10. Listening on port 8080
  11. [admin@node1 ~]$ kubectl logs deploy-heketi-859478d448-x2zn2 -f
  12. Setting up heketi database
  13. No database file found
  14. stat: cannot stat '/var/lib/heketi/heketi.db': No such file or directory
  15. Heketi v7.0.0-134-g29a2af8
  16. [heketi] INFO 2018/09/03 09:03:51 Loaded kubernetes executor
  17. [heketi] INFO 2018/09/03 09:03:52 GlusterFS Application Loaded
  18. [heketi] INFO 2018/09/03 09:03:52 Started Node Health Cache Monitor
  19. Authorization loaded
  20. Listening on port 8080
  21. [negroni] Started GET /clusters
  22. [negroni] Completed 200 OK in 202.121µs
  23. [negroni] Started POST /clusters
  24. [negroni] Completed 201 Created in 145.962628ms
  25. [negroni] Started POST /nodes
  26. [heketi] INFO 2018/09/03 09:04:02 Starting Node Health Status refresh
  27. [heketi] INFO 2018/09/03 09:04:02 Cleaned 0 nodes from health cache
  28. [cmdexec] INFO 2018/09/03 09:04:02 Check Glusterd service status in node node3
  29. [kubeexec] DEBUG 2018/09/03 09:04:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
  30. Result: ● glusterd.service - GlusterFS, a clustered file-system server
  31.    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  32.    Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 54s ago
  33.   Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  34.  Main PID: 97 (glusterd)
  35.    CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
  36.            ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  37.            └─140 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
  38.  
  39. Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
  40. Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
  41. [heketi] INFO 2018/09/03 09:04:02 Adding node node3
  42. [negroni] Completed 202 Accepted in 214.616099ms
  43. [asynchttp] INFO 2018/09/03 09:04:02 asynchttp.go:288: Started job aba9fd463a0c410e34d737890182c3c4
  44. [negroni] Started GET /queue/aba9fd463a0c410e34d737890182c3c4
  45. [negroni] Completed 200 OK in 134.622µs
  46. [heketi] INFO 2018/09/03 09:04:02 Added node 543c97a4c6b1195bb558f1923a21a0fd
  47. [asynchttp] INFO 2018/09/03 09:04:02 asynchttp.go:292: Completed job aba9fd463a0c410e34d737890182c3c4 in 19.556498ms
  48. [negroni] Started GET /queue/aba9fd463a0c410e34d737890182c3c4
  49. [negroni] Completed 303 See Other in 103.977µs
  50. [negroni] Started GET /nodes/543c97a4c6b1195bb558f1923a21a0fd
  51. [negroni] Completed 200 OK in 727.87µs
  52. [negroni] Started POST /devices
  53. [heketi] INFO 2018/09/03 09:04:02 Adding device /dev/loop0 to node 543c97a4c6b1195bb558f1923a21a0fd
  54. [negroni] Completed 202 Accepted in 30.128347ms
  55. [asynchttp] INFO 2018/09/03 09:04:02 asynchttp.go:288: Started job e3f33e264af22214f9ae3e9adcf81810
  56. [negroni] Started GET /queue/e3f33e264af22214f9ae3e9adcf81810
  57. [negroni] Completed 200 OK in 93.804µs
  58. [kubeexec] ERROR 2018/09/03 09:04:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [pvcreate --metadatasize=128M --dataalignment=256K '/dev/loop0'] on glusterfs-gddwq: Err[command terminated with exit code 5]: Stdout []: Stderr [WARNING: dos signature detected on /dev/loop0 at offset 510. Wipe it? [y/n]: [n]
  59.   Aborted wiping of dos.
  60.   1 existing signature left on the device.
  61. ]
  62. [asynchttp] INFO 2018/09/03 09:04:03 asynchttp.go:292: Completed job e3f33e264af22214f9ae3e9adcf81810 in 920.882873ms
  63. [negroni] Started GET /queue/e3f33e264af22214f9ae3e9adcf81810
  64. [negroni] Completed 500 Internal Server Error in 110.699µs
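Note: the add-device job fails because pvcreate exits with status 5. It detects a leftover DOS (MBR) signature on /dev/loop0 and, running non-interactively, answers its own wipe prompt with [n]; heketi then reports the 500 above. A minimal fix, and the one applied later in this session, is to clear the stale signature on each node that carries the device before retrying (a sketch, assuming /dev/loop0 holds no data worth keeping):

    sudo wipefs -a /dev/loop0      # erase the old dos signature (the 55 aa bytes at offset 0x1fe)
    sudo wipefs -a -f /dev/loop0   # -f forces erasure even if the device appears to be in use

The same pvcreate error repeats below for node4 (glusterfs-7xwgg) and node5 (glusterfs-l9ts8), since all three nodes use an identically prepared /dev/loop0.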
  65. [negroni] Started POST /nodes
  66. [cmdexec] INFO 2018/09/03 09:04:05 Check Glusterd service status in node node3
  67. [kubeexec] DEBUG 2018/09/03 09:04:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
  68. Result: ● glusterd.service - GlusterFS, a clustered file-system server
  69.    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  70.    Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 57s ago
  71.   Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  72.  Main PID: 97 (glusterd)
  73.    CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
  74.            ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  75.            └─140 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
  76.  
  77. Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
  78. Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
  79. [heketi] INFO 2018/09/03 09:04:06 Adding node node4
  80. [negroni] Completed 202 Accepted in 2.302880639s
  81. [asynchttp] INFO 2018/09/03 09:04:06 asynchttp.go:288: Started job d7f05424b021b4a5530f49ffa5e85168
  82. [cmdexec] INFO 2018/09/03 09:04:06 Probing: node3 -> 10.100.1.72
  83. [negroni] Started GET /queue/d7f05424b021b4a5530f49ffa5e85168
  84. [negroni] Completed 200 OK in 132.949µs
  85. [kubeexec] DEBUG 2018/09/03 09:04:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: gluster peer probe 10.100.1.72
  86. Result: peer probe: success. Host 10.100.1.72 port 24007 already in peer list
  87. [cmdexec] INFO 2018/09/03 09:04:06 Setting snapshot limit
  88. [negroni] Started GET /queue/d7f05424b021b4a5530f49ffa5e85168
  89. [negroni] Completed 200 OK in 95.014µs
  90. [negroni] Started GET /queue/d7f05424b021b4a5530f49ffa5e85168
  91. [negroni] Completed 200 OK in 94.128µs
  92. [kubeexec] DEBUG 2018/09/03 09:04:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: gluster --mode=script snapshot config snap-max-hard-limit 14
  93. Result: snapshot config: snap-max-hard-limit for System set successfully
  94. [heketi] INFO 2018/09/03 09:04:06 Added node 780e6b35c81597c1ca04079742e377a1
  95. [asynchttp] INFO 2018/09/03 09:04:06 asynchttp.go:292: Completed job d7f05424b021b4a5530f49ffa5e85168 in 618.785605ms
  96. [negroni] Started GET /queue/d7f05424b021b4a5530f49ffa5e85168
  97. [negroni] Completed 303 See Other in 135.292µs
  98. [negroni] Started GET /nodes/780e6b35c81597c1ca04079742e377a1
  99. [negroni] Completed 200 OK in 230.882µs
  100. [negroni] Started POST /devices
  101. [heketi] INFO 2018/09/03 09:04:06 Adding device /dev/loop0 to node 780e6b35c81597c1ca04079742e377a1
  102. [negroni] Completed 202 Accepted in 60.42611ms
  103. [asynchttp] INFO 2018/09/03 09:04:06 asynchttp.go:288: Started job 41d2b8bb7b2add4aef526dd9719caf93
  104. [negroni] Started GET /queue/41d2b8bb7b2add4aef526dd9719caf93
  105. [negroni] Completed 200 OK in 112.168µs
  106. [kubeexec] ERROR 2018/09/03 09:04:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [pvcreate --metadatasize=128M --dataalignment=256K '/dev/loop0'] on glusterfs-7xwgg: Err[command terminated with exit code 5]: Stdout []: Stderr [WARNING: dos signature detected on /dev/loop0 at offset 510. Wipe it? [y/n]: [n]
  107.   Aborted wiping of dos.
  108.   1 existing signature left on the device.
  109. ]
  110. [asynchttp] INFO 2018/09/03 09:04:07 asynchttp.go:292: Completed job 41d2b8bb7b2add4aef526dd9719caf93 in 216.302454ms
  111. [negroni] Started GET /queue/41d2b8bb7b2add4aef526dd9719caf93
  112. [negroni] Completed 500 Internal Server Error in 167.16µs
  113. [negroni] Started POST /nodes
  114. [cmdexec] INFO 2018/09/03 09:04:08 Check Glusterd service status in node node3
  115. [kubeexec] DEBUG 2018/09/03 09:04:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
  116. Result: ● glusterd.service - GlusterFS, a clustered file-system server
  117.    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  118.    Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 59s ago
  119.   Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  120.  Main PID: 97 (glusterd)
  121.    CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
  122.            ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  123.            └─140 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
  124.  
  125. Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
  126. Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
  127. [heketi] INFO 2018/09/03 09:04:08 Adding node node5
  128. [negroni] Completed 202 Accepted in 256.002855ms
  129. [asynchttp] INFO 2018/09/03 09:04:08 asynchttp.go:288: Started job 71c6cd689f7aaba8eea05173b8b94ca9
  130. [cmdexec] INFO 2018/09/03 09:04:08 Probing: node3 -> 10.100.1.73
  131. [negroni] Started GET /queue/71c6cd689f7aaba8eea05173b8b94ca9
  132. [negroni] Completed 200 OK in 85.189µs
  133. [kubeexec] DEBUG 2018/09/03 09:04:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: gluster peer probe 10.100.1.73
  134. Result: peer probe: success. Host 10.100.1.73 port 24007 already in peer list
  135. [cmdexec] INFO 2018/09/03 09:04:08 Setting snapshot limit
  136. [negroni] Started GET /queue/71c6cd689f7aaba8eea05173b8b94ca9
  137. [negroni] Completed 200 OK in 155.395µs
  138. [negroni] Started GET /queue/71c6cd689f7aaba8eea05173b8b94ca9
  139. [negroni] Completed 200 OK in 150.742µs
  140. [kubeexec] DEBUG 2018/09/03 09:04:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: gluster --mode=script snapshot config snap-max-hard-limit 14
  141. Result: snapshot config: snap-max-hard-limit for System set successfully
  142. [heketi] INFO 2018/09/03 09:04:08 Added node b7b237ed28e9ad394aed97804ec6ff31
  143. [asynchttp] INFO 2018/09/03 09:04:08 asynchttp.go:292: Completed job 71c6cd689f7aaba8eea05173b8b94ca9 in 732.533265ms
  144. [negroni] Started GET /queue/71c6cd689f7aaba8eea05173b8b94ca9
  145. [negroni] Completed 303 See Other in 209.495µs
  146. [negroni] Started GET /nodes/b7b237ed28e9ad394aed97804ec6ff31
  147. [negroni] Completed 200 OK in 203.696µs
  148. [negroni] Started POST /devices
  149. [heketi] INFO 2018/09/03 09:04:08 Adding device /dev/loop0 to node b7b237ed28e9ad394aed97804ec6ff31
  150. [negroni] Completed 202 Accepted in 19.431872ms
  151. [asynchttp] INFO 2018/09/03 09:04:08 asynchttp.go:288: Started job 8391802b2e3da5e6c21e813d86f783ad
  152. [negroni] Started GET /queue/8391802b2e3da5e6c21e813d86f783ad
  153. [negroni] Completed 200 OK in 105.252µs
  154. [kubeexec] ERROR 2018/09/03 09:04:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [pvcreate --metadatasize=128M --dataalignment=256K '/dev/loop0'] on glusterfs-l9ts8: Err[command terminated with exit code 5]: Stdout []: Stderr [WARNING: dos signature detected on /dev/loop0 at offset 510. Wipe it? [y/n]: [n]
  155.   Aborted wiping of dos.
  156.   1 existing signature left on the device.
  157. ]
  158. [asynchttp] INFO 2018/09/03 09:04:09 asynchttp.go:292: Completed job 8391802b2e3da5e6c21e813d86f783ad in 266.433068ms
  159. [negroni] Started GET /queue/8391802b2e3da5e6c21e813d86f783ad
  160. [negroni] Completed 500 Internal Server Error in 114.399µs
  161. [negroni] Started GET /clusters/1969c8b9d869d79463b62fdf17b83f91
  162. [negroni] Completed 200 OK in 212.1µs
  163. ^C[admin@node1 ~]$ ssh node3
  164. Last login: Mon Sep  3 08:55:48 2018 from node1
  165. [admin@node3 ~]$ sudo wipefs -a /dev/loop0
  166. /dev/loop0: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
  167. /dev/loop0: calling ioclt to re-read partition table: Invalid argument
  168. [admin@node3 ~]$ sudo wipefs -a /dev/loop0 -f
  169. [admin@node3 ~]$ exit
  170. logout
  171. Connection to node3 closed.
  172. [admin@node1 ~]$ ssh node4
  173. Last login: Mon Sep  3 09:01:35 2018 from node1
  174. [admin@node4 ~]$ sudo wipefs -a /dev/loop0 -f
  175. /dev/loop0: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
  176. /dev/loop0: calling ioclt to re-read partition table: Invalid argument
  177. [admin@node4 ~]$ exit
  178. logout
  179. Connection to node4 closed.
  180. [admin@node1 ~]$ ssh node5
  181. Last login: Mon Sep  3 08:59:47 2018 from node1
  182. [admin@node5 ~]$ sudo wipefs -a /dev/loop0
  183. /dev/loop0: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
  184. /dev/loop0: calling ioclt to re-read partition table: Invalid argument
  185. [admin@node5 ~]$ sudo wipefs -a /dev/loop0 -f
  186. [admin@node5 ~]$ exit
  187. logout
  188. Connection to node5 closed.
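The node-by-node cleanup above can also be done in one pass; a small sketch using the same hosts and device, assuming ssh and sudo access as in this session:

    for h in node3 node4 node5; do
        ssh "$h" sudo wipefs -a -f /dev/loop0   # clear any leftover partition/filesystem signatures
    done

The "calling ioclt to re-read partition table: Invalid argument" message is wipefs's own (misspelled) warning; it appears to be harmless here, since a plain loop device does not accept the partition re-read ioctl and the signature bytes were already erased as reported.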
  189. [admin@node1 ~]$ kubectl logs deploy-heketi-859478d448-kpl9z -p
  190. Error from server (BadRequest): previous terminated container "deploy-heketi" in pod "deploy-heketi-859478d448-kpl9z" not found
  191. [admin@node1 ~]$ kubectl logs deploy-heketi-859478d448-kpl9z -p
  192. Error from server (BadRequest): previous terminated container "deploy-heketi" in pod "deploy-heketi-859478d448-kpl9z" not found
  193. [admin@node1 ~]$ kubectl logs deploy-heketi-859478d448-kpl9z -f
  194. Setting up heketi database
  195. No database file found
  196. stat: cannot stat '/var/lib/heketi/heketi.db': No such file or directory
  197. Heketi v6.0.0-196-gf31ad28
  198. [heketi] INFO 2018/09/03 09:06:08 Loaded kubernetes executor
  199. [heketi] INFO 2018/09/03 09:06:08 GlusterFS Application Loaded
  200. [heketi] INFO 2018/09/03 09:06:08 Started Node Health Cache Monitor
  201. Authorization loaded
  202. Listening on port 8080
  203. [heketi] INFO 2018/09/03 09:06:18 Starting Node Health Status refresh
  204. [heketi] INFO 2018/09/03 09:06:18 Cleaned 0 nodes from health cache
  205. [negroni] Started GET /clusters
  206. [negroni] Completed 200 OK in 250.115µs
  207. [negroni] Started POST /clusters
  208. [negroni] Completed 201 Created in 27.062018ms
  209. [negroni] Started POST /nodes
  210. [cmdexec] INFO 2018/09/03 09:06:19 Check Glusterd service status in node node3
  211. [kubeexec] DEBUG 2018/09/03 09:06:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
  212. Result: ● glusterd.service - GlusterFS, a clustered file-system server
  213.    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  214.    Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 3min 11s ago
  215.   Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  216.  Main PID: 97 (glusterd)
  217.    CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
  218.            ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  219.            └─140 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
  220.  
  221. Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
  222. Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
  223. [heketi] INFO 2018/09/03 09:06:19 Adding node node3
  224. [negroni] Completed 202 Accepted in 140.877493ms
  225. [asynchttp] INFO 2018/09/03 09:06:19 asynchttp.go:288: Started job 04a66632b549c07a0b2780c9fb8b6ede
  226. [negroni] Started GET /queue/04a66632b549c07a0b2780c9fb8b6ede
  227. [negroni] Completed 200 OK in 65.785µs
  228. [heketi] INFO 2018/09/03 09:06:19 Added node 1bf89fc30122cefb19a9e33fa5784a13
  229. [asynchttp] INFO 2018/09/03 09:06:19 asynchttp.go:292: Completed job 04a66632b549c07a0b2780c9fb8b6ede in 24.944584ms
  230. [negroni] Started GET /queue/04a66632b549c07a0b2780c9fb8b6ede
  231. [negroni] Completed 303 See Other in 126.333µs
  232. [negroni] Started GET /nodes/1bf89fc30122cefb19a9e33fa5784a13
  233. [negroni] Completed 200 OK in 368.155µs
  234. [negroni] Started POST /devices
  235. [heketi] INFO 2018/09/03 09:06:20 Adding device /dev/loop0 to node 1bf89fc30122cefb19a9e33fa5784a13
  236. [negroni] Completed 202 Accepted in 22.724293ms
  237. [asynchttp] INFO 2018/09/03 09:06:20 asynchttp.go:288: Started job 675ad388472eeee85bfca1b5a2ceb18c
  238. [negroni] Started GET /queue/675ad388472eeee85bfca1b5a2ceb18c
  239. [negroni] Completed 200 OK in 92.646µs
  240. [kubeexec] DEBUG 2018/09/03 09:06:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/loop0'
  241. Result:   Physical volume "/dev/loop0" successfully created.
  242. [kubeexec] DEBUG 2018/09/03 09:06:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: vgcreate vg_431729f9e9eaac554584ab8784472fb9 /dev/loop0
  243. Result:   Volume group "vg_431729f9e9eaac554584ab8784472fb9" successfully created
  244. [kubeexec] DEBUG 2018/09/03 09:06:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: vgdisplay -c vg_431729f9e9eaac554584ab8784472fb9
  245. Result:   vg_431729f9e9eaac554584ab8784472fb9:r/w:772:-1:0:0:0:-1:0:1:1:51679232:4096:12617:0:12617:FX3Rrq-nzgq-Fdq2-n6dO-BRje-hjxu-GYOEjv
  246. [cmdexec] DEBUG 2018/09/03 09:06:20 /src/github.com/heketi/heketi/executors/cmdexec/device.go:143: Size of /dev/loop0 in node3 is 51679232
  247. [negroni] Started GET /queue/675ad388472eeee85bfca1b5a2ceb18c
  248. [negroni] Completed 200 OK in 95.567µs
  249. [negroni] Started GET /queue/675ad388472eeee85bfca1b5a2ceb18c
  250. [negroni] Completed 200 OK in 93.496µs
  251. [heketi] INFO 2018/09/03 09:06:22 Added device /dev/loop0
  252. [asynchttp] INFO 2018/09/03 09:06:22 asynchttp.go:292: Completed job 675ad388472eeee85bfca1b5a2ceb18c in 2.556913942s
  253. [negroni] Started GET /queue/675ad388472eeee85bfca1b5a2ceb18c
  254. [negroni] Completed 204 No Content in 88.95µs
  255. [negroni] Started POST /nodes
  256. [cmdexec] INFO 2018/09/03 09:06:23 Check Glusterd service status in node node3
  257. [kubeexec] DEBUG 2018/09/03 09:06:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
  258. Result: ● glusterd.service - GlusterFS, a clustered file-system server
  259.    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  260.    Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 3min 14s ago
  261.   Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  262.  Main PID: 97 (glusterd)
  263.    CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
  264.            ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  265.            └─140 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
  266.  
  267. Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
  268. Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
  269. [heketi] INFO 2018/09/03 09:06:23 Adding node node4
  270. [negroni] Completed 202 Accepted in 159.338556ms
  271. [asynchttp] INFO 2018/09/03 09:06:23 asynchttp.go:288: Started job 84c24f7109a3e7153f7e5a00aba6770f
  272. [cmdexec] INFO 2018/09/03 09:06:23 Probing: node3 -> 10.100.1.72
  273. [negroni] Started GET /queue/84c24f7109a3e7153f7e5a00aba6770f
  274. [negroni] Completed 200 OK in 73.185µs
  275. [kubeexec] DEBUG 2018/09/03 09:06:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: gluster peer probe 10.100.1.72
  276. Result: peer probe: success. Host 10.100.1.72 port 24007 already in peer list
  277. [cmdexec] INFO 2018/09/03 09:06:23 Setting snapshot limit
  278. [negroni] Started GET /queue/84c24f7109a3e7153f7e5a00aba6770f
  279. [negroni] Completed 200 OK in 126.687µs
  280. [kubeexec] DEBUG 2018/09/03 09:06:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: gluster --mode=script snapshot config snap-max-hard-limit 14
  281. Result: snapshot config: snap-max-hard-limit for System set successfully
  282. [heketi] INFO 2018/09/03 09:06:23 Added node fce79176083462c39bafb000a899fff3
  283. [asynchttp] INFO 2018/09/03 09:06:23 asynchttp.go:292: Completed job 84c24f7109a3e7153f7e5a00aba6770f in 479.338771ms
  284. [negroni] Started GET /queue/84c24f7109a3e7153f7e5a00aba6770f
  285. [negroni] Completed 303 See Other in 103.368µs
  286. [negroni] Started GET /nodes/fce79176083462c39bafb000a899fff3
  287. [negroni] Completed 200 OK in 253.033µs
  288. [negroni] Started POST /devices
  289. [heketi] INFO 2018/09/03 09:06:23 Adding device /dev/loop0 to node fce79176083462c39bafb000a899fff3
  290. [negroni] Completed 202 Accepted in 85.557342ms
  291. [asynchttp] INFO 2018/09/03 09:06:23 asynchttp.go:288: Started job 96b8fabbc696f029fbc18587ae4143b8
  292. [negroni] Started GET /queue/96b8fabbc696f029fbc18587ae4143b8
  293. [negroni] Completed 200 OK in 76.257µs
  294. [kubeexec] DEBUG 2018/09/03 09:06:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/loop0'
  295. Result:   Physical volume "/dev/loop0" successfully created.
  296. [kubeexec] DEBUG 2018/09/03 09:06:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: vgcreate vg_2453bf761ebcb88158a59d64f443d352 /dev/loop0
  297. Result:   Volume group "vg_2453bf761ebcb88158a59d64f443d352" successfully created
  298. [kubeexec] DEBUG 2018/09/03 09:06:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: vgdisplay -c vg_2453bf761ebcb88158a59d64f443d352
  299. Result:   vg_2453bf761ebcb88158a59d64f443d352:r/w:772:-1:0:0:0:-1:0:1:1:26079232:4096:6367:0:6367:iDlxks-fFAW-YCSP-tiyK-ov04-Hk0U-WW3zEk
  300. [cmdexec] DEBUG 2018/09/03 09:06:24 /src/github.com/heketi/heketi/executors/cmdexec/device.go:143: Size of /dev/loop0 in node4 is 26079232
  301. [heketi] INFO 2018/09/03 09:06:24 Added device /dev/loop0
  302. [asynchttp] INFO 2018/09/03 09:06:24 asynchttp.go:292: Completed job 96b8fabbc696f029fbc18587ae4143b8 in 671.994654ms
  303. [negroni] Started GET /queue/96b8fabbc696f029fbc18587ae4143b8
  304. [negroni] Completed 204 No Content in 87.54µs
  305. [negroni] Started POST /nodes
  306. [cmdexec] INFO 2018/09/03 09:06:25 Check Glusterd service status in node node3
  307. [kubeexec] DEBUG 2018/09/03 09:06:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
  308. Result: ● glusterd.service - GlusterFS, a clustered file-system server
  309.    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  310.    Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 3min 16s ago
  311.   Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  312.  Main PID: 97 (glusterd)
  313.    CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
  314.            ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  315.            └─140 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
  316.  
  317. Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
  318. Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
  319. [heketi] INFO 2018/09/03 09:06:25 Adding node node5
  320. [negroni] Completed 202 Accepted in 300.057044ms
  321. [asynchttp] INFO 2018/09/03 09:06:25 asynchttp.go:288: Started job 545caff1ebbb1718c9ddc948a39c1d1e
  322. [cmdexec] INFO 2018/09/03 09:06:25 Probing: node3 -> 10.100.1.73
  323. [negroni] Started GET /queue/545caff1ebbb1718c9ddc948a39c1d1e
  324. [negroni] Completed 200 OK in 63µs
  325. [kubeexec] DEBUG 2018/09/03 09:06:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: gluster peer probe 10.100.1.73
  326. Result: peer probe: success. Host 10.100.1.73 port 24007 already in peer list
  327. [cmdexec] INFO 2018/09/03 09:06:25 Setting snapshot limit
  328. [negroni] Started GET /queue/545caff1ebbb1718c9ddc948a39c1d1e
  329. [negroni] Completed 200 OK in 102.452µs
  330. [kubeexec] DEBUG 2018/09/03 09:06:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: gluster --mode=script snapshot config snap-max-hard-limit 14
  331. Result: snapshot config: snap-max-hard-limit for System set successfully
  332. [heketi] INFO 2018/09/03 09:06:25 Added node de310d550bef8cd7c2414e6240da36e7
  333. [asynchttp] INFO 2018/09/03 09:06:25 asynchttp.go:292: Completed job 545caff1ebbb1718c9ddc948a39c1d1e in 485.669526ms
  334. [negroni] Started GET /queue/545caff1ebbb1718c9ddc948a39c1d1e
  335. [negroni] Completed 303 See Other in 117.883µs
  336. [negroni] Started GET /nodes/de310d550bef8cd7c2414e6240da36e7
  337. [negroni] Completed 200 OK in 160.919µs
  338. [negroni] Started POST /devices
  339. [heketi] INFO 2018/09/03 09:06:25 Adding device /dev/loop0 to node de310d550bef8cd7c2414e6240da36e7
  340. [negroni] Completed 202 Accepted in 32.306024ms
  341. [asynchttp] INFO 2018/09/03 09:06:25 asynchttp.go:288: Started job d7a50cf53a4cea6d939fb08223cfca4b
  342. [negroni] Started GET /queue/d7a50cf53a4cea6d939fb08223cfca4b
  343. [negroni] Completed 200 OK in 64.576µs
  344. [kubeexec] DEBUG 2018/09/03 09:06:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/loop0'
  345. Result:   Physical volume "/dev/loop0" successfully created.
  346. [kubeexec] DEBUG 2018/09/03 09:06:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: vgcreate vg_ccfc2ad4653c9d38c8e619232307149a /dev/loop0
  347. Result:   Volume group "vg_ccfc2ad4653c9d38c8e619232307149a" successfully created
  348. [kubeexec] DEBUG 2018/09/03 09:06:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: vgdisplay -c vg_ccfc2ad4653c9d38c8e619232307149a
  349. Result:   vg_ccfc2ad4653c9d38c8e619232307149a:r/w:772:-1:0:0:0:-1:0:1:1:51679232:4096:12617:0:12617:hLkAzQ-wl2X-B2x7-ii9U-wDqd-ca8B-RoOENc
  350. [cmdexec] DEBUG 2018/09/03 09:06:26 /src/github.com/heketi/heketi/executors/cmdexec/device.go:143: Size of /dev/loop0 in node5 is 51679232
  351. [heketi] INFO 2018/09/03 09:06:26 Added device /dev/loop0
  352. [asynchttp] INFO 2018/09/03 09:06:26 asynchttp.go:292: Completed job d7a50cf53a4cea6d939fb08223cfca4b in 588.440999ms
  353. [negroni] Started GET /queue/d7a50cf53a4cea6d939fb08223cfca4b
  354. [negroni] Completed 204 No Content in 121.522µs
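At this point all three devices are registered. The sizes heketi logs come from the VG-size field of vgdisplay -c and are in KiB, so (rough decode, extents x extent size):

    node3, node5: 51679232 KiB = 12617 extents x 4096 KiB, about 49.3 GiB
    node4:        26079232 KiB =  6367 extents x 4096 KiB, about 24.9 GiB

node4's /dev/loop0 is roughly half the size of the other two.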
  355. [negroni] Started GET /clusters/22bdff84184abc4512c81188e26f973b
  356. [negroni] Completed 200 OK in 247.157µs
  357. [negroni] Started GET /clusters
  358. [negroni] Completed 200 OK in 121.319µs
  359. [negroni] Started GET /clusters/22bdff84184abc4512c81188e26f973b
  360. [negroni] Completed 200 OK in 152.58µs
  361. [negroni] Started POST /volumes
  362. [heketi] INFO 2018/09/03 09:06:27 Allocating brick set #0
  363. [negroni] Completed 202 Accepted in 53.137917ms
  364. [asynchttp] INFO 2018/09/03 09:06:27 asynchttp.go:288: Started job 1d0873543c5a840751fe16c2c8ee9ee6
  365. [heketi] INFO 2018/09/03 09:06:27 Started async operation: Create Volume
  366. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  367. [negroni] Completed 200 OK in 69.357µs
  368. [heketi] INFO 2018/09/03 09:06:27 Creating brick f98c2763743d3c9647a112338fa2abb7
  369. [heketi] INFO 2018/09/03 09:06:27 Creating brick ff00dfa4c19eafa51a18c034c3adc381
  370. [heketi] INFO 2018/09/03 09:06:27 Creating brick 0ad04a0d0f8c7015c818f1a10c7a1454
  371. [kubeexec] DEBUG 2018/09/03 09:06:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mkdir -p /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381
  372. Result:
  373. [kubeexec] DEBUG 2018/09/03 09:06:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mkdir -p /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7
  374. Result:
  375. [kubeexec] DEBUG 2018/09/03 09:06:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mkdir -p /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454
  376. Result:
  377. [kubeexec] DEBUG 2018/09/03 09:06:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: lvcreate --poolmetadatasize 12288K -c 256K -L 2097152K -T vg_ccfc2ad4653c9d38c8e619232307149a/tp_ff00dfa4c19eafa51a18c034c3adc381 -V 2097152K -n brick_ff00dfa4c19eafa51a18c034c3adc381
  378. Result:   Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  379.   Logical volume "brick_ff00dfa4c19eafa51a18c034c3adc381" created.
  380. [kubeexec] DEBUG 2018/09/03 09:06:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: lvcreate --poolmetadatasize 12288K -c 256K -L 2097152K -T vg_2453bf761ebcb88158a59d64f443d352/tp_0ad04a0d0f8c7015c818f1a10c7a1454 -V 2097152K -n brick_0ad04a0d0f8c7015c818f1a10c7a1454
  381. Result:   Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  382.   Logical volume "brick_0ad04a0d0f8c7015c818f1a10c7a1454" created.
  383. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  384. [negroni] Completed 200 OK in 104.249µs
  385. [kubeexec] DEBUG 2018/09/03 09:06:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: lvcreate --poolmetadatasize 12288K -c 256K -L 2097152K -T vg_431729f9e9eaac554584ab8784472fb9/tp_f98c2763743d3c9647a112338fa2abb7 -V 2097152K -n brick_f98c2763743d3c9647a112338fa2abb7
  386. Result:   Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  387.   Logical volume "brick_f98c2763743d3c9647a112338fa2abb7" created.
  388. [kubeexec] DEBUG 2018/09/03 09:06:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_ff00dfa4c19eafa51a18c034c3adc381
  389. Result: meta-data=/dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_ff00dfa4c19eafa51a18c034c3adc381 isize=512    agcount=8, agsize=65520 blks
  390.          =                       sectsz=512   attr=2, projid32bit=1
  391.          =                       crc=1        finobt=0, sparse=0
  392. data     =                       bsize=4096   blocks=524160, imaxpct=25
  393.          =                       sunit=16     swidth=64 blks
  394. naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
  395. log      =internal log           bsize=4096   blocks=2560, version=2
  396.          =                       sectsz=512   sunit=16 blks, lazy-count=1
  397. realtime =none                   extsz=4096   blocks=0, rtextents=0
  398. [kubeexec] DEBUG 2018/09/03 09:06:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_0ad04a0d0f8c7015c818f1a10c7a1454
  399. Result: meta-data=/dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_0ad04a0d0f8c7015c818f1a10c7a1454 isize=512    agcount=8, agsize=65520 blks
  400.          =                       sectsz=512   attr=2, projid32bit=1
  401.          =                       crc=1        finobt=0, sparse=0
  402. data     =                       bsize=4096   blocks=524160, imaxpct=25
  403.          =                       sunit=16     swidth=64 blks
  404. naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
  405. log      =internal log           bsize=4096   blocks=2560, version=2
  406.          =                       sectsz=512   sunit=16 blks, lazy-count=1
  407. realtime =none                   extsz=4096   blocks=0, rtextents=0
  408. [kubeexec] DEBUG 2018/09/03 09:06:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: awk "BEGIN {print \"/dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_ff00dfa4c19eafa51a18c034c3adc381 /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
  409. Result:
  410. [kubeexec] DEBUG 2018/09/03 09:06:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_f98c2763743d3c9647a112338fa2abb7
  411. Result: meta-data=/dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_f98c2763743d3c9647a112338fa2abb7 isize=512    agcount=8, agsize=65520 blks
  412.          =                       sectsz=512   attr=2, projid32bit=1
  413.          =                       crc=1        finobt=0, sparse=0
  414. data     =                       bsize=4096   blocks=524160, imaxpct=25
  415.          =                       sunit=16     swidth=64 blks
  416. naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
  417. log      =internal log           bsize=4096   blocks=2560, version=2
  418.          =                       sectsz=512   sunit=16 blks, lazy-count=1
  419. realtime =none                   extsz=4096   blocks=0, rtextents=0
  420. [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_ff00dfa4c19eafa51a18c034c3adc381 /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381
  421. Result:
  422. [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: awk "BEGIN {print \"/dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_f98c2763743d3c9647a112338fa2abb7 /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
  423. Result:
  424. [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: awk "BEGIN {print \"/dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_0ad04a0d0f8c7015c818f1a10c7a1454 /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
  425. Result:
  426. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  427. [negroni] Completed 200 OK in 99.402µs
  428. [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mkdir /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381/brick
  429. Result:
  430. [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_f98c2763743d3c9647a112338fa2abb7 /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7
  431. Result:
  432. [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_0ad04a0d0f8c7015c818f1a10c7a1454 /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454
  433. Result:
  434. [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mkdir /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7/brick
  435. Result:
  436. [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mkdir /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454/brick
  437. Result:
  438. [cmdexec] INFO 2018/09/03 09:06:29 Creating volume heketidbstorage replica 3
  439. [kubeexec] ERROR 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:240: Failed to run command [gluster --mode=script volume create heketidbstorage replica 3 10.100.1.73:/var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381/brick 10.100.1.71:/var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7/brick 10.100.1.72:/var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454/brick] on glusterfs-l9ts8: Err[command terminated with exit code 1]: Stdout []: Stderr [volume create: heketidbstorage: failed: Volume heketidbstorage already exists
  440. ]
  441. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  442. [negroni] Completed 200 OK in 130.128µs
  443. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  444. [negroni] Completed 200 OK in 106.27µs
  445. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  446. [negroni] Completed 200 OK in 153.796µs
  447. [kubeexec] DEBUG 2018/09/03 09:06:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: gluster --mode=script volume stop heketidbstorage force
  448. Result: volume stop: heketidbstorage: success
  449. [heketi] ERROR 2018/09/03 09:06:32 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:177: Error executing create volume: Unable to execute command on glusterfs-l9ts8: volume create: heketidbstorage: failed: Volume heketidbstorage already exists
  450. [kubeexec] DEBUG 2018/09/03 09:06:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: gluster --mode=script volume delete heketidbstorage
  451. Result: volume delete: heketidbstorage: success
  452. [heketi] WARNING 2018/09/03 09:06:32 Create Volume Exec requested retry
  453. [heketi] INFO 2018/09/03 09:06:32 Retry Create Volume (1)
  454. [kubeexec] ERROR 2018/09/03 09:06:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:240: Failed to run command [gluster --mode=script volume stop heketidbstorage force] on glusterfs-gddwq: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: heketidbstorage: failed: Volume heketidbstorage does not exist
  455. ]
  456. [cmdexec] ERROR 2018/09/03 09:06:32 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:143: Unable to stop volume heketidbstorage: Unable to execute command on glusterfs-gddwq: volume stop: heketidbstorage: failed: Volume heketidbstorage does not exist
  457. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  458. [negroni] Completed 200 OK in 110.357µs
  459. [kubeexec] ERROR 2018/09/03 09:06:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:240: Failed to run command [gluster --mode=script volume delete heketidbstorage] on glusterfs-gddwq: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: heketidbstorage: failed: Volume heketidbstorage does not exist
  460. ]
  461. [cmdexec] ERROR 2018/09/03 09:06:33 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:152: Unable to delete volume heketidbstorage: Unable to execute command on glusterfs-gddwq: volume delete: heketidbstorage: failed: Volume heketidbstorage does not exist
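The "Volume heketidbstorage already exists" failure comes from gluster-side state apparently left over from an earlier deployment attempt (note the "already in peer list" probe results above), not from anything created in this run. Heketi recovers on its own: it stops and deletes the old volume via glusterfs-l9ts8, the follow-up stop/delete attempted via glusterfs-gddwq then fails with "does not exist" because the volume is already gone cluster-wide, and it tears down the bricks it just made and retries the create, which succeeds below. If it had not recovered, the stale volume could be inspected and removed by hand from one of the gluster pods, for example:

    kubectl exec -ti glusterfs-gddwq -- gluster volume info heketidbstorage
    kubectl exec -ti glusterfs-gddwq -- gluster --mode=script volume stop heketidbstorage force
    kubectl exec -ti glusterfs-gddwq -- gluster --mode=script volume delete heketidbstorage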
  462. [heketi] INFO 2018/09/03 09:06:33 Deleting brick ff00dfa4c19eafa51a18c034c3adc381
  463. [heketi] INFO 2018/09/03 09:06:33 Deleting brick f98c2763743d3c9647a112338fa2abb7
  464. [heketi] INFO 2018/09/03 09:06:33 Deleting brick 0ad04a0d0f8c7015c818f1a10c7a1454
  465. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  466. [negroni] Completed 200 OK in 106.694µs
  467. [kubeexec] DEBUG 2018/09/03 09:06:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: umount /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454
  468. Result:
  469. [kubeexec] DEBUG 2018/09/03 09:06:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: umount /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7
  470. Result:
  471. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  472. [negroni] Completed 200 OK in 89.783µs
  473. [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: lvremove -f vg_2453bf761ebcb88158a59d64f443d352/tp_0ad04a0d0f8c7015c818f1a10c7a1454
  474. Result:   Logical volume "brick_0ad04a0d0f8c7015c818f1a10c7a1454" successfully removed
  475.   Logical volume "tp_0ad04a0d0f8c7015c818f1a10c7a1454" successfully removed
  476. [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: umount /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381
  477. Result:
  478. [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: lvremove -f vg_431729f9e9eaac554584ab8784472fb9/tp_f98c2763743d3c9647a112338fa2abb7
  479. Result:   Logical volume "brick_f98c2763743d3c9647a112338fa2abb7" successfully removed
  480.   Logical volume "tp_f98c2763743d3c9647a112338fa2abb7" successfully removed
  481. [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: rmdir /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454
  482. Result:
  483. [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: rmdir /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7
  484. Result:
  485. [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: sed -i.save "/brick_0ad04a0d0f8c7015c818f1a10c7a1454/d" /var/lib/heketi/fstab
  486. Result:
  487. [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: lvremove -f vg_ccfc2ad4653c9d38c8e619232307149a/tp_ff00dfa4c19eafa51a18c034c3adc381
  488. Result:   Logical volume "brick_ff00dfa4c19eafa51a18c034c3adc381" successfully removed
  489.   Logical volume "tp_ff00dfa4c19eafa51a18c034c3adc381" successfully removed
  490. [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: sed -i.save "/brick_f98c2763743d3c9647a112338fa2abb7/d" /var/lib/heketi/fstab
  491. Result:
  492. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  493. [negroni] Completed 200 OK in 90.92µs
  494. [kubeexec] DEBUG 2018/09/03 09:06:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: rmdir /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381
  495. Result:
  496. [kubeexec] DEBUG 2018/09/03 09:06:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: sed -i.save "/brick_ff00dfa4c19eafa51a18c034c3adc381/d" /var/lib/heketi/fstab
  497. Result:
  498. [heketi] INFO 2018/09/03 09:06:36 Allocating brick set #0
  499. [heketi] INFO 2018/09/03 09:06:36 Creating brick f6904696fbd30686107ebc84c846d2ff
  500. [heketi] INFO 2018/09/03 09:06:36 Creating brick 7cf5ffef6984e236ab8a43e2fa4836dd
  501. [heketi] INFO 2018/09/03 09:06:36 Creating brick 9d235c1bb9a6737dfee12b8424673a0c
  502. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  503. [negroni] Completed 200 OK in 159.17µs
  504. [kubeexec] DEBUG 2018/09/03 09:06:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mkdir -p /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff
  505. Result:
  506. [kubeexec] DEBUG 2018/09/03 09:06:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mkdir -p /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd
  507. Result:
  508. [kubeexec] DEBUG 2018/09/03 09:06:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mkdir -p /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c
  509. Result:
  510. [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: lvcreate --poolmetadatasize 12288K -c 256K -L 2097152K -T vg_2453bf761ebcb88158a59d64f443d352/tp_f6904696fbd30686107ebc84c846d2ff -V 2097152K -n brick_f6904696fbd30686107ebc84c846d2ff
  511. Result:   Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  512.   Logical volume "brick_f6904696fbd30686107ebc84c846d2ff" created.
  513. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  514. [negroni] Completed 200 OK in 89.71µs
  515. [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: lvcreate --poolmetadatasize 12288K -c 256K -L 2097152K -T vg_ccfc2ad4653c9d38c8e619232307149a/tp_7cf5ffef6984e236ab8a43e2fa4836dd -V 2097152K -n brick_7cf5ffef6984e236ab8a43e2fa4836dd
  516. Result:   Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  517.   Logical volume "brick_7cf5ffef6984e236ab8a43e2fa4836dd" created.
  518. [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff
  519. Result: meta-data=/dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff isize=512    agcount=8, agsize=65520 blks
  520.          =                       sectsz=512   attr=2, projid32bit=1
  521.          =                       crc=1        finobt=0, sparse=0
  522. data     =                       bsize=4096   blocks=524160, imaxpct=25
  523.          =                       sunit=16     swidth=64 blks
  524. naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
  525. log      =internal log           bsize=4096   blocks=2560, version=2
  526.          =                       sectsz=512   sunit=16 blks, lazy-count=1
  527. realtime =none                   extsz=4096   blocks=0, rtextents=0
  528. [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: awk "BEGIN {print \"/dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
  529. Result:
  530. [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff
  531. Result:
  532. [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: lvcreate --poolmetadatasize 12288K -c 256K -L 2097152K -T vg_431729f9e9eaac554584ab8784472fb9/tp_9d235c1bb9a6737dfee12b8424673a0c -V 2097152K -n brick_9d235c1bb9a6737dfee12b8424673a0c
  533. Result:   Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  534.   Logical volume "brick_9d235c1bb9a6737dfee12b8424673a0c" created.
  535. [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mkdir /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff/brick
  536. Result:
  537. [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd
  538. Result: meta-data=/dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd isize=512    agcount=8, agsize=65520 blks
  539.          =                       sectsz=512   attr=2, projid32bit=1
  540.          =                       crc=1        finobt=0, sparse=0
  541. data     =                       bsize=4096   blocks=524160, imaxpct=25
  542.          =                       sunit=16     swidth=64 blks
  543. naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
  544. log      =internal log           bsize=4096   blocks=2560, version=2
  545.          =                       sectsz=512   sunit=16 blks, lazy-count=1
  546. realtime =none                   extsz=4096   blocks=0, rtextents=0
  547. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  548. [negroni] Completed 200 OK in 96.901µs
  549. [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: awk "BEGIN {print \"/dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
  550. Result:
  551. [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c
  552. Result: meta-data=/dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c isize=512    agcount=8, agsize=65520 blks
  553.          =                       sectsz=512   attr=2, projid32bit=1
  554.          =                       crc=1        finobt=0, sparse=0
  555. data     =                       bsize=4096   blocks=524160, imaxpct=25
  556.          =                       sunit=16     swidth=64 blks
  557. naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
  558. log      =internal log           bsize=4096   blocks=2560, version=2
  559.          =                       sectsz=512   sunit=16 blks, lazy-count=1
  560. realtime =none                   extsz=4096   blocks=0, rtextents=0
  561. [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: awk "BEGIN {print \"/dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
  562. Result:
  563. [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd
  564. Result:
  565. [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mkdir /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd/brick
  566. Result:
  567. [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c
  568. Result:
  569. [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mkdir /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c/brick
  570. Result:
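
[editor's note] The kubeexec entries above are heketi's per-brick setup on each node: format the thin LV as XFS, record the mount in heketi's own fstab, mount it, and create the directory the gluster brick will export. A minimal shell sketch of that sequence for the node3 brick follows; the device and mount paths are copied verbatim from the log, the mkdir -p of the mount point is an assumption (heketi creates it in an earlier step not shown in this excerpt), and the plain echo >> is simply an equivalent of the awk "BEGIN {print ... >> ...}" append heketi uses.

# format the brick LV with 512-byte inodes and an 8 KiB directory block size
mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c
# record the mount in heketi's private fstab (same line the awk command in the log appends)
echo "/dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c xfs rw,inode64,noatime,nouuid 1 2" >> /var/lib/heketi/fstab
# mount point assumed to exist already; heketi creates it before this excerpt
mkdir -p /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c
# mount the brick filesystem and create the subdirectory GlusterFS will serve from
mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c
mkdir /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c/brick
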
  571. [cmdexec] INFO 2018/09/03 09:06:39 Creating volume heketidbstorage replica 3
  572. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  573. [negroni] Completed 200 OK in 117.636µs
  574. [kubeexec] DEBUG 2018/09/03 09:06:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: gluster --mode=script volume create heketidbstorage replica 3 10.100.1.72:/var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff/brick 10.100.1.73:/var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd/brick 10.100.1.71:/var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c/brick
  575. Result: volume create: heketidbstorage: success: please start the volume to access data
  576. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  577. [negroni] Completed 200 OK in 201.863µs
  578. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  579. [negroni] Completed 200 OK in 74.687µs
  580. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  581. [negroni] Completed 200 OK in 75.684µs
  582. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  583. [negroni] Completed 200 OK in 73.942µs
  584. [kubeexec] DEBUG 2018/09/03 09:06:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: gluster --mode=script volume start heketidbstorage
  585. Result: volume start: heketidbstorage: success
  586. [asynchttp] INFO 2018/09/03 09:06:44 asynchttp.go:292: Completed job 1d0873543c5a840751fe16c2c8ee9ee6 in 17.256120406s
  587. [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
  588. [negroni] Completed 303 See Other in 170.799µs
  589. [negroni] Started GET /volumes/b7be4e9566b9b3e61523a239c68391e4
  590. [negroni] Completed 200 OK in 658.536µs
  591. [negroni] Started GET /backup/db
  592. [negroni] Completed 200 OK in 265.79µs
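
[editor's note] That completes the volume-create job (about 17 s end to end): heketi built a 3-way replica volume from one brick on each node and started it, after which the client read back the new volume (GET /volumes/...) and a backup of heketi's database (GET /backup/db). The two gluster commands run on node4 (pod glusterfs-7xwgg), copied from the log and only reflowed across lines for readability, were:

# create a 3-way replica volume from one brick per node
gluster --mode=script volume create heketidbstorage replica 3 \
    10.100.1.72:/var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff/brick \
    10.100.1.73:/var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd/brick \
    10.100.1.71:/var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c/brick
# a created volume serves no data until it is started
gluster --mode=script volume start heketidbstorage
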
  593. [heketi] INFO 2018/09/03 09:08:08 Starting Node Health Status refresh
  594. [cmdexec] INFO 2018/09/03 09:08:08 Check Glusterd service status in node node3
  595. [kubeexec] DEBUG 2018/09/03 09:08:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
  596. Result: ● glusterd.service - GlusterFS, a clustered file-system server
  597.    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  598.    Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 5min ago
  599.   Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  600.  Main PID: 97 (glusterd)
  601.    CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
  602.            ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  603.            ├─611 /usr/sbin/glusterfsd -s 10.100.1.71 --volfile-id heketidbstorage.10.100.1.71.var-lib-heketi-mounts-vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c-brick -p /var/run/gluster/vols/heketidbstorage/10.100.1.71-var-lib-heketi-mounts-vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c-brick.pid -S /var/run/gluster/99c704fce89e2904.socket --brick-name /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c-brick.log --xlator-option *-posix.glusterd-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name brick --brick-port 49153 --xlator-option heketidbstorage-server.listen-port=49153
  604.            └─634 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
  605.  
  606. Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
  607. Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
  608. [heketi] INFO 2018/09/03 09:08:08 Periodic health check status: node 1bf89fc30122cefb19a9e33fa5784a13 up=true
  609. [cmdexec] INFO 2018/09/03 09:08:08 Check Glusterd service status in node node5
  610. [kubeexec] DEBUG 2018/09/03 09:08:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: systemctl status glusterd
  611. Result: ● glusterd.service - GlusterFS, a clustered file-system server
  612.    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  613.    Active: active (running) since Mon 2018-09-03 09:02:59 UTC; 5min ago
  614.   Process: 95 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  615.  Main PID: 96 (glusterd)
  616.    CGroup: /kubepods/burstable/pod1f005e70-af58-11e8-8dd9-fa163ed47b72/0b779139259837e49074fdd9f56183df07c94e4752d38e73234c71f8e5550025/system.slice/glusterd.service
  617.            ├─ 96 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  618.            ├─493 /usr/sbin/glusterfsd -s 10.100.1.73 --volfile-id heketidbstorage.10.100.1.73.var-lib-heketi-mounts-vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd-brick -p /var/run/gluster/vols/heketidbstorage/10.100.1.73-var-lib-heketi-mounts-vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd-brick.pid -S /var/run/gluster/bda722f6ba713278.socket --brick-name /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd-brick.log --xlator-option *-posix.glusterd-uuid=9b521451-5ae2-44ea-ace9-73c51a98bc18 --process-name brick --brick-port 49153 --xlator-option heketidbstorage-server.listen-port=49153
  619.            └─516 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/c75540a109b4b1e0.socket --xlator-option *replicate*.node-uuid=9b521451-5ae2-44ea-ace9-73c51a98bc18 --process-name glustershd
  620.  
  621. Sep 03 09:02:56 node5 systemd[1]: Starting GlusterFS, a clustered file-system server...
  622. Sep 03 09:02:59 node5 systemd[1]: Started GlusterFS, a clustered file-system server.
  623. [heketi] INFO 2018/09/03 09:08:08 Periodic health check status: node de310d550bef8cd7c2414e6240da36e7 up=true
  624. [cmdexec] INFO 2018/09/03 09:08:08 Check Glusterd service status in node node4
  625. [kubeexec] DEBUG 2018/09/03 09:08:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: systemctl status glusterd
  626. Result: ● glusterd.service - GlusterFS, a clustered file-system server
  627.    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  628.    Active: active (running) since Mon 2018-09-03 09:03:17 UTC; 4min 51s ago
  629.   Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  630.  Main PID: 97 (glusterd)
  631.    CGroup: /kubepods/burstable/pod1f0dd694-af58-11e8-8dd9-fa163ed47b72/ba6810326497d28a8ceb04e636e84c3a08b9878bb6c3bddd7959451da3e511ee/system.slice/glusterd.service
  632.            ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  633.            ├─485 /usr/sbin/glusterfsd -s 10.100.1.72 --volfile-id heketidbstorage.10.100.1.72.var-lib-heketi-mounts-vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff-brick -p /var/run/gluster/vols/heketidbstorage/10.100.1.72-var-lib-heketi-mounts-vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff-brick.pid -S /var/run/gluster/12b5528ed96b7d64.socket --brick-name /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff-brick.log --xlator-option *-posix.glusterd-uuid=1934d29c-d5f1-4ecb-bc60-9f14d978487e --process-name brick --brick-port 49153 --xlator-option heketidbstorage-server.listen-port=49153
  634.            └─508 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/71e62a0db612eacb.socket --xlator-option *replicate*.node-uuid=1934d29c-d5f1-4ecb-bc60-9f14d978487e --process-name glustershd
  635.  
  636. Sep 03 09:03:09 node4 systemd[1]: Starting GlusterFS, a clustered file-system server...
  637. Sep 03 09:03:17 node4 systemd[1]: Started GlusterFS, a clustered file-system server.
  638. [heketi] INFO 2018/09/03 09:08:08 Periodic health check status: node fce79176083462c39bafb000a899fff3 up=true
  639. [heketi] INFO 2018/09/03 09:08:08 Cleaned 0 nodes from health cache
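
[editor's note] Everything from 09:08:08 onward is heketi's Node Health Cache Monitor: on a two-minute cadence in this log (09:08:08, then 09:10:08) it runs systemctl status glusterd on each node through the kube executor and records up=true or up=false per node ID; since all nodes stay healthy, "Cleaned 0 nodes from health cache" is the expected steady state. The same probe can be repeated by hand from the workstation (pod name taken from the log; add -n <namespace> if the glusterfs pods are not in your current namespace):

# repeat heketi's health probe against the node3 pod
kubectl exec glusterfs-gddwq -- systemctl status glusterd
# additionally confirm the new volume's bricks and self-heal daemons are online
kubectl exec glusterfs-gddwq -- gluster volume status heketidbstorage
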
  640. [heketi] INFO 2018/09/03 09:10:08 Starting Node Health Status refresh
  641. [cmdexec] INFO 2018/09/03 09:10:08 Check Glusterd service status in node node3
  642. [kubeexec] DEBUG 2018/09/03 09:10:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
  643. Result: ● glusterd.service - GlusterFS, a clustered file-system server
  644.    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  645.    Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 7min ago
  646.   Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  647.  Main PID: 97 (glusterd)
  648.    CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
  649.            ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  650.            ├─611 /usr/sbin/glusterfsd -s 10.100.1.71 --volfile-id heketidbstorage.10.100.1.71.var-lib-heketi-mounts-vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c-brick -p /var/run/gluster/vols/heketidbstorage/10.100.1.71-var-lib-heketi-mounts-vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c-brick.pid -S /var/run/gluster/99c704fce89e2904.socket --brick-name /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c-brick.log --xlator-option *-posix.glusterd-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name brick --brick-port 49153 --xlator-option heketidbstorage-server.listen-port=49153
  651.            └─634 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
  652.  
  653. Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
  654. Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
  655. [heketi] INFO 2018/09/03 09:10:08 Periodic health check status: node 1bf89fc30122cefb19a9e33fa5784a13 up=true
  656. [cmdexec] INFO 2018/09/03 09:10:08 Check Glusterd service status in node node5
  657. [kubeexec] DEBUG 2018/09/03 09:10:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: systemctl status glusterd
  658. Result: ● glusterd.service - GlusterFS, a clustered file-system server
  659.    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  660.    Active: active (running) since Mon 2018-09-03 09:02:59 UTC; 7min ago
  661.   Process: 95 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  662.  Main PID: 96 (glusterd)
  663.    CGroup: /kubepods/burstable/pod1f005e70-af58-11e8-8dd9-fa163ed47b72/0b779139259837e49074fdd9f56183df07c94e4752d38e73234c71f8e5550025/system.slice/glusterd.service
  664.            ├─ 96 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  665.            ├─493 /usr/sbin/glusterfsd -s 10.100.1.73 --volfile-id heketidbstorage.10.100.1.73.var-lib-heketi-mounts-vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd-brick -p /var/run/gluster/vols/heketidbstorage/10.100.1.73-var-lib-heketi-mounts-vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd-brick.pid -S /var/run/gluster/bda722f6ba713278.socket --brick-name /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd-brick.log --xlator-option *-posix.glusterd-uuid=9b521451-5ae2-44ea-ace9-73c51a98bc18 --process-name brick --brick-port 49153 --xlator-option heketidbstorage-server.listen-port=49153
  666.            └─516 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/c75540a109b4b1e0.socket --xlator-option *replicate*.node-uuid=9b521451-5ae2-44ea-ace9-73c51a98bc18 --process-name glustershd
  667.  
  668. Sep 03 09:02:56 node5 systemd[1]: Starting GlusterFS, a clustered file-system server...
  669. Sep 03 09:02:59 node5 systemd[1]: Started GlusterFS, a clustered file-system server.
  670. [heketi] INFO 2018/09/03 09:10:08 Periodic health check status: node de310d550bef8cd7c2414e6240da36e7 up=true
  671. [cmdexec] INFO 2018/09/03 09:10:08 Check Glusterd service status in node node4
  672. [kubeexec] DEBUG 2018/09/03 09:10:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: systemctl status glusterd
  673. Result: ● glusterd.service - GlusterFS, a clustered file-system server
  674.    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
  675.    Active: active (running) since Mon 2018-09-03 09:03:17 UTC; 6min ago
  676.   Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  677.  Main PID: 97 (glusterd)
  678.    CGroup: /kubepods/burstable/pod1f0dd694-af58-11e8-8dd9-fa163ed47b72/ba6810326497d28a8ceb04e636e84c3a08b9878bb6c3bddd7959451da3e511ee/system.slice/glusterd.service
  679.            ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  680.            ├─485 /usr/sbin/glusterfsd -s 10.100.1.72 --volfile-id heketidbstorage.10.100.1.72.var-lib-heketi-mounts-vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff-brick -p /var/run/gluster/vols/heketidbstorage/10.100.1.72-var-lib-heketi-mounts-vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff-brick.pid -S /var/run/gluster/12b5528ed96b7d64.socket --brick-name /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff-brick.log --xlator-option *-posix.glusterd-uuid=1934d29c-d5f1-4ecb-bc60-9f14d978487e --process-name brick --brick-port 49153 --xlator-option heketidbstorage-server.listen-port=49153
  681.            └─508 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/71e62a0db612eacb.socket --xlator-option *replicate*.node-uuid=1934d29c-d5f1-4ecb-bc60-9f14d978487e --process-name glustershd
  682.  
  683. Sep 03 09:03:09 node4 systemd[1]: Starting GlusterFS, a clustered file-system server...
  684. Sep 03 09:03:17 node4 systemd[1]: Started GlusterFS, a clustered file-system server.
  685. [heketi] INFO 2018/09/03 09:10:08 Periodic health check status: node fce79176083462c39bafb000a899fff3 up=true
  686. [heketi] INFO 2018/09/03 09:10:08 Cleaned 0 nodes from health cache
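
[editor's note] The paste ends here with all three nodes reporting up=true and heketidbstorage created and started. A quick way to confirm that end state (pod, volume, and node names copied from the log; this assumes heketi-cli is available inside the deploy-heketi pod, as in the stock heketi image, and that you substitute your real admin key for the placeholder):

# the volume should report Status: Started with a 1 x 3 = 3 brick layout across 10.100.1.71-73
kubectl exec glusterfs-7xwgg -- gluster volume info heketidbstorage
# heketi's own view: one cluster, one volume (id b7be4e9566b9b3e61523a239c68391e4)
kubectl exec deploy-heketi-859478d448-x2zn2 -- heketi-cli -s http://localhost:8080 --user admin --secret <admin-key> volume list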