- kubectl logs deploy-heketi-859478d448-x2zn2
- Setting up heketi database
- No database file found
- stat: cannot stat '/var/lib/heketi/heketi.db': No such file or directory
- Heketi v7.0.0-134-g29a2af8
- [heketi] INFO 2018/09/03 09:03:51 Loaded kubernetes executor
- [heketi] INFO 2018/09/03 09:03:52 GlusterFS Application Loaded
- [heketi] INFO 2018/09/03 09:03:52 Started Node Health Cache Monitor
- Authorization loaded
- Listening on port 8080
- [admin@node1 ~]$ kubectl logs deploy-heketi-859478d448-x2zn2 -f
- Setting up heketi database
- No database file found
- stat: cannot stat '/var/lib/heketi/heketi.db': No such file or directory
- Heketi v7.0.0-134-g29a2af8
- [heketi] INFO 2018/09/03 09:03:51 Loaded kubernetes executor
- [heketi] INFO 2018/09/03 09:03:52 GlusterFS Application Loaded
- [heketi] INFO 2018/09/03 09:03:52 Started Node Health Cache Monitor
- Authorization loaded
- Listening on port 8080
- [negroni] Started GET /clusters
- [negroni] Completed 200 OK in 202.121µs
- [negroni] Started POST /clusters
- [negroni] Completed 201 Created in 145.962628ms
- [negroni] Started POST /nodes
- [heketi] INFO 2018/09/03 09:04:02 Starting Node Health Status refresh
- [heketi] INFO 2018/09/03 09:04:02 Cleaned 0 nodes from health cache
- [cmdexec] INFO 2018/09/03 09:04:02 Check Glusterd service status in node node3
- [kubeexec] DEBUG 2018/09/03 09:04:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
- Result: ● glusterd.service - GlusterFS, a clustered file-system server
- Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
- Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 54s ago
- Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
- Main PID: 97 (glusterd)
- CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
- ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
- └─140 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
- Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
- Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
- [heketi] INFO 2018/09/03 09:04:02 Adding node node3
- [negroni] Completed 202 Accepted in 214.616099ms
- [asynchttp] INFO 2018/09/03 09:04:02 asynchttp.go:288: Started job aba9fd463a0c410e34d737890182c3c4
- [negroni] Started GET /queue/aba9fd463a0c410e34d737890182c3c4
- [negroni] Completed 200 OK in 134.622µs
- [heketi] INFO 2018/09/03 09:04:02 Added node 543c97a4c6b1195bb558f1923a21a0fd
- [asynchttp] INFO 2018/09/03 09:04:02 asynchttp.go:292: Completed job aba9fd463a0c410e34d737890182c3c4 in 19.556498ms
- [negroni] Started GET /queue/aba9fd463a0c410e34d737890182c3c4
- [negroni] Completed 303 See Other in 103.977µs
- [negroni] Started GET /nodes/543c97a4c6b1195bb558f1923a21a0fd
- [negroni] Completed 200 OK in 727.87µs
- [negroni] Started POST /devices
- [heketi] INFO 2018/09/03 09:04:02 Adding device /dev/loop0 to node 543c97a4c6b1195bb558f1923a21a0fd
- [negroni] Completed 202 Accepted in 30.128347ms
- [asynchttp] INFO 2018/09/03 09:04:02 asynchttp.go:288: Started job e3f33e264af22214f9ae3e9adcf81810
- [negroni] Started GET /queue/e3f33e264af22214f9ae3e9adcf81810
- [negroni] Completed 200 OK in 93.804µs
- [kubeexec] ERROR 2018/09/03 09:04:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [pvcreate --metadatasize=128M --dataalignment=256K '/dev/loop0'] on glusterfs-gddwq: Err[command terminated with exit code 5]: Stdout []: Stderr [WARNING: dos signature detected on /dev/loop0 at offset 510. Wipe it? [y/n]: [n]
- Aborted wiping of dos.
- 1 existing signature left on the device.
- ]
- [asynchttp] INFO 2018/09/03 09:04:03 asynchttp.go:292: Completed job e3f33e264af22214f9ae3e9adcf81810 in 920.882873ms
- [negroni] Started GET /queue/e3f33e264af22214f9ae3e9adcf81810
- [negroni] Completed 500 Internal Server Error in 110.699µs
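Note: the device add fails here because pvcreate exits with code 5 after detecting a leftover DOS (MBR) signature on /dev/loop0; heketi runs the command non-interactively, so the "Wipe it? [y/n]" prompt falls back to the default "n" and the operation aborts. The same failure repeats below for node4 (pod glusterfs-7xwgg) and node5 (pod glusterfs-l9ts8). A minimal cleanup sketch, run directly on each affected node, mirrors the ssh sessions shown further down (device path assumed to be /dev/loop0 on every node):

    # list any filesystem/partition-table signatures still present on the device
    sudo wipefs /dev/loop0
    # erase them; -f forces the erase even if the device is considered in use
    sudo wipefs -a -f /dev/loop0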
- [negroni] Started POST /nodes
- [cmdexec] INFO 2018/09/03 09:04:05 Check Glusterd service status in node node3
- [kubeexec] DEBUG 2018/09/03 09:04:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
- Result: ● glusterd.service - GlusterFS, a clustered file-system server
- Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
- Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 57s ago
- Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
- Main PID: 97 (glusterd)
- CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
- ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
- └─140 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
- Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
- Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
- [heketi] INFO 2018/09/03 09:04:06 Adding node node4
- [negroni] Completed 202 Accepted in 2.302880639s
- [asynchttp] INFO 2018/09/03 09:04:06 asynchttp.go:288: Started job d7f05424b021b4a5530f49ffa5e85168
- [cmdexec] INFO 2018/09/03 09:04:06 Probing: node3 -> 10.100.1.72
- [negroni] Started GET /queue/d7f05424b021b4a5530f49ffa5e85168
- [negroni] Completed 200 OK in 132.949µs
- [kubeexec] DEBUG 2018/09/03 09:04:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: gluster peer probe 10.100.1.72
- Result: peer probe: success. Host 10.100.1.72 port 24007 already in peer list
- [cmdexec] INFO 2018/09/03 09:04:06 Setting snapshot limit
- [negroni] Started GET /queue/d7f05424b021b4a5530f49ffa5e85168
- [negroni] Completed 200 OK in 95.014µs
- [negroni] Started GET /queue/d7f05424b021b4a5530f49ffa5e85168
- [negroni] Completed 200 OK in 94.128µs
- [kubeexec] DEBUG 2018/09/03 09:04:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: gluster --mode=script snapshot config snap-max-hard-limit 14
- Result: snapshot config: snap-max-hard-limit for System set successfully
- [heketi] INFO 2018/09/03 09:04:06 Added node 780e6b35c81597c1ca04079742e377a1
- [asynchttp] INFO 2018/09/03 09:04:06 asynchttp.go:292: Completed job d7f05424b021b4a5530f49ffa5e85168 in 618.785605ms
- [negroni] Started GET /queue/d7f05424b021b4a5530f49ffa5e85168
- [negroni] Completed 303 See Other in 135.292µs
- [negroni] Started GET /nodes/780e6b35c81597c1ca04079742e377a1
- [negroni] Completed 200 OK in 230.882µs
- [negroni] Started POST /devices
- [heketi] INFO 2018/09/03 09:04:06 Adding device /dev/loop0 to node 780e6b35c81597c1ca04079742e377a1
- [negroni] Completed 202 Accepted in 60.42611ms
- [asynchttp] INFO 2018/09/03 09:04:06 asynchttp.go:288: Started job 41d2b8bb7b2add4aef526dd9719caf93
- [negroni] Started GET /queue/41d2b8bb7b2add4aef526dd9719caf93
- [negroni] Completed 200 OK in 112.168µs
- [kubeexec] ERROR 2018/09/03 09:04:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [pvcreate --metadatasize=128M --dataalignment=256K '/dev/loop0'] on glusterfs-7xwgg: Err[command terminated with exit code 5]: Stdout []: Stderr [WARNING: dos signature detected on /dev/loop0 at offset 510. Wipe it? [y/n]: [n]
- Aborted wiping of dos.
- 1 existing signature left on the device.
- ]
- [asynchttp] INFO 2018/09/03 09:04:07 asynchttp.go:292: Completed job 41d2b8bb7b2add4aef526dd9719caf93 in 216.302454ms
- [negroni] Started GET /queue/41d2b8bb7b2add4aef526dd9719caf93
- [negroni] Completed 500 Internal Server Error in 167.16µs
- [negroni] Started POST /nodes
- [cmdexec] INFO 2018/09/03 09:04:08 Check Glusterd service status in node node3
- [kubeexec] DEBUG 2018/09/03 09:04:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
- Result: ● glusterd.service - GlusterFS, a clustered file-system server
- Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
- Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 59s ago
- Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
- Main PID: 97 (glusterd)
- CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
- ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
- └─140 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
- Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
- Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
- [heketi] INFO 2018/09/03 09:04:08 Adding node node5
- [negroni] Completed 202 Accepted in 256.002855ms
- [asynchttp] INFO 2018/09/03 09:04:08 asynchttp.go:288: Started job 71c6cd689f7aaba8eea05173b8b94ca9
- [cmdexec] INFO 2018/09/03 09:04:08 Probing: node3 -> 10.100.1.73
- [negroni] Started GET /queue/71c6cd689f7aaba8eea05173b8b94ca9
- [negroni] Completed 200 OK in 85.189µs
- [kubeexec] DEBUG 2018/09/03 09:04:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: gluster peer probe 10.100.1.73
- Result: peer probe: success. Host 10.100.1.73 port 24007 already in peer list
- [cmdexec] INFO 2018/09/03 09:04:08 Setting snapshot limit
- [negroni] Started GET /queue/71c6cd689f7aaba8eea05173b8b94ca9
- [negroni] Completed 200 OK in 155.395µs
- [negroni] Started GET /queue/71c6cd689f7aaba8eea05173b8b94ca9
- [negroni] Completed 200 OK in 150.742µs
- [kubeexec] DEBUG 2018/09/03 09:04:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: node3 Pod: glusterfs-gddwq Command: gluster --mode=script snapshot config snap-max-hard-limit 14
- Result: snapshot config: snap-max-hard-limit for System set successfully
- [heketi] INFO 2018/09/03 09:04:08 Added node b7b237ed28e9ad394aed97804ec6ff31
- [asynchttp] INFO 2018/09/03 09:04:08 asynchttp.go:292: Completed job 71c6cd689f7aaba8eea05173b8b94ca9 in 732.533265ms
- [negroni] Started GET /queue/71c6cd689f7aaba8eea05173b8b94ca9
- [negroni] Completed 303 See Other in 209.495µs
- [negroni] Started GET /nodes/b7b237ed28e9ad394aed97804ec6ff31
- [negroni] Completed 200 OK in 203.696µs
- [negroni] Started POST /devices
- [heketi] INFO 2018/09/03 09:04:08 Adding device /dev/loop0 to node b7b237ed28e9ad394aed97804ec6ff31
- [negroni] Completed 202 Accepted in 19.431872ms
- [asynchttp] INFO 2018/09/03 09:04:08 asynchttp.go:288: Started job 8391802b2e3da5e6c21e813d86f783ad
- [negroni] Started GET /queue/8391802b2e3da5e6c21e813d86f783ad
- [negroni] Completed 200 OK in 105.252µs
- [kubeexec] ERROR 2018/09/03 09:04:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [pvcreate --metadatasize=128M --dataalignment=256K '/dev/loop0'] on glusterfs-l9ts8: Err[command terminated with exit code 5]: Stdout []: Stderr [WARNING: dos signature detected on /dev/loop0 at offset 510. Wipe it? [y/n]: [n]
- Aborted wiping of dos.
- 1 existing signature left on the device.
- ]
- [asynchttp] INFO 2018/09/03 09:04:09 asynchttp.go:292: Completed job 8391802b2e3da5e6c21e813d86f783ad in 266.433068ms
- [negroni] Started GET /queue/8391802b2e3da5e6c21e813d86f783ad
- [negroni] Completed 500 Internal Server Error in 114.399µs
- [negroni] Started GET /clusters/1969c8b9d869d79463b62fdf17b83f91
- [negroni] Completed 200 OK in 212.1µs
- ^C[admin@node1 ~]$ ssh node3
- Last login: Mon Sep 3 08:55:48 2018 from node1
- [admin@node3 ~]$ sudo wipefs -a /dev/loop0
- /dev/loop0: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
- /dev/loop0: calling ioclt to re-read partition table: Invalid argument
- [admin@node3 ~]$ sudo wipefs -a /dev/loop0 -f
- [admin@node3 ~]$ exit
- logout
- Connection to node3 closed.
- [admin@node1 ~]$ ssh node4
- Last login: Mon Sep 3 09:01:35 2018 from node1
- [admin@node4 ~]$ sudo wipefs -a /dev/loop0 -f
- /dev/loop0: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
- /dev/loop0: calling ioclt to re-read partition table: Invalid argument
- [admin@node4 ~]$ exit
- logout
- Connection to node4 closed.
- [admin@node1 ~]$ ssh node5
- Last login: Mon Sep 3 08:59:47 2018 from node1
- [admin@node5 ~]$ sudo wipefs -a /dev/loop0
- /dev/loop0: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
- /dev/loop0: calling ioclt to re-read partition table: Invalid argument
- [admin@node5 ~]$ sudo wipefs -a /dev/loop0 -f
- [admin@node5 ~]$ exit
- logout
- Connection to node5 closed.
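Note: on node3 and node5 the first wipefs -a pass erased the two-byte DOS signature at offset 0x1fe ("55 aa"); the "calling ioclt to re-read partition table: Invalid argument" message is the partition-table re-read ioctl failing, which is expected on a loop device and can be ignored. The second run with -f produced no output, which suggests nothing was left to wipe; on node4, -f was simply passed up front. Before retrying the deployment, a quick per-node check might look like this (device path assumed):

    # no output here means no signatures remain on the device
    sudo wipefs /dev/loop0
    sudo blkid /dev/loop0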
- [admin@node1 ~]$ kubectl logs deploy-heketi-859478d448-kpl9z -p
- Error from server (BadRequest): previous terminated container "deploy-heketi" in pod "deploy-heketi-859478d448-kpl9z" not found
- [admin@node1 ~]$ kubectl logs deploy-heketi-859478d448-kpl9z -p
- Error from server (BadRequest): previous terminated container "deploy-heketi" in pod "deploy-heketi-859478d448-kpl9z" not found
- [admin@node1 ~]$ kubectl logs deploy-heketi-859478d448-kpl9z -f
- Setting up heketi database
- No database file found
- stat: cannot stat '/var/lib/heketi/heketi.db': No such file or directory
- Heketi v6.0.0-196-gf31ad28
- [heketi] INFO 2018/09/03 09:06:08 Loaded kubernetes executor
- [heketi] INFO 2018/09/03 09:06:08 GlusterFS Application Loaded
- [heketi] INFO 2018/09/03 09:06:08 Started Node Health Cache Monitor
- Authorization loaded
- Listening on port 8080
- [heketi] INFO 2018/09/03 09:06:18 Starting Node Health Status refresh
- [heketi] INFO 2018/09/03 09:06:18 Cleaned 0 nodes from health cache
- [negroni] Started GET /clusters
- [negroni] Completed 200 OK in 250.115µs
- [negroni] Started POST /clusters
- [negroni] Completed 201 Created in 27.062018ms
- [negroni] Started POST /nodes
- [cmdexec] INFO 2018/09/03 09:06:19 Check Glusterd service status in node node3
- [kubeexec] DEBUG 2018/09/03 09:06:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
- Result: ● glusterd.service - GlusterFS, a clustered file-system server
- Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
- Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 3min 11s ago
- Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
- Main PID: 97 (glusterd)
- CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
- ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
- └─140 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
- Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
- Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
- [heketi] INFO 2018/09/03 09:06:19 Adding node node3
- [negroni] Completed 202 Accepted in 140.877493ms
- [asynchttp] INFO 2018/09/03 09:06:19 asynchttp.go:288: Started job 04a66632b549c07a0b2780c9fb8b6ede
- [negroni] Started GET /queue/04a66632b549c07a0b2780c9fb8b6ede
- [negroni] Completed 200 OK in 65.785µs
- [heketi] INFO 2018/09/03 09:06:19 Added node 1bf89fc30122cefb19a9e33fa5784a13
- [asynchttp] INFO 2018/09/03 09:06:19 asynchttp.go:292: Completed job 04a66632b549c07a0b2780c9fb8b6ede in 24.944584ms
- [negroni] Started GET /queue/04a66632b549c07a0b2780c9fb8b6ede
- [negroni] Completed 303 See Other in 126.333µs
- [negroni] Started GET /nodes/1bf89fc30122cefb19a9e33fa5784a13
- [negroni] Completed 200 OK in 368.155µs
- [negroni] Started POST /devices
- [heketi] INFO 2018/09/03 09:06:20 Adding device /dev/loop0 to node 1bf89fc30122cefb19a9e33fa5784a13
- [negroni] Completed 202 Accepted in 22.724293ms
- [asynchttp] INFO 2018/09/03 09:06:20 asynchttp.go:288: Started job 675ad388472eeee85bfca1b5a2ceb18c
- [negroni] Started GET /queue/675ad388472eeee85bfca1b5a2ceb18c
- [negroni] Completed 200 OK in 92.646µs
- [kubeexec] DEBUG 2018/09/03 09:06:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/loop0'
- Result: Physical volume "/dev/loop0" successfully created.
- [kubeexec] DEBUG 2018/09/03 09:06:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: vgcreate vg_431729f9e9eaac554584ab8784472fb9 /dev/loop0
- Result: Volume group "vg_431729f9e9eaac554584ab8784472fb9" successfully created
- [kubeexec] DEBUG 2018/09/03 09:06:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: vgdisplay -c vg_431729f9e9eaac554584ab8784472fb9
- Result: vg_431729f9e9eaac554584ab8784472fb9:r/w:772:-1:0:0:0:-1:0:1:1:51679232:4096:12617:0:12617:FX3Rrq-nzgq-Fdq2-n6dO-BRje-hjxu-GYOEjv
- [cmdexec] DEBUG 2018/09/03 09:06:20 /src/github.com/heketi/heketi/executors/cmdexec/device.go:143: Size of /dev/loop0 in node3 is 51679232
- [negroni] Started GET /queue/675ad388472eeee85bfca1b5a2ceb18c
- [negroni] Completed 200 OK in 95.567µs
- [negroni] Started GET /queue/675ad388472eeee85bfca1b5a2ceb18c
- [negroni] Completed 200 OK in 93.496µs
- [heketi] INFO 2018/09/03 09:06:22 Added device /dev/loop0
- [asynchttp] INFO 2018/09/03 09:06:22 asynchttp.go:292: Completed job 675ad388472eeee85bfca1b5a2ceb18c in 2.556913942s
- [negroni] Started GET /queue/675ad388472eeee85bfca1b5a2ceb18c
- [negroni] Completed 204 No Content in 88.95µs
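Note: with the stale signature gone, the device add on node3 now completes. For each device heketi runs pvcreate, wraps the device in a dedicated volume group (vg_431729f9e9eaac554584ab8784472fb9 here) and reads its size back with vgdisplay -c. A quick way to inspect the result from the admin host could be the following (pod name taken from the log above; add -n <namespace> if the GlusterFS pods do not run in the default namespace):

    kubectl exec glusterfs-gddwq -- pvs
    kubectl exec glusterfs-gddwq -- vgs vg_431729f9e9eaac554584ab8784472fb9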
- [negroni] Started POST /nodes
- [cmdexec] INFO 2018/09/03 09:06:23 Check Glusterd service status in node node3
- [kubeexec] DEBUG 2018/09/03 09:06:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
- Result: ● glusterd.service - GlusterFS, a clustered file-system server
- Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
- Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 3min 14s ago
- Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
- Main PID: 97 (glusterd)
- CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
- ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
- └─140 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
- Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
- Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
- [heketi] INFO 2018/09/03 09:06:23 Adding node node4
- [negroni] Completed 202 Accepted in 159.338556ms
- [asynchttp] INFO 2018/09/03 09:06:23 asynchttp.go:288: Started job 84c24f7109a3e7153f7e5a00aba6770f
- [cmdexec] INFO 2018/09/03 09:06:23 Probing: node3 -> 10.100.1.72
- [negroni] Started GET /queue/84c24f7109a3e7153f7e5a00aba6770f
- [negroni] Completed 200 OK in 73.185µs
- [kubeexec] DEBUG 2018/09/03 09:06:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: gluster peer probe 10.100.1.72
- Result: peer probe: success. Host 10.100.1.72 port 24007 already in peer list
- [cmdexec] INFO 2018/09/03 09:06:23 Setting snapshot limit
- [negroni] Started GET /queue/84c24f7109a3e7153f7e5a00aba6770f
- [negroni] Completed 200 OK in 126.687µs
- [kubeexec] DEBUG 2018/09/03 09:06:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: gluster --mode=script snapshot config snap-max-hard-limit 14
- Result: snapshot config: snap-max-hard-limit for System set successfully
- [heketi] INFO 2018/09/03 09:06:23 Added node fce79176083462c39bafb000a899fff3
- [asynchttp] INFO 2018/09/03 09:06:23 asynchttp.go:292: Completed job 84c24f7109a3e7153f7e5a00aba6770f in 479.338771ms
- [negroni] Started GET /queue/84c24f7109a3e7153f7e5a00aba6770f
- [negroni] Completed 303 See Other in 103.368µs
- [negroni] Started GET /nodes/fce79176083462c39bafb000a899fff3
- [negroni] Completed 200 OK in 253.033µs
- [negroni] Started POST /devices
- [heketi] INFO 2018/09/03 09:06:23 Adding device /dev/loop0 to node fce79176083462c39bafb000a899fff3
- [negroni] Completed 202 Accepted in 85.557342ms
- [asynchttp] INFO 2018/09/03 09:06:23 asynchttp.go:288: Started job 96b8fabbc696f029fbc18587ae4143b8
- [negroni] Started GET /queue/96b8fabbc696f029fbc18587ae4143b8
- [negroni] Completed 200 OK in 76.257µs
- [kubeexec] DEBUG 2018/09/03 09:06:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/loop0'
- Result: Physical volume "/dev/loop0" successfully created.
- [kubeexec] DEBUG 2018/09/03 09:06:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: vgcreate vg_2453bf761ebcb88158a59d64f443d352 /dev/loop0
- Result: Volume group "vg_2453bf761ebcb88158a59d64f443d352" successfully created
- [kubeexec] DEBUG 2018/09/03 09:06:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: vgdisplay -c vg_2453bf761ebcb88158a59d64f443d352
- Result: vg_2453bf761ebcb88158a59d64f443d352:r/w:772:-1:0:0:0:-1:0:1:1:26079232:4096:6367:0:6367:iDlxks-fFAW-YCSP-tiyK-ov04-Hk0U-WW3zEk
- [cmdexec] DEBUG 2018/09/03 09:06:24 /src/github.com/heketi/heketi/executors/cmdexec/device.go:143: Size of /dev/loop0 in node4 is 26079232
- [heketi] INFO 2018/09/03 09:06:24 Added device /dev/loop0
- [asynchttp] INFO 2018/09/03 09:06:24 asynchttp.go:292: Completed job 96b8fabbc696f029fbc18587ae4143b8 in 671.994654ms
- [negroni] Started GET /queue/96b8fabbc696f029fbc18587ae4143b8
- [negroni] Completed 204 No Content in 87.54µs
- [negroni] Started POST /nodes
- [cmdexec] INFO 2018/09/03 09:06:25 Check Glusterd service status in node node3
- [kubeexec] DEBUG 2018/09/03 09:06:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
- Result: ● glusterd.service - GlusterFS, a clustered file-system server
- Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
- Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 3min 16s ago
- Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
- Main PID: 97 (glusterd)
- CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
- ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
- └─140 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
- Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
- Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
- [heketi] INFO 2018/09/03 09:06:25 Adding node node5
- [negroni] Completed 202 Accepted in 300.057044ms
- [asynchttp] INFO 2018/09/03 09:06:25 asynchttp.go:288: Started job 545caff1ebbb1718c9ddc948a39c1d1e
- [cmdexec] INFO 2018/09/03 09:06:25 Probing: node3 -> 10.100.1.73
- [negroni] Started GET /queue/545caff1ebbb1718c9ddc948a39c1d1e
- [negroni] Completed 200 OK in 63µs
- [kubeexec] DEBUG 2018/09/03 09:06:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: gluster peer probe 10.100.1.73
- Result: peer probe: success. Host 10.100.1.73 port 24007 already in peer list
- [cmdexec] INFO 2018/09/03 09:06:25 Setting snapshot limit
- [negroni] Started GET /queue/545caff1ebbb1718c9ddc948a39c1d1e
- [negroni] Completed 200 OK in 102.452µs
- [kubeexec] DEBUG 2018/09/03 09:06:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: gluster --mode=script snapshot config snap-max-hard-limit 14
- Result: snapshot config: snap-max-hard-limit for System set successfully
- [heketi] INFO 2018/09/03 09:06:25 Added node de310d550bef8cd7c2414e6240da36e7
- [asynchttp] INFO 2018/09/03 09:06:25 asynchttp.go:292: Completed job 545caff1ebbb1718c9ddc948a39c1d1e in 485.669526ms
- [negroni] Started GET /queue/545caff1ebbb1718c9ddc948a39c1d1e
- [negroni] Completed 303 See Other in 117.883µs
- [negroni] Started GET /nodes/de310d550bef8cd7c2414e6240da36e7
- [negroni] Completed 200 OK in 160.919µs
- [negroni] Started POST /devices
- [heketi] INFO 2018/09/03 09:06:25 Adding device /dev/loop0 to node de310d550bef8cd7c2414e6240da36e7
- [negroni] Completed 202 Accepted in 32.306024ms
- [asynchttp] INFO 2018/09/03 09:06:25 asynchttp.go:288: Started job d7a50cf53a4cea6d939fb08223cfca4b
- [negroni] Started GET /queue/d7a50cf53a4cea6d939fb08223cfca4b
- [negroni] Completed 200 OK in 64.576µs
- [kubeexec] DEBUG 2018/09/03 09:06:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/loop0'
- Result: Physical volume "/dev/loop0" successfully created.
- [kubeexec] DEBUG 2018/09/03 09:06:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: vgcreate vg_ccfc2ad4653c9d38c8e619232307149a /dev/loop0
- Result: Volume group "vg_ccfc2ad4653c9d38c8e619232307149a" successfully created
- [kubeexec] DEBUG 2018/09/03 09:06:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: vgdisplay -c vg_ccfc2ad4653c9d38c8e619232307149a
- Result: vg_ccfc2ad4653c9d38c8e619232307149a:r/w:772:-1:0:0:0:-1:0:1:1:51679232:4096:12617:0:12617:hLkAzQ-wl2X-B2x7-ii9U-wDqd-ca8B-RoOENc
- [cmdexec] DEBUG 2018/09/03 09:06:26 /src/github.com/heketi/heketi/executors/cmdexec/device.go:143: Size of /dev/loop0 in node5 is 51679232
- [heketi] INFO 2018/09/03 09:06:26 Added device /dev/loop0
- [asynchttp] INFO 2018/09/03 09:06:26 asynchttp.go:292: Completed job d7a50cf53a4cea6d939fb08223cfca4b in 588.440999ms
- [negroni] Started GET /queue/d7a50cf53a4cea6d939fb08223cfca4b
- [negroni] Completed 204 No Content in 121.522µs
- [negroni] Started GET /clusters/22bdff84184abc4512c81188e26f973b
- [negroni] Completed 200 OK in 247.157µs
- [negroni] Started GET /clusters
- [negroni] Completed 200 OK in 121.319µs
- [negroni] Started GET /clusters/22bdff84184abc4512c81188e26f973b
- [negroni] Completed 200 OK in 152.58µs
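Note: all three nodes and their /dev/loop0 devices are now registered in cluster 22bdff84184abc4512c81188e26f973b, so the deployer moves on to creating the bootstrap heketidbstorage volume. A possible way to double-check the topology at this point, assuming heketi-cli is available inside the deploy pod and the admin key has been exported into $ADMIN_KEY (both assumptions, not shown in this transcript):

    kubectl exec -it deploy-heketi-859478d448-kpl9z -- \
      heketi-cli --server http://localhost:8080 --user admin --secret "$ADMIN_KEY" topology info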
- [negroni] Started POST /volumes
- [heketi] INFO 2018/09/03 09:06:27 Allocating brick set #0
- [negroni] Completed 202 Accepted in 53.137917ms
- [asynchttp] INFO 2018/09/03 09:06:27 asynchttp.go:288: Started job 1d0873543c5a840751fe16c2c8ee9ee6
- [heketi] INFO 2018/09/03 09:06:27 Started async operation: Create Volume
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 69.357µs
- [heketi] INFO 2018/09/03 09:06:27 Creating brick f98c2763743d3c9647a112338fa2abb7
- [heketi] INFO 2018/09/03 09:06:27 Creating brick ff00dfa4c19eafa51a18c034c3adc381
- [heketi] INFO 2018/09/03 09:06:27 Creating brick 0ad04a0d0f8c7015c818f1a10c7a1454
- [kubeexec] DEBUG 2018/09/03 09:06:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mkdir -p /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mkdir -p /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mkdir -p /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: lvcreate --poolmetadatasize 12288K -c 256K -L 2097152K -T vg_ccfc2ad4653c9d38c8e619232307149a/tp_ff00dfa4c19eafa51a18c034c3adc381 -V 2097152K -n brick_ff00dfa4c19eafa51a18c034c3adc381
- Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
- Logical volume "brick_ff00dfa4c19eafa51a18c034c3adc381" created.
- [kubeexec] DEBUG 2018/09/03 09:06:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: lvcreate --poolmetadatasize 12288K -c 256K -L 2097152K -T vg_2453bf761ebcb88158a59d64f443d352/tp_0ad04a0d0f8c7015c818f1a10c7a1454 -V 2097152K -n brick_0ad04a0d0f8c7015c818f1a10c7a1454
- Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
- Logical volume "brick_0ad04a0d0f8c7015c818f1a10c7a1454" created.
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 104.249µs
- [kubeexec] DEBUG 2018/09/03 09:06:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: lvcreate --poolmetadatasize 12288K -c 256K -L 2097152K -T vg_431729f9e9eaac554584ab8784472fb9/tp_f98c2763743d3c9647a112338fa2abb7 -V 2097152K -n brick_f98c2763743d3c9647a112338fa2abb7
- Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
- Logical volume "brick_f98c2763743d3c9647a112338fa2abb7" created.
- [kubeexec] DEBUG 2018/09/03 09:06:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_ff00dfa4c19eafa51a18c034c3adc381
- Result: meta-data=/dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_ff00dfa4c19eafa51a18c034c3adc381 isize=512 agcount=8, agsize=65520 blks
- = sectsz=512 attr=2, projid32bit=1
- = crc=1 finobt=0, sparse=0
- data = bsize=4096 blocks=524160, imaxpct=25
- = sunit=16 swidth=64 blks
- naming =version 2 bsize=8192 ascii-ci=0 ftype=1
- log =internal log bsize=4096 blocks=2560, version=2
- = sectsz=512 sunit=16 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
- [kubeexec] DEBUG 2018/09/03 09:06:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_0ad04a0d0f8c7015c818f1a10c7a1454
- Result: meta-data=/dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_0ad04a0d0f8c7015c818f1a10c7a1454 isize=512 agcount=8, agsize=65520 blks
- = sectsz=512 attr=2, projid32bit=1
- = crc=1 finobt=0, sparse=0
- data = bsize=4096 blocks=524160, imaxpct=25
- = sunit=16 swidth=64 blks
- naming =version 2 bsize=8192 ascii-ci=0 ftype=1
- log =internal log bsize=4096 blocks=2560, version=2
- = sectsz=512 sunit=16 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
- [kubeexec] DEBUG 2018/09/03 09:06:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: awk "BEGIN {print \"/dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_ff00dfa4c19eafa51a18c034c3adc381 /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_f98c2763743d3c9647a112338fa2abb7
- Result: meta-data=/dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_f98c2763743d3c9647a112338fa2abb7 isize=512 agcount=8, agsize=65520 blks
- = sectsz=512 attr=2, projid32bit=1
- = crc=1 finobt=0, sparse=0
- data = bsize=4096 blocks=524160, imaxpct=25
- = sunit=16 swidth=64 blks
- naming =version 2 bsize=8192 ascii-ci=0 ftype=1
- log =internal log bsize=4096 blocks=2560, version=2
- = sectsz=512 sunit=16 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
- [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_ff00dfa4c19eafa51a18c034c3adc381 /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: awk "BEGIN {print \"/dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_f98c2763743d3c9647a112338fa2abb7 /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: awk "BEGIN {print \"/dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_0ad04a0d0f8c7015c818f1a10c7a1454 /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
- Result:
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 99.402µs
- [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mkdir /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381/brick
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_f98c2763743d3c9647a112338fa2abb7 /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_0ad04a0d0f8c7015c818f1a10c7a1454 /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mkdir /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7/brick
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mkdir /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454/brick
- Result:
- [cmdexec] INFO 2018/09/03 09:06:29 Creating volume heketidbstorage replica 3
- [kubeexec] ERROR 2018/09/03 09:06:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:240: Failed to run command [gluster --mode=script volume create heketidbstorage replica 3 10.100.1.73:/var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381/brick 10.100.1.71:/var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7/brick 10.100.1.72:/var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454/brick] on glusterfs-l9ts8: Err[command terminated with exit code 1]: Stdout []: Stderr [volume create: heketidbstorage: failed: Volume heketidbstorage already exists
- ]
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 130.128µs
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 106.27µs
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 153.796µs
- [kubeexec] DEBUG 2018/09/03 09:06:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: gluster --mode=script volume stop heketidbstorage force
- Result: volume stop: heketidbstorage: success
- [heketi] ERROR 2018/09/03 09:06:32 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:177: Error executing create volume: Unable to execute command on glusterfs-l9ts8: volume create: heketidbstorage: failed: Volume heketidbstorage already exists
- [kubeexec] DEBUG 2018/09/03 09:06:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: gluster --mode=script volume delete heketidbstorage
- Result: volume delete: heketidbstorage: success
- [heketi] WARNING 2018/09/03 09:06:32 Create Volume Exec requested retry
- [heketi] INFO 2018/09/03 09:06:32 Retry Create Volume (1)
- [kubeexec] ERROR 2018/09/03 09:06:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:240: Failed to run command [gluster --mode=script volume stop heketidbstorage force] on glusterfs-gddwq: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: heketidbstorage: failed: Volume heketidbstorage does not exist
- ]
- [cmdexec] ERROR 2018/09/03 09:06:32 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:143: Unable to stop volume heketidbstorage: Unable to execute command on glusterfs-gddwq: volume stop: heketidbstorage: failed: Volume heketidbstorage does not exist
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 110.357µs
- [kubeexec] ERROR 2018/09/03 09:06:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:240: Failed to run command [gluster --mode=script volume delete heketidbstorage] on glusterfs-gddwq: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: heketidbstorage: failed: Volume heketidbstorage does not exist
- ]
- [cmdexec] ERROR 2018/09/03 09:06:33 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:152: Unable to delete volume heketidbstorage: Unable to execute command on glusterfs-gddwq: volume delete: heketidbstorage: failed: Volume heketidbstorage does not exist
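Note: the first volume-create attempt fails because a heketidbstorage volume left over from an earlier deployment attempt still exists in glusterd's view. Heketi recovers on its own: it stops and deletes the stale volume via glusterfs-l9ts8, tears the just-created bricks back down and retries the create (the stop/delete errors reported from glusterfs-gddwq appear harmless here, since the volume had already been removed via node5). If the retry loop did not converge, the equivalent manual cleanup, run inside any GlusterFS pod, would be the same two commands heketi issued:

    gluster --mode=script volume stop heketidbstorage force
    gluster --mode=script volume delete heketidbstorage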
- [heketi] INFO 2018/09/03 09:06:33 Deleting brick ff00dfa4c19eafa51a18c034c3adc381
- [heketi] INFO 2018/09/03 09:06:33 Deleting brick f98c2763743d3c9647a112338fa2abb7
- [heketi] INFO 2018/09/03 09:06:33 Deleting brick 0ad04a0d0f8c7015c818f1a10c7a1454
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 106.694µs
- [kubeexec] DEBUG 2018/09/03 09:06:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: umount /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: umount /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7
- Result:
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 89.783µs
- [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: lvremove -f vg_2453bf761ebcb88158a59d64f443d352/tp_0ad04a0d0f8c7015c818f1a10c7a1454
- Result: Logical volume "brick_0ad04a0d0f8c7015c818f1a10c7a1454" successfully removed
- Logical volume "tp_0ad04a0d0f8c7015c818f1a10c7a1454" successfully removed
- [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: umount /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: lvremove -f vg_431729f9e9eaac554584ab8784472fb9/tp_f98c2763743d3c9647a112338fa2abb7
- Result: Logical volume "brick_f98c2763743d3c9647a112338fa2abb7" successfully removed
- Logical volume "tp_f98c2763743d3c9647a112338fa2abb7" successfully removed
- [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: rmdir /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_0ad04a0d0f8c7015c818f1a10c7a1454
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: rmdir /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_f98c2763743d3c9647a112338fa2abb7
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: sed -i.save "/brick_0ad04a0d0f8c7015c818f1a10c7a1454/d" /var/lib/heketi/fstab
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: lvremove -f vg_ccfc2ad4653c9d38c8e619232307149a/tp_ff00dfa4c19eafa51a18c034c3adc381
- Result: Logical volume "brick_ff00dfa4c19eafa51a18c034c3adc381" successfully removed
- Logical volume "tp_ff00dfa4c19eafa51a18c034c3adc381" successfully removed
- [kubeexec] DEBUG 2018/09/03 09:06:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: sed -i.save "/brick_f98c2763743d3c9647a112338fa2abb7/d" /var/lib/heketi/fstab
- Result:
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 90.92µs
- [kubeexec] DEBUG 2018/09/03 09:06:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: rmdir /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_ff00dfa4c19eafa51a18c034c3adc381
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: sed -i.save "/brick_ff00dfa4c19eafa51a18c034c3adc381/d" /var/lib/heketi/fstab
- Result:
- [heketi] INFO 2018/09/03 09:06:36 Allocating brick set #0
- [heketi] INFO 2018/09/03 09:06:36 Creating brick f6904696fbd30686107ebc84c846d2ff
- [heketi] INFO 2018/09/03 09:06:36 Creating brick 7cf5ffef6984e236ab8a43e2fa4836dd
- [heketi] INFO 2018/09/03 09:06:36 Creating brick 9d235c1bb9a6737dfee12b8424673a0c
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 159.17µs
- [kubeexec] DEBUG 2018/09/03 09:06:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mkdir -p /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mkdir -p /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mkdir -p /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: lvcreate --poolmetadatasize 12288K -c 256K -L 2097152K -T vg_2453bf761ebcb88158a59d64f443d352/tp_f6904696fbd30686107ebc84c846d2ff -V 2097152K -n brick_f6904696fbd30686107ebc84c846d2ff
- Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
- Logical volume "brick_f6904696fbd30686107ebc84c846d2ff" created.
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 89.71µs
- [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: lvcreate --poolmetadatasize 12288K -c 256K -L 2097152K -T vg_ccfc2ad4653c9d38c8e619232307149a/tp_7cf5ffef6984e236ab8a43e2fa4836dd -V 2097152K -n brick_7cf5ffef6984e236ab8a43e2fa4836dd
- Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
- Logical volume "brick_7cf5ffef6984e236ab8a43e2fa4836dd" created.
- [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff
- Result: meta-data=/dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff isize=512 agcount=8, agsize=65520 blks
- = sectsz=512 attr=2, projid32bit=1
- = crc=1 finobt=0, sparse=0
- data = bsize=4096 blocks=524160, imaxpct=25
- = sunit=16 swidth=64 blks
- naming =version 2 bsize=8192 ascii-ci=0 ftype=1
- log =internal log bsize=4096 blocks=2560, version=2
- = sectsz=512 sunit=16 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
- [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: awk "BEGIN {print \"/dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: lvcreate --poolmetadatasize 12288K -c 256K -L 2097152K -T vg_431729f9e9eaac554584ab8784472fb9/tp_9d235c1bb9a6737dfee12b8424673a0c -V 2097152K -n brick_9d235c1bb9a6737dfee12b8424673a0c
- Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
- Logical volume "brick_9d235c1bb9a6737dfee12b8424673a0c" created.
- [kubeexec] DEBUG 2018/09/03 09:06:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: mkdir /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff/brick
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd
- Result: meta-data=/dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd isize=512 agcount=8, agsize=65520 blks
- = sectsz=512 attr=2, projid32bit=1
- = crc=1 finobt=0, sparse=0
- data = bsize=4096 blocks=524160, imaxpct=25
- = sunit=16 swidth=64 blks
- naming =version 2 bsize=8192 ascii-ci=0 ftype=1
- log =internal log bsize=4096 blocks=2560, version=2
- = sectsz=512 sunit=16 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 96.901µs
- [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: awk "BEGIN {print \"/dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c
- Result: meta-data=/dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c isize=512 agcount=8, agsize=65520 blks
- = sectsz=512 attr=2, projid32bit=1
- = crc=1 finobt=0, sparse=0
- data = bsize=4096 blocks=524160, imaxpct=25
- = sunit=16 swidth=64 blks
- naming =version 2 bsize=8192 ascii-ci=0 ftype=1
- log =internal log bsize=4096 blocks=2560, version=2
- = sectsz=512 sunit=16 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
- [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: awk "BEGIN {print \"/dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: mkdir /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd/brick
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c
- Result:
- [kubeexec] DEBUG 2018/09/03 09:06:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: mkdir /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c/brick
- Result:
- [cmdexec] INFO 2018/09/03 09:06:39 Creating volume heketidbstorage replica 3
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 117.636µs
- [kubeexec] DEBUG 2018/09/03 09:06:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: gluster --mode=script volume create heketidbstorage replica 3 10.100.1.72:/var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff/brick 10.100.1.73:/var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd/brick 10.100.1.71:/var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c/brick
- Result: volume create: heketidbstorage: success: please start the volume to access data
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 201.863µs
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 74.687µs
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 75.684µs
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 200 OK in 73.942µs
- [kubeexec] DEBUG 2018/09/03 09:06:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: gluster --mode=script volume start heketidbstorage
- Result: volume start: heketidbstorage: success
- [asynchttp] INFO 2018/09/03 09:06:44 asynchttp.go:292: Completed job 1d0873543c5a840751fe16c2c8ee9ee6 in 17.256120406s
- [negroni] Started GET /queue/1d0873543c5a840751fe16c2c8ee9ee6
- [negroni] Completed 303 See Other in 170.799µs
- [negroni] Started GET /volumes/b7be4e9566b9b3e61523a239c68391e4
- [negroni] Completed 200 OK in 658.536µs
- [negroni] Started GET /backup/db
- [negroni] Completed 200 OK in 265.79µs
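Note: the second attempt succeeds: heketidbstorage is created replica 3 across 10.100.1.71-73 and started, the async job completes after about 17s, and the deployer pulls a copy of the heketi database via GET /backup/db (this is normally the point at which gk-deploy copies the database onto the new volume and replaces deploy-heketi with the long-lived heketi deployment). A quick sanity check from any GlusterFS pod, assuming the default namespace:

    kubectl exec glusterfs-gddwq -- gluster volume info heketidbstorage
    kubectl exec glusterfs-gddwq -- gluster volume status heketidbstorage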
- [heketi] INFO 2018/09/03 09:08:08 Starting Node Health Status refresh
- [cmdexec] INFO 2018/09/03 09:08:08 Check Glusterd service status in node node3
- [kubeexec] DEBUG 2018/09/03 09:08:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
- Result: ● glusterd.service - GlusterFS, a clustered file-system server
- Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
- Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 5min ago
- Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
- Main PID: 97 (glusterd)
- CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
- ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
- ├─611 /usr/sbin/glusterfsd -s 10.100.1.71 --volfile-id heketidbstorage.10.100.1.71.var-lib-heketi-mounts-vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c-brick -p /var/run/gluster/vols/heketidbstorage/10.100.1.71-var-lib-heketi-mounts-vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c-brick.pid -S /var/run/gluster/99c704fce89e2904.socket --brick-name /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c-brick.log --xlator-option *-posix.glusterd-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name brick --brick-port 49153 --xlator-option heketidbstorage-server.listen-port=49153
- └─634 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
- Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
- Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
- [heketi] INFO 2018/09/03 09:08:08 Periodic health check status: node 1bf89fc30122cefb19a9e33fa5784a13 up=true
- [cmdexec] INFO 2018/09/03 09:08:08 Check Glusterd service status in node node5
- [kubeexec] DEBUG 2018/09/03 09:08:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: systemctl status glusterd
- Result: ● glusterd.service - GlusterFS, a clustered file-system server
- Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
- Active: active (running) since Mon 2018-09-03 09:02:59 UTC; 5min ago
- Process: 95 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
- Main PID: 96 (glusterd)
- CGroup: /kubepods/burstable/pod1f005e70-af58-11e8-8dd9-fa163ed47b72/0b779139259837e49074fdd9f56183df07c94e4752d38e73234c71f8e5550025/system.slice/glusterd.service
- ├─ 96 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
- ├─493 /usr/sbin/glusterfsd -s 10.100.1.73 --volfile-id heketidbstorage.10.100.1.73.var-lib-heketi-mounts-vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd-brick -p /var/run/gluster/vols/heketidbstorage/10.100.1.73-var-lib-heketi-mounts-vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd-brick.pid -S /var/run/gluster/bda722f6ba713278.socket --brick-name /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd-brick.log --xlator-option *-posix.glusterd-uuid=9b521451-5ae2-44ea-ace9-73c51a98bc18 --process-name brick --brick-port 49153 --xlator-option heketidbstorage-server.listen-port=49153
- └─516 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/c75540a109b4b1e0.socket --xlator-option *replicate*.node-uuid=9b521451-5ae2-44ea-ace9-73c51a98bc18 --process-name glustershd
- Sep 03 09:02:56 node5 systemd[1]: Starting GlusterFS, a clustered file-system server...
- Sep 03 09:02:59 node5 systemd[1]: Started GlusterFS, a clustered file-system server.
- [heketi] INFO 2018/09/03 09:08:08 Periodic health check status: node de310d550bef8cd7c2414e6240da36e7 up=true
- [cmdexec] INFO 2018/09/03 09:08:08 Check Glusterd service status in node node4
- [kubeexec] DEBUG 2018/09/03 09:08:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: systemctl status glusterd
- Result: ● glusterd.service - GlusterFS, a clustered file-system server
- Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
- Active: active (running) since Mon 2018-09-03 09:03:17 UTC; 4min 51s ago
- Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
- Main PID: 97 (glusterd)
- CGroup: /kubepods/burstable/pod1f0dd694-af58-11e8-8dd9-fa163ed47b72/ba6810326497d28a8ceb04e636e84c3a08b9878bb6c3bddd7959451da3e511ee/system.slice/glusterd.service
- ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
- ├─485 /usr/sbin/glusterfsd -s 10.100.1.72 --volfile-id heketidbstorage.10.100.1.72.var-lib-heketi-mounts-vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff-brick -p /var/run/gluster/vols/heketidbstorage/10.100.1.72-var-lib-heketi-mounts-vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff-brick.pid -S /var/run/gluster/12b5528ed96b7d64.socket --brick-name /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff-brick.log --xlator-option *-posix.glusterd-uuid=1934d29c-d5f1-4ecb-bc60-9f14d978487e --process-name brick --brick-port 49153 --xlator-option heketidbstorage-server.listen-port=49153
- └─508 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/71e62a0db612eacb.socket --xlator-option *replicate*.node-uuid=1934d29c-d5f1-4ecb-bc60-9f14d978487e --process-name glustershd
- Sep 03 09:03:09 node4 systemd[1]: Starting GlusterFS, a clustered file-system server...
- Sep 03 09:03:17 node4 systemd[1]: Started GlusterFS, a clustered file-system server.
- [heketi] INFO 2018/09/03 09:08:08 Periodic health check status: node fce79176083462c39bafb000a899fff3 up=true
- [heketi] INFO 2018/09/03 09:08:08 Cleaned 0 nodes from health cache
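The IDs in the Periodic health check status lines (1bf89fc3…, de310d55…, fce79176…) are heketi's internal node IDs rather than Kubernetes node names. One way to map them back to hosts is heketi-cli, assuming the client can reach the deploy-heketi service; the URL below is a placeholder to adjust, and --user/--secret are needed if authorization is enabled, as the earlier "Authorization loaded" line suggests:

$ export HEKETI_CLI_SERVER=http://deploy-heketi:8080              # placeholder; point at your deploy-heketi service or a port-forward
$ heketi-cli node list                                            # one line per node: Id and Cluster
$ heketi-cli node info 1bf89fc30122cefb19a9e33fa5784a13           # management/storage hostnames, zone and devices for that Id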
- [heketi] INFO 2018/09/03 09:10:08 Starting Node Health Status refresh
- [cmdexec] INFO 2018/09/03 09:10:08 Check Glusterd service status in node node3
- [kubeexec] DEBUG 2018/09/03 09:10:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node3 Pod: glusterfs-gddwq Command: systemctl status glusterd
- Result: ● glusterd.service - GlusterFS, a clustered file-system server
- Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
- Active: active (running) since Mon 2018-09-03 09:03:08 UTC; 7min ago
- Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
- Main PID: 97 (glusterd)
- CGroup: /kubepods/burstable/pod1f053be0-af58-11e8-8dd9-fa163ed47b72/4385fe443ab28c5e313f62da4d91115eae38574768f8dedd7c94045bb3d55418/system.slice/glusterd.service
- ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
- ├─611 /usr/sbin/glusterfsd -s 10.100.1.71 --volfile-id heketidbstorage.10.100.1.71.var-lib-heketi-mounts-vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c-brick -p /var/run/gluster/vols/heketidbstorage/10.100.1.71-var-lib-heketi-mounts-vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c-brick.pid -S /var/run/gluster/99c704fce89e2904.socket --brick-name /var/lib/heketi/mounts/vg_431729f9e9eaac554584ab8784472fb9/brick_9d235c1bb9a6737dfee12b8424673a0c/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_431729f9e9eaac554584ab8784472fb9-brick_9d235c1bb9a6737dfee12b8424673a0c-brick.log --xlator-option *-posix.glusterd-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name brick --brick-port 49153 --xlator-option heketidbstorage-server.listen-port=49153
- └─634 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f7f3a6a78ba8ed78.socket --xlator-option *replicate*.node-uuid=df011860-bf04-41f6-a9a4-762a8c7a6ca8 --process-name glustershd
- Sep 03 09:03:01 node3 systemd[1]: Starting GlusterFS, a clustered file-system server...
- Sep 03 09:03:08 node3 systemd[1]: Started GlusterFS, a clustered file-system server.
- [heketi] INFO 2018/09/03 09:10:08 Periodic health check status: node 1bf89fc30122cefb19a9e33fa5784a13 up=true
- [cmdexec] INFO 2018/09/03 09:10:08 Check Glusterd service status in node node5
- [kubeexec] DEBUG 2018/09/03 09:10:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node5 Pod: glusterfs-l9ts8 Command: systemctl status glusterd
- Result: ● glusterd.service - GlusterFS, a clustered file-system server
- Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
- Active: active (running) since Mon 2018-09-03 09:02:59 UTC; 7min ago
- Process: 95 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
- Main PID: 96 (glusterd)
- CGroup: /kubepods/burstable/pod1f005e70-af58-11e8-8dd9-fa163ed47b72/0b779139259837e49074fdd9f56183df07c94e4752d38e73234c71f8e5550025/system.slice/glusterd.service
- ├─ 96 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
- ├─493 /usr/sbin/glusterfsd -s 10.100.1.73 --volfile-id heketidbstorage.10.100.1.73.var-lib-heketi-mounts-vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd-brick -p /var/run/gluster/vols/heketidbstorage/10.100.1.73-var-lib-heketi-mounts-vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd-brick.pid -S /var/run/gluster/bda722f6ba713278.socket --brick-name /var/lib/heketi/mounts/vg_ccfc2ad4653c9d38c8e619232307149a/brick_7cf5ffef6984e236ab8a43e2fa4836dd/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_ccfc2ad4653c9d38c8e619232307149a-brick_7cf5ffef6984e236ab8a43e2fa4836dd-brick.log --xlator-option *-posix.glusterd-uuid=9b521451-5ae2-44ea-ace9-73c51a98bc18 --process-name brick --brick-port 49153 --xlator-option heketidbstorage-server.listen-port=49153
- └─516 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/c75540a109b4b1e0.socket --xlator-option *replicate*.node-uuid=9b521451-5ae2-44ea-ace9-73c51a98bc18 --process-name glustershd
- Sep 03 09:02:56 node5 systemd[1]: Starting GlusterFS, a clustered file-system server...
- Sep 03 09:02:59 node5 systemd[1]: Started GlusterFS, a clustered file-system server.
- [heketi] INFO 2018/09/03 09:10:08 Periodic health check status: node de310d550bef8cd7c2414e6240da36e7 up=true
- [cmdexec] INFO 2018/09/03 09:10:08 Check Glusterd service status in node node4
- [kubeexec] DEBUG 2018/09/03 09:10:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:244: Host: node4 Pod: glusterfs-7xwgg Command: systemctl status glusterd
- Result: ● glusterd.service - GlusterFS, a clustered file-system server
- Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
- Active: active (running) since Mon 2018-09-03 09:03:17 UTC; 6min ago
- Process: 96 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
- Main PID: 97 (glusterd)
- CGroup: /kubepods/burstable/pod1f0dd694-af58-11e8-8dd9-fa163ed47b72/ba6810326497d28a8ceb04e636e84c3a08b9878bb6c3bddd7959451da3e511ee/system.slice/glusterd.service
- ├─ 97 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
- ├─485 /usr/sbin/glusterfsd -s 10.100.1.72 --volfile-id heketidbstorage.10.100.1.72.var-lib-heketi-mounts-vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff-brick -p /var/run/gluster/vols/heketidbstorage/10.100.1.72-var-lib-heketi-mounts-vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff-brick.pid -S /var/run/gluster/12b5528ed96b7d64.socket --brick-name /var/lib/heketi/mounts/vg_2453bf761ebcb88158a59d64f443d352/brick_f6904696fbd30686107ebc84c846d2ff/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_2453bf761ebcb88158a59d64f443d352-brick_f6904696fbd30686107ebc84c846d2ff-brick.log --xlator-option *-posix.glusterd-uuid=1934d29c-d5f1-4ecb-bc60-9f14d978487e --process-name brick --brick-port 49153 --xlator-option heketidbstorage-server.listen-port=49153
- └─508 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/71e62a0db612eacb.socket --xlator-option *replicate*.node-uuid=1934d29c-d5f1-4ecb-bc60-9f14d978487e --process-name glustershd
- Sep 03 09:03:09 node4 systemd[1]: Starting GlusterFS, a clustered file-system server...
- Sep 03 09:03:17 node4 systemd[1]: Started GlusterFS, a clustered file-system server.
- [heketi] INFO 2018/09/03 09:10:08 Periodic health check status: node fce79176083462c39bafb000a899fff3 up=true
- [heketi] INFO 2018/09/03 09:10:08 Cleaned 0 nodes from health cache
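Each Node Health Status refresh (09:08:08 and again at 09:10:08 above) has heketi's kubernetes executor run systemctl status glusterd inside every gluster pod, and a node is reported up=true when that command succeeds. The same check can be reproduced by hand against the pods named in this log, assuming the default namespace:

$ kubectl exec glusterfs-gddwq -- systemctl status glusterd   # node3
$ kubectl exec glusterfs-l9ts8 -- systemctl status glusterd   # node5
$ kubectl exec glusterfs-7xwgg -- systemctl status glusterd   # node4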