- Macbook-Sudakov:~ knoppix$ Downloads/darwin-amd64/helm status allure-ee
- LAST DEPLOYED: Sat Jan 26 11:02:53 2019
- NAMESPACE: default
- STATUS: DEPLOYED
- RESOURCES:
- ==> v1/ConfigMap
- NAME DATA AGE
- allure-ee-rabbitmq-config 2 11h
- ==> v1/Service
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- allure-ee-rabbitmq-headless ClusterIP None <none> 4369/TCP,5672/TCP,25672/TCP,15672/TCP 11h
- allure-ee-rabbitmq ClusterIP 10.106.74.236 <none> 4369/TCP,5672/TCP,25672/TCP,15672/TCP 11h
- allure-ee-report NodePort 10.109.18.240 <none> 8081:32187/TCP 11h
- allure-ee-uaa NodePort 10.99.77.59 <none> 8082:32237/TCP 11h
- allure-ee-ui NodePort 10.111.90.249 <none> 8083:30008/TCP 11h
- ==> v1beta2/Deployment
- NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
- allure-ee-report 1 1 1 0 11h
- allure-ee-uaa 1 1 1 0 11h
- allure-ee-ui 1 1 1 0 11h
- ==> v1beta2/StatefulSet
- NAME DESIRED CURRENT AGE
- allure-ee-rabbitmq 1 1 11h
- ==> v1beta1/Ingress
- NAME HOSTS ADDRESS PORTS AGE
- allure-ee allure.local 80 11h
- ==> v1/Secret
- NAME TYPE DATA AGE
- allure-ee-rabbitmq Opaque 2 11h
- allure-ee Opaque 3 11h
- ==> v1/Role
- NAME AGE
- allure-ee-rabbitmq-endpoint-reader 11h
- ==> v1/RoleBinding
- NAME AGE
- allure-ee-rabbitmq-endpoint-reader 11h
- ==> v1/Pod(related)
- NAME READY STATUS RESTARTS AGE
- allure-ee-report-6f6bc894fb-5x9k2 0/1 Running 38 11h
- allure-ee-uaa-75dc446dc7-d2dts 0/1 Running 35 11h
- allure-ee-ui-55878c4946-dsqll 0/1 Running 35 11h
- allure-ee-rabbitmq-0 1/1 Running 1 11h
- ==> v1/ServiceAccount
- NAME SECRETS AGE
- allure-ee-rabbitmq 1 11h
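Everything except the RabbitMQ StatefulSet looks unhealthy here: the three Allure deployments report 0 available replicas and their pods sit at 0/1 Ready with 35-38 restarts each. As a minimal sketch (using the pod name and port from the listing above, and the health path that the probe definitions further down use), one can port-forward straight to a pod and call the endpoint by hand; whether curl gets an answer depends on catching the container while it is still up:

    kubectl port-forward allure-ee-report-6f6bc894fb-5x9k2 8081:8081
    # in a second terminal; the path comes from the liveness/readiness probes shown in the pod descriptions below
    curl -v http://127.0.0.1:8081/api/rs/management/health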
- Macbook-Sudakov:~ knoppix$ for i in `kubectl get pods | grep allure-ee | awk '{print $1}'`; do echo ">>>>>>>>>>>>>>>>>>>>>>>>> $i"; kubectl describe pod $i; done
- >>>>>>>>>>>>>>>>>>>>>>>>> allure-ee-rabbitmq-0
- Name: allure-ee-rabbitmq-0
- Namespace: default
- Priority: 0
- PriorityClassName: <none>
- Node: minikube/10.0.2.15
- Start Time: Fri, 25 Jan 2019 23:43:29 +0300
- Labels: app=rabbitmq
- chart=rabbitmq-4.1.0
- controller-revision-hash=allure-ee-rabbitmq-79b58f8789
- release=allure-ee
- statefulset.kubernetes.io/pod-name=allure-ee-rabbitmq-0
- Annotations: <none>
- Status: Running
- IP: 172.17.0.13
- Controlled By: StatefulSet/allure-ee-rabbitmq
- Containers:
- rabbitmq:
- Container ID: docker://c9cc56ab7d20345a34399dbf98997a32f022f2f76c449828c4dc40f89b2d3001
- Image: docker.io/bitnami/rabbitmq:3.7.10
- Image ID: docker-pullable://bitnami/rabbitmq@sha256:9d77cf6023180b7b74633a86a378de04840c9d5ed7ffff30f17bc5970c08c6af
- Ports: 4369/TCP, 5672/TCP, 25672/TCP, 15672/TCP
- Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
- Command:
- bash
- -ec
- mkdir -p /opt/bitnami/rabbitmq/.rabbitmq/
- mkdir -p /opt/bitnami/rabbitmq/etc/rabbitmq/
- #persist the erlang cookie in both places for server and cli tools
- echo $RABBITMQ_ERL_COOKIE > /opt/bitnami/rabbitmq/var/lib/rabbitmq/.erlang.cookie
- cp /opt/bitnami/rabbitmq/var/lib/rabbitmq/.erlang.cookie /opt/bitnami/rabbitmq/.rabbitmq/
- #change permission so only the user has access to the cookie file
- chmod 600 /opt/bitnami/rabbitmq/.rabbitmq/.erlang.cookie /opt/bitnami/rabbitmq/var/lib/rabbitmq/.erlang.cookie
- #copy the mounted configuration to both places
- cp /opt/bitnami/rabbitmq/conf/* /opt/bitnami/rabbitmq/etc/rabbitmq
- # Apply resources limits
- ulimit -n "${RABBITMQ_ULIMIT_NOFILES}"
- #replace the default password that is generated
- sed -i "s/CHANGEME/$RABBITMQ_PASSWORD/g" /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf
- # Move logs to stdout
- ln -sF /dev/stdout /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@${MY_POD_IP}.log
- ln -sF /dev/stdout /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@${MY_POD_IP}_upgrade.log
- exec rabbitmq-server
- State: Running
- Started: Sat, 26 Jan 2019 10:59:18 +0300
- Last State: Terminated
- Reason: Completed
- Exit Code: 0
- Started: Fri, 25 Jan 2019 23:44:10 +0300
- Finished: Sat, 26 Jan 2019 01:30:15 +0300
- Ready: True
- Restart Count: 1
- Liveness: exec [sh -c test "$(curl -sS -f --user allure:$RABBITMQ_PASSWORD 127.0.0.1:15672/api/healthchecks/node)" = '{"status":"ok"}'] delay=120s timeout=20s period=30s #success=1 #failure=6
- Readiness: exec [sh -c test "$(curl -sS -f --user allure:$RABBITMQ_PASSWORD 127.0.0.1:15672/api/healthchecks/node)" = '{"status":"ok"}'] delay=10s timeout=20s period=30s #success=1 #failure=3
- Environment:
- MY_POD_IP: (v1:status.podIP)
- MY_POD_NAME: allure-ee-rabbitmq-0 (v1:metadata.name)
- MY_POD_NAMESPACE: default (v1:metadata.namespace)
- K8S_SERVICE_NAME: allure-ee-rabbitmq-headless
- K8S_ADDRESS_TYPE: ip
- RABBITMQ_NODENAME: rabbit@$(MY_POD_IP)
- RABBITMQ_ULIMIT_NOFILES: 65536
- RABBITMQ_USE_LONGNAME: true
- RABBITMQ_ERL_COOKIE: <set to the key 'rabbitmq-erlang-cookie' in secret 'allure-ee-rabbitmq'> Optional: false
- RABBITMQ_PASSWORD: <set to the key 'rabbitmq-password' in secret 'allure-ee-rabbitmq'> Optional: false
- Mounts:
- /opt/bitnami/rabbitmq/conf from config-volume (rw)
- /opt/bitnami/rabbitmq/var/lib/rabbitmq/ from data (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from allure-ee-rabbitmq-token-x5rqf (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- ContainersReady True
- PodScheduled True
- Volumes:
- config-volume:
- Type: ConfigMap (a volume populated by a ConfigMap)
- Name: allure-ee-rabbitmq-config
- Optional: false
- data:
- Type: EmptyDir (a temporary directory that shares a pod's lifetime)
- Medium:
- allure-ee-rabbitmq-token-x5rqf:
- Type: Secret (a volume populated by a Secret)
- SecretName: allure-ee-rabbitmq-token-x5rqf
- Optional: false
- QoS Class: BestEffort
- Node-Selectors: <none>
- Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
- node.kubernetes.io/unreachable:NoExecute for 300s
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal SandboxChanged 33m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
- Normal Pulled 33m kubelet, minikube Container image "docker.io/bitnami/rabbitmq:3.7.10" already present on machine
- Normal Created 33m kubelet, minikube Created container
- Normal Started 33m kubelet, minikube Started container
- Warning Unhealthy 30m (x6 over 32m) kubelet, minikube Readiness probe failed: curl: (7) Failed to connect to 127.0.0.1 port 15672: Connection refused
- Warning Unhealthy 30m (x2 over 30m) kubelet, minikube Liveness probe failed: curl: (7) Failed to connect to 127.0.0.1 port 15672: Connection refused
- >>>>>>>>>>>>>>>>>>>>>>>>> allure-ee-report-6f6bc894fb-5x9k2
- Name: allure-ee-report-6f6bc894fb-5x9k2
- Namespace: default
- Priority: 0
- PriorityClassName: <none>
- Node: minikube/10.0.2.15
- Start Time: Fri, 25 Jan 2019 23:43:28 +0300
- Labels: app=allure-ee-report
- pod-template-hash=6f6bc894fb
- Annotations: <none>
- Status: Running
- IP: 172.17.0.14
- Controlled By: ReplicaSet/allure-ee-report-6f6bc894fb
- Containers:
- allure-ee-report:
- Container ID: docker://28575afc72920bb3f87f19e8bd2a117a33999d01c7b4a336fcacc3bc0d70c9a3
- Image: allure/allure-report:latest
- Image ID: docker-pullable://allure/allure-report@sha256:59a2b1978de7d2c3edb61c17d22ea29cd41f1954dbea64a0401800005ad258f6
- Port: 8081/TCP
- Host Port: 0/TCP
- State: Waiting
- Reason: CrashLoopBackOff
- Last State: Terminated
- Reason: Error
- Exit Code: 137
- Started: Sat, 26 Jan 2019 11:30:15 +0300
- Finished: Sat, 26 Jan 2019 11:32:10 +0300
- Ready: False
- Restart Count: 38
- Liveness: http-get http://:http/api/rs/management/health delay=60s timeout=1s period=10s #success=1 #failure=3
- Readiness: http-get http://:http/api/rs/management/health delay=60s timeout=1s period=10s #success=1 #failure=3
- Environment:
- SPRING_RABBITMQ_HOST: allure-ee-rabbitmq
- SPRING_RABBITMQ_USERNAME: allure
- SPRING_RABBITMQ_PASSWORD: allure
- ALLURE_BLOB_STORAGE_FILE_SYSTEM_DIRECTORY: /opt/allure/report/storage
- ALLURE_BLOB_STORAGE_TYPE: FILE_SYSTEM
- JAVA_OPTS: -Xss256k -Xms256m -Xmx1g -XX:+UseStringDeduplication -XX:+UseG1GC
- SERVER_SERVLET_CONTEXTPATH: /api/rs
- SPRING_CLOUD_CONSUL_ENABLED: false
- SPRING_MVC_ASYNC_REQUEST_TIMEOUT: -1
- SPRING_RABBITMQ_LISTENER_SIMPLE_CONCURRENCY: 3
- SPRING_RABBITMQ_LISTENER_SIMPLE_MAX_CONCURRENCY: 15
- SPRING_RABBITMQ_LISTENER_SIMPLE_PREFETCH: 10
- Mounts:
- /opt/allure/report/storage from storage-volume (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from default-token-hkh42 (ro)
- Conditions:
- Type Status
- Initialized True
- Ready False
- ContainersReady False
- PodScheduled True
- Volumes:
- storage-volume:
- Type: EmptyDir (a temporary directory that shares a pod's lifetime)
- Medium:
- default-token-hkh42:
- Type: Secret (a volume populated by a Secret)
- SecretName: default-token-hkh42
- Optional: false
- QoS Class: BestEffort
- Node-Selectors: <none>
- Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
- node.kubernetes.io/unreachable:NoExecute for 300s
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Pulled 10h (x18 over 11h) kubelet, minikube Successfully pulled image "allure/allure-report:latest"
- Warning BackOff 10h (x222 over 11h) kubelet, minikube Back-off restarting failed container
- Warning Unhealthy 10h (x157 over 11h) kubelet, minikube Readiness probe failed: Get http://172.17.0.14:8081/api/rs/management/health: dial tcp 172.17.0.14:8081: connect: connection refused
- Normal SandboxChanged 33m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
- Normal Pulling 31m (x2 over 33m) kubelet, minikube pulling image "allure/allure-report:latest"
- Normal Killing 31m kubelet, minikube Killing container with id docker://allure-ee-report:Container failed liveness probe.. Container will be killed and recreated.
- Normal Pulled 31m (x2 over 33m) kubelet, minikube Successfully pulled image "allure/allure-report:latest"
- Normal Created 31m (x2 over 33m) kubelet, minikube Created container
- Normal Started 31m (x2 over 33m) kubelet, minikube Started container
- Warning Unhealthy 29m (x6 over 32m) kubelet, minikube Liveness probe failed: Get http://172.17.0.14:8081/api/rs/management/health: dial tcp 172.17.0.14:8081: connect: connection refused
- Warning BackOff 8m11s (x48 over 23m) kubelet, minikube Back-off restarting failed container
- Warning Unhealthy 3m12s (x50 over 32m) kubelet, minikube Readiness probe failed: Get http://172.17.0.14:8081/api/rs/management/health: dial tcp 172.17.0.14:8081: connect: connection refused
- >>>>>>>>>>>>>>>>>>>>>>>>> allure-ee-uaa-75dc446dc7-d2dts
- Name: allure-ee-uaa-75dc446dc7-d2dts
- Namespace: default
- Priority: 0
- PriorityClassName: <none>
- Node: minikube/10.0.2.15
- Start Time: Fri, 25 Jan 2019 23:43:28 +0300
- Labels: app=allure-ee-uaa
- pod-template-hash=75dc446dc7
- Annotations: <none>
- Status: Running
- IP: 172.17.0.7
- Controlled By: ReplicaSet/allure-ee-uaa-75dc446dc7
- Containers:
- allure-ee-uaa:
- Container ID: docker://c1c57547fce3e0a6adb310cd1b715f2e9842125750eaa2e54e3da0d5b9c4e738
- Image: allure/allure-uaa:latest
- Image ID: docker-pullable://allure/allure-uaa@sha256:4e50aba88540dfdbe483d5aa629fb0c7c9f691458295f61f81b64699d362fbe9
- Port: 8082/TCP
- Host Port: 0/TCP
- State: Waiting
- Reason: CrashLoopBackOff
- Last State: Terminated
- Reason: Error
- Exit Code: 137
- Started: Sat, 26 Jan 2019 11:30:21 +0300
- Finished: Sat, 26 Jan 2019 11:32:16 +0300
- Ready: False
- Restart Count: 35
- Liveness: http-get http://:http/api/uaa/management/health delay=60s timeout=1s period=10s #success=1 #failure=3
- Readiness: http-get http://:http/api/uaa/management/health delay=60s timeout=1s period=10s #success=1 #failure=3
- Environment:
- JAVA_OPTS: -Xss256k -Xms256m -Xmx256m -XX:+UseStringDeduplication -XX:+UseG1GC
- SERVER_SERVLET_CONTEXTPATH: /api/uaa
- SPRING_CLOUD_CONSUL_ENABLED: false
- SPRING_MVC_ASYNC_REQUEST_TIMEOUT: -1
- ALLURE_SECURITY_USER_NAME: <set to the key 'username' in secret 'allure-ee'> Optional: false
- ALLURE_SECURITY_USER_PASSWORD: <set to the key 'password' in secret 'allure-ee'> Optional: false
- ALLURE_LICENSE_BODY: <set to the key 'licenseKey' in secret 'allure-ee'> Optional: false
- Mounts:
- /var/run/secrets/kubernetes.io/serviceaccount from default-token-hkh42 (ro)
- Conditions:
- Type Status
- Initialized True
- Ready False
- ContainersReady False
- PodScheduled True
- Volumes:
- default-token-hkh42:
- Type: Secret (a volume populated by a Secret)
- SecretName: default-token-hkh42
- Optional: false
- QoS Class: BestEffort
- Node-Selectors: <none>
- Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
- node.kubernetes.io/unreachable:NoExecute for 300s
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Warning Unhealthy 10h (x129 over 11h) kubelet, minikube Readiness probe failed: Get http://172.17.0.12:8082/api/uaa/management/health: dial tcp 172.17.0.12:8082: connect: connection refused
- Warning Unhealthy 10h (x76 over 11h) kubelet, minikube Liveness probe failed: Get http://172.17.0.12:8082/api/uaa/management/health: dial tcp 172.17.0.12:8082: connect: connection refused
- Warning BackOff 10h (x254 over 11h) kubelet, minikube Back-off restarting failed container
- Normal SandboxChanged 33m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
- Normal Pulling 31m (x2 over 33m) kubelet, minikube pulling image "allure/allure-uaa:latest"
- Normal Killing 31m kubelet, minikube Killing container with id docker://allure-ee-uaa:Container failed liveness probe.. Container will be killed and recreated.
- Normal Pulled 31m (x2 over 33m) kubelet, minikube Successfully pulled image "allure/allure-uaa:latest"
- Normal Created 31m (x2 over 33m) kubelet, minikube Created container
- Normal Started 31m (x2 over 33m) kubelet, minikube Started container
- Warning Unhealthy 29m (x9 over 32m) kubelet, minikube Readiness probe failed: Get http://172.17.0.7:8082/api/uaa/management/health: dial tcp 172.17.0.7:8082: connect: connection refused
- Warning BackOff 8m5s (x47 over 23m) kubelet, minikube Back-off restarting failed container
- Warning Unhealthy 3m16s (x25 over 32m) kubelet, minikube Liveness probe failed: Get http://172.17.0.7:8082/api/uaa/management/health: dial tcp 172.17.0.7:8082: connect: connection refused
- >>>>>>>>>>>>>>>>>>>>>>>>> allure-ee-ui-55878c4946-dsqll
- Name: allure-ee-ui-55878c4946-dsqll
- Namespace: default
- Priority: 0
- PriorityClassName: <none>
- Node: minikube/10.0.2.15
- Start Time: Fri, 25 Jan 2019 23:43:28 +0300
- Labels: app=allure-ee-ui
- pod-template-hash=55878c4946
- Annotations: <none>
- Status: Running
- IP: 172.17.0.12
- Controlled By: ReplicaSet/allure-ee-ui-55878c4946
- Containers:
- allure-ee-ui:
- Container ID: docker://f2b1bfbb0c28802c2969f387ac80a7d48cb00e96a07c738ba85531bc07b29c0c
- Image: allure/allure-ui:latest
- Image ID: docker-pullable://allure/allure-ui@sha256:81a08593827fb87cb393d162b0c1a02ca557af62d88770bd48104b76d61e5e66
- Port: 8083/TCP
- Host Port: 0/TCP
- State: Waiting
- Reason: CrashLoopBackOff
- Last State: Terminated
- Reason: Error
- Exit Code: 137
- Started: Sat, 26 Jan 2019 11:30:23 +0300
- Finished: Sat, 26 Jan 2019 11:32:21 +0300
- Ready: False
- Restart Count: 35
- Liveness: http-get http://:http/management/health delay=60s timeout=1s period=10s #success=1 #failure=3
- Readiness: http-get http://:http/management/health delay=60s timeout=1s period=10s #success=1 #failure=3
- Environment:
- JAVA_OPTS: -Xss256k -Xms256m -Xmx256m -XX:+UseStringDeduplication -XX:+UseG1GC
- SPRING_CLOUD_CONSUL_ENABLED: false
- Mounts:
- /var/run/secrets/kubernetes.io/serviceaccount from default-token-hkh42 (ro)
- Conditions:
- Type Status
- Initialized True
- Ready False
- ContainersReady False
- PodScheduled True
- Volumes:
- default-token-hkh42:
- Type: Secret (a volume populated by a Secret)
- SecretName: default-token-hkh42
- Optional: false
- QoS Class: BestEffort
- Node-Selectors: <none>
- Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
- node.kubernetes.io/unreachable:NoExecute for 300s
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Warning Unhealthy 10h (x107 over 11h) kubelet, minikube Readiness probe failed: Get http://172.17.0.13:8083/management/health: dial tcp 172.17.0.13:8083: connect: connection refused
- Warning Unhealthy 10h (x73 over 11h) kubelet, minikube Liveness probe failed: Get http://172.17.0.13:8083/management/health: dial tcp 172.17.0.13:8083: connect: connection refused
- Warning BackOff 10h (x247 over 11h) kubelet, minikube Back-off restarting failed container
- Normal SandboxChanged 33m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
- Normal Pulling 31m (x2 over 33m) kubelet, minikube pulling image "allure/allure-ui:latest"
- Normal Killing 31m kubelet, minikube Killing container with id docker://allure-ee-ui:Container failed liveness probe.. Container will be killed and recreated.
- Normal Pulled 31m (x2 over 33m) kubelet, minikube Successfully pulled image "allure/allure-ui:latest"
- Normal Created 31m (x2 over 33m) kubelet, minikube Created container
- Normal Started 31m (x2 over 33m) kubelet, minikube Started container
- Warning Unhealthy 29m (x6 over 32m) kubelet, minikube Liveness probe failed: Get http://172.17.0.12:8083/management/health: dial tcp 172.17.0.12:8083: connect: connection refused
- Warning BackOff 8m15s (x45 over 23m) kubelet, minikube Back-off restarting failed container
- Warning Unhealthy 3m15s (x49 over 32m) kubelet, minikube Readiness probe failed: Get http://172.17.0.12:8083/management/health: dial tcp 172.17.0.12:8083: connect: connection refused
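All three Allure containers show the same pattern in the descriptions above: Last State Terminated with exit code 137 (SIGKILL) roughly two minutes after start, probes configured with delay=60s timeout=1s failure=3, and "Container failed liveness probe" kill events. Before pulling the current logs below, a useful sketch is to read the log of the instance that was just killed rather than the one that is starting now (pod name taken from the output above):

    kubectl logs allure-ee-report-6f6bc894fb-5x9k2 --previous
    # the last termination details can also be read directly from the pod status
    kubectl get pod allure-ee-report-6f6bc894fb-5x9k2 -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'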
- Macbook-Sudakov:~ knoppix$ for i in `kubectl get pods | grep allure-ee | awk '{print $1}'`; do echo ">>>>>>>>>>>>>>>>>>>>>>>>> $i"; kubectl logs $i; done
- >>>>>>>>>>>>>>>>>>>>>>>>> allure-ee-rabbitmq-0
- 2019-01-26 08:01:20.870 [info] <0.8.0> Log file opened with Lager
- 2019-01-26 08:01:20.908 [info] <0.8.0> Log file opened with Lager
- ## ##
- ## ## RabbitMQ 3.7.10. Copyright (C) 2007-2018 Pivotal Software, Inc.
- ########## Licensed under the MPL. See http://www.rabbitmq.com/
- ###### ##
- ########## Logs: /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@172.17.0.13.log
- /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@172.17.0.13_upgrade.log
- Starting broker...
- 2019-01-26 08:01:27.207 [info] <0.213.0>
- Starting RabbitMQ 3.7.10 on Erlang 21.2
- Copyright (C) 2007-2018 Pivotal Software, Inc.
- Licensed under the MPL. See http://www.rabbitmq.com/
- 2019-01-26 08:01:27.277 [info] <0.213.0>
- node : rabbit@172.17.0.13
- home dir : /opt/bitnami/rabbitmq/.rabbitmq
- config file(s) : /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf
- cookie hash : EiyJAid/cGQFUQSXz9jULg==
- log(s) : /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@172.17.0.13.log
- : /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@172.17.0.13_upgrade.log
- database dir : /opt/bitnami/rabbitmq/var/lib/rabbitmq/mnesia/rabbit@172.17.0.13
- 2019-01-26 08:02:24.155 [info] <0.221.0> Memory high watermark set to 796 MiB (835020390 bytes) of 1990 MiB (2087550976 bytes) total
- 2019-01-26 08:02:24.631 [info] <0.223.0> Enabling free disk space monitoring
- 2019-01-26 08:02:24.631 [info] <0.223.0> Disk free limit set to 50MB
- 2019-01-26 08:02:24.818 [info] <0.226.0> Limiting to approx 65436 file handles (58890 sockets)
- 2019-01-26 08:02:24.830 [info] <0.227.0> FHC read buffering: OFF
- 2019-01-26 08:02:24.832 [info] <0.227.0> FHC write buffering: ON
- 2019-01-26 08:02:24.967 [info] <0.213.0> Node database directory at /opt/bitnami/rabbitmq/var/lib/rabbitmq/mnesia/rabbit@172.17.0.13 is empty. Assuming we need to join an existing cluster or initialise from scratch...
- 2019-01-26 08:02:24.968 [info] <0.213.0> Configured peer discovery backend: rabbit_peer_discovery_k8s
- 2019-01-26 08:02:24.986 [info] <0.213.0> Will try to lock with peer discovery backend rabbit_peer_discovery_k8s
- 2019-01-26 08:02:24.987 [info] <0.213.0> Peer discovery backend does not support locking, falling back to randomized delay
- 2019-01-26 08:02:24.987 [info] <0.213.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping randomized startup delay.
- 2019-01-26 08:02:27.574 [info] <0.213.0> k8s endpoint listing returned nodes not yet ready: 172.17.0.13
- 2019-01-26 08:02:27.574 [info] <0.213.0> All discovered existing cluster peers:
- 2019-01-26 08:02:27.575 [info] <0.213.0> Discovered no peer nodes to cluster with
- 2019-01-26 08:02:29.328 [info] <0.43.0> Application mnesia exited with reason: stopped
- 2019-01-26 08:02:35.371 [info] <0.213.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
- 2019-01-26 08:02:35.828 [info] <0.213.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
- 2019-01-26 08:02:36.475 [info] <0.213.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
- 2019-01-26 08:02:36.475 [info] <0.213.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping registration.
- 2019-01-26 08:02:36.499 [info] <0.213.0> Priority queues enabled, real BQ is rabbit_variable_queue
- 2019-01-26 08:02:36.783 [info] <0.398.0> Starting rabbit_node_monitor
- 2019-01-26 08:02:37.052 [info] <0.213.0> message_store upgrades: 1 to apply
- 2019-01-26 08:02:37.052 [info] <0.213.0> message_store upgrades: Applying rabbit_variable_queue:move_messages_to_vhost_store
- 2019-01-26 08:02:37.058 [info] <0.213.0> message_store upgrades: No durable queues found. Skipping message store migration
- 2019-01-26 08:02:37.058 [info] <0.213.0> message_store upgrades: Removing the old message store data
- 2019-01-26 08:02:37.066 [info] <0.213.0> message_store upgrades: All upgrades applied successfully
- 2019-01-26 08:02:37.137 [info] <0.213.0> Management plugin: using rates mode 'basic'
- 2019-01-26 08:02:37.144 [info] <0.213.0> Adding vhost '/'
- 2019-01-26 08:02:37.203 [info] <0.438.0> Making sure data directory '/opt/bitnami/rabbitmq/var/lib/rabbitmq/mnesia/rabbit@172.17.0.13/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
- 2019-01-26 08:02:37.226 [info] <0.438.0> Starting message stores for vhost '/'
- 2019-01-26 08:02:37.229 [info] <0.442.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
- 2019-01-26 08:02:37.253 [info] <0.438.0> Started message store of type transient for vhost '/'
- 2019-01-26 08:02:37.253 [info] <0.445.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
- 2019-01-26 08:02:37.263 [warning] <0.445.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": rebuilding indices from scratch
- 2019-01-26 08:02:37.266 [info] <0.438.0> Started message store of type persistent for vhost '/'
- 2019-01-26 08:02:37.272 [info] <0.213.0> Creating user 'allure'
- 2019-01-26 08:02:37.279 [info] <0.213.0> Setting user tags for user 'allure' to [administrator]
- 2019-01-26 08:02:37.286 [info] <0.213.0> Setting permissions for 'allure' in '/' to '.*', '.*', '.*'
- 2019-01-26 08:02:37.738 [warning] <0.469.0> Setting Ranch options together with socket options is deprecated. Please use the new map syntax that allows specifying socket options separately from other options.
- 2019-01-26 08:02:37.739 [info] <0.483.0> started TCP listener on [::]:5672
- 2019-01-26 08:02:37.751 [info] <0.213.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@172.17.0.13'
- 2019-01-26 08:02:37.761 [info] <0.213.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@172.17.0.13'
- 2019-01-26 08:02:37.800 [info] <0.537.0> Peer discovery: enabling node cleanup (will only log warnings). Check interval: 10 seconds.
- 2019-01-26 08:02:37.898 [info] <0.546.0> Management plugin: HTTP (non-TLS) listener started on port 15672
- 2019-01-26 08:02:37.899 [info] <0.652.0> Statistics database started.
- 2019-01-26 08:02:37.920 [notice] <0.106.0> Changed loghwm of /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@172.17.0.13.log to 50
- completed with 5 plugins.
- 2019-01-26 08:02:41.656 [info] <0.8.0> Server startup complete; 5 plugins started.
- * rabbitmq_peer_discovery_k8s
- * rabbitmq_management
- * rabbitmq_web_dispatch
- * rabbitmq_peer_discovery_common
- * rabbitmq_management_agent
- >>>>>>>>>>>>>>>>>>>>>>>>> allure-ee-report-6f6bc894fb-5x9k2
- 2019-01-26 08:30:54.486 INFO 6 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@6950e31: startup date [Sat Jan 26 08:30:54 GMT 2019]; root of context hierarchy
- 2019-01-26 08:31:07.981 INFO 6 --- [ main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
- 2019-01-26 08:31:08.856 INFO 6 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$b554b25f] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
- ___ ____
- / | / / /_ __________
- / /| | / / / / / / ___/ _ \
- / ___ |/ / / /_/ / / / __/
- /_/ |_/_/_/\__,_/_/ \___/
- Powered by Spring Boot (v2.0.6.RELEASE)
- 2019-01-26 08:31:30.427 INFO 6 --- [ main] i.q.allure.report.ReportApplication : The following profiles are active: prod
- 2019-01-26 08:31:31.353 INFO 6 --- [ main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@4690b489: startup date [Sat Jan 26 08:31:31 GMT 2019]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@6950e31
- >>>>>>>>>>>>>>>>>>>>>>>>> allure-ee-uaa-75dc446dc7-d2dts
- 2019-01-26 08:31:07.831 INFO 6 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@1f57539: startup date [Sat Jan 26 08:31:07 GMT 2019]; root of context hierarchy
- 2019-01-26 08:31:34.328 INFO 6 --- [ main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
- 2019-01-26 08:31:35.767 INFO 6 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$dfc0aa1a] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
- ___ ____
- / | / / /_ __________
- / /| | / / / / / / ___/ _ \
- / ___ |/ / / /_/ / / / __/
- /_/ |_/_/_/\__,_/_/ \___/
- Powered by Spring Boot (v2.0.6.RELEASE)
- 2019-01-26 08:31:44.009 INFO 6 --- [ main] io.qameta.allure.uaa.UaaApplication : The following profiles are active: prod
- 2019-01-26 08:31:44.465 INFO 6 --- [ main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@78dd667e: startup date [Sat Jan 26 08:31:44 GMT 2019]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@1f57539
- >>>>>>>>>>>>>>>>>>>>>>>>> allure-ee-ui-55878c4946-dsqll
- 2019-01-26 08:31:31.837 INFO 6 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@2752f6e2: startup date [Sat Jan 26 08:31:31 GMT 2019]; root of context hierarchy
- 2019-01-26 08:31:42.264 INFO 6 --- [ main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
- 2019-01-26 08:31:46.706 INFO 6 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$6fbe5f35] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
- ___ ____
- / | / / /_ __________
- / /| | / / / / / / ___/ _ \
- / ___ |/ / / /_/ / / / __/
- /_/ |_/_/_/\__,_/_/ \___/
- Powered by Spring Boot (v2.0.6.RELEASE)
- 2019-01-26 08:32:09.396 INFO 6 --- [ main] io.qameta.allure.ui.UiApplication : The following profiles are active: prod
- 2019-01-26 08:32:10.877 INFO 6 --- [ main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@52feb982: startup date [Sat Jan 26 08:32:10 GMT 2019]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@2752f6e2
- 2019-01-26 08:32:20.623 INFO 6 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=8ee50e35-9ebf-3101-9af5-1c005793c13a
- 2019-01-26 08:32:20.736 INFO 6 --- [ main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
- 2019-01-26 08:32:21.176 INFO 6 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.retry.annotation.RetryConfiguration' of type [org.springframework.retry.annotation.RetryConfiguration$$EnhancerBySpringCGLIB$$4967f45d] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
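The logs above cut off while the Spring contexts are still refreshing: each service prints its banner and "The following profiles are active: prod" but never reaches a "Started ...Application" line before the roughly two-minute mark at which the container is killed with exit code 137. Combined with the delay=60s timeout=1s failure=3 probe settings, this looks like the JVM services simply do not finish starting on minikube before the liveness probe gives up. A sketch of loosening the probes to test that theory (the field names are the standard pod-spec probe fields; the new values are guesses, not values taken from the chart):

    kubectl edit deployment allure-ee-report   # repeat for allure-ee-uaa and allure-ee-ui
    #   livenessProbe.initialDelaySeconds:  60 -> 300
    #   livenessProbe.timeoutSeconds:        1 -> 10
    #   readinessProbe.initialDelaySeconds: 60 -> 120
    #   readinessProbe.timeoutSeconds:       1 -> 10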