Macbook-Sudakov:~ knoppix$ Downloads/darwin-amd64/helm status allure-ee
LAST DEPLOYED: Sat Jan 26 11:02:53 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                       DATA  AGE
allure-ee-rabbitmq-config  2     11h

==> v1/Service
NAME                         TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)                                AGE
allure-ee-rabbitmq-headless  ClusterIP  None           <none>       4369/TCP,5672/TCP,25672/TCP,15672/TCP  11h
allure-ee-rabbitmq           ClusterIP  10.106.74.236  <none>       4369/TCP,5672/TCP,25672/TCP,15672/TCP  11h
allure-ee-report             NodePort   10.109.18.240  <none>       8081:32187/TCP                         11h
allure-ee-uaa                NodePort   10.99.77.59    <none>       8082:32237/TCP                         11h
allure-ee-ui                 NodePort   10.111.90.249  <none>       8083:30008/TCP                         11h

==> v1beta2/Deployment
NAME              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
allure-ee-report  1        1        1           0          11h
allure-ee-uaa     1        1        1           0          11h
allure-ee-ui      1        1        1           0          11h

==> v1beta2/StatefulSet
NAME                DESIRED  CURRENT  AGE
allure-ee-rabbitmq  1        1        11h

==> v1beta1/Ingress
NAME       HOSTS         ADDRESS  PORTS  AGE
allure-ee  allure.local  80       11h

==> v1/Secret
NAME                TYPE    DATA  AGE
allure-ee-rabbitmq  Opaque  2     11h
allure-ee           Opaque  3     11h

==> v1/Role
NAME                                AGE
allure-ee-rabbitmq-endpoint-reader  11h

==> v1/RoleBinding
NAME                                AGE
allure-ee-rabbitmq-endpoint-reader  11h

==> v1/Pod(related)
NAME                               READY  STATUS   RESTARTS  AGE
allure-ee-report-6f6bc894fb-5x9k2  0/1    Running  38        11h
allure-ee-uaa-75dc446dc7-d2dts     0/1    Running  35        11h
allure-ee-ui-55878c4946-dsqll      0/1    Running  35        11h
allure-ee-rabbitmq-0               1/1    Running  1         11h

==> v1/ServiceAccount
NAME                SECRETS  AGE
allure-ee-rabbitmq  1        11h

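All three application Deployments report 0 available replicas, and their pods sit at 0/1 Ready with 35-38 restarts each, while rabbitmq-0 is healthy. The describe and logs loops below dig into those pods. As an extra step (a sketch, not part of the original session), the previous container's logs are often the quickest way to see why a crash-looping pod died:

for i in $(kubectl get pods | grep allure-ee | awk '{print $1}'); do
  echo ">>>>>>>>>>>>>>>>>>>>>>>>>  $i"
  kubectl logs --previous "$i" | tail -n 40   # last lines of the previously terminated container
done
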
Macbook-Sudakov:~ knoppix$ for i in `kubectl get pods | grep allure-ee | awk '{print $1}'`; do echo ">>>>>>>>>>>>>>>>>>>>>>>>>  $i"; kubectl describe pod $i; done
>>>>>>>>>>>>>>>>>>>>>>>>>  allure-ee-rabbitmq-0
Name:               allure-ee-rabbitmq-0
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/10.0.2.15
Start Time:         Fri, 25 Jan 2019 23:43:29 +0300
Labels:             app=rabbitmq
                    chart=rabbitmq-4.1.0
                    controller-revision-hash=allure-ee-rabbitmq-79b58f8789
                    release=allure-ee
                    statefulset.kubernetes.io/pod-name=allure-ee-rabbitmq-0
Annotations:        <none>
Status:             Running
IP:                 172.17.0.13
Controlled By:      StatefulSet/allure-ee-rabbitmq
Containers:
  rabbitmq:
    Container ID:  docker://c9cc56ab7d20345a34399dbf98997a32f022f2f76c449828c4dc40f89b2d3001
    Image:         docker.io/bitnami/rabbitmq:3.7.10
    Image ID:      docker-pullable://bitnami/rabbitmq@sha256:9d77cf6023180b7b74633a86a378de04840c9d5ed7ffff30f17bc5970c08c6af
    Ports:         4369/TCP, 5672/TCP, 25672/TCP, 15672/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Command:
      bash
      -ec
      mkdir -p /opt/bitnami/rabbitmq/.rabbitmq/
      mkdir -p /opt/bitnami/rabbitmq/etc/rabbitmq/
      #persist the erlang cookie in both places for server and cli tools
      echo $RABBITMQ_ERL_COOKIE > /opt/bitnami/rabbitmq/var/lib/rabbitmq/.erlang.cookie
      cp /opt/bitnami/rabbitmq/var/lib/rabbitmq/.erlang.cookie /opt/bitnami/rabbitmq/.rabbitmq/
      #change permission so only the user has access to the cookie file
      chmod 600 /opt/bitnami/rabbitmq/.rabbitmq/.erlang.cookie /opt/bitnami/rabbitmq/var/lib/rabbitmq/.erlang.cookie
      #copy the mounted configuration to both places
      cp  /opt/bitnami/rabbitmq/conf/* /opt/bitnami/rabbitmq/etc/rabbitmq
      # Apply resources limits
      ulimit -n "${RABBITMQ_ULIMIT_NOFILES}"
      #replace the default password that is generated
      sed -i "s/CHANGEME/$RABBITMQ_PASSWORD/g" /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf
      # Move logs to stdout
      ln -sF /dev/stdout /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@${MY_POD_IP}.log
      ln -sF /dev/stdout /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@${MY_POD_IP}_upgrade.log
      exec rabbitmq-server

    State:          Running
      Started:      Sat, 26 Jan 2019 10:59:18 +0300
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 25 Jan 2019 23:44:10 +0300
      Finished:     Sat, 26 Jan 2019 01:30:15 +0300
    Ready:          True
    Restart Count:  1
    Liveness:       exec [sh -c test "$(curl -sS -f --user allure:$RABBITMQ_PASSWORD 127.0.0.1:15672/api/healthchecks/node)" = '{"status":"ok"}'] delay=120s timeout=20s period=30s #success=1 #failure=6
    Readiness:      exec [sh -c test "$(curl -sS -f --user allure:$RABBITMQ_PASSWORD 127.0.0.1:15672/api/healthchecks/node)" = '{"status":"ok"}'] delay=10s timeout=20s period=30s #success=1 #failure=3
    Environment:
      MY_POD_IP:                 (v1:status.podIP)
      MY_POD_NAME:              allure-ee-rabbitmq-0 (v1:metadata.name)
      MY_POD_NAMESPACE:         default (v1:metadata.namespace)
      K8S_SERVICE_NAME:         allure-ee-rabbitmq-headless
      K8S_ADDRESS_TYPE:         ip
      RABBITMQ_NODENAME:        rabbit@$(MY_POD_IP)
      RABBITMQ_ULIMIT_NOFILES:  65536
      RABBITMQ_USE_LONGNAME:    true
      RABBITMQ_ERL_COOKIE:      <set to the key 'rabbitmq-erlang-cookie' in secret 'allure-ee-rabbitmq'>  Optional: false
      RABBITMQ_PASSWORD:        <set to the key 'rabbitmq-password' in secret 'allure-ee-rabbitmq'>       Optional: false
    Mounts:
      /opt/bitnami/rabbitmq/conf from config-volume (rw)
      /opt/bitnami/rabbitmq/var/lib/rabbitmq/ from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from allure-ee-rabbitmq-token-x5rqf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      allure-ee-rabbitmq-config
    Optional:  false
  data:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  allure-ee-rabbitmq-token-x5rqf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  allure-ee-rabbitmq-token-x5rqf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   SandboxChanged  33m                kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          33m                kubelet, minikube  Container image "docker.io/bitnami/rabbitmq:3.7.10" already present on machine
  Normal   Created         33m                kubelet, minikube  Created container
  Normal   Started         33m                kubelet, minikube  Started container
  Warning  Unhealthy       30m (x6 over 32m)  kubelet, minikube  Readiness probe failed: curl: (7) Failed to connect to 127.0.0.1 port 15672: Connection refused
  Warning  Unhealthy       30m (x2 over 30m)  kubelet, minikube  Liveness probe failed: curl: (7) Failed to connect to 127.0.0.1 port 15672: Connection refused
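
The RabbitMQ pod recovered: the probe failures above happened in the first minutes after the sandbox was recreated, and the pod is now Ready. If in doubt, the same health check the probes use can be run by hand; this command is reconstructed from the probe definition above, not part of the original session, and should return {"status":"ok"}:

kubectl exec allure-ee-rabbitmq-0 -- sh -c \
  'curl -sS -f --user allure:$RABBITMQ_PASSWORD 127.0.0.1:15672/api/healthchecks/node'
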
>>>>>>>>>>>>>>>>>>>>>>>>>  allure-ee-report-6f6bc894fb-5x9k2
Name:               allure-ee-report-6f6bc894fb-5x9k2
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/10.0.2.15
Start Time:         Fri, 25 Jan 2019 23:43:28 +0300
Labels:             app=allure-ee-report
                    pod-template-hash=6f6bc894fb
Annotations:        <none>
Status:             Running
IP:                 172.17.0.14
Controlled By:      ReplicaSet/allure-ee-report-6f6bc894fb
Containers:
  allure-ee-report:
    Container ID:   docker://28575afc72920bb3f87f19e8bd2a117a33999d01c7b4a336fcacc3bc0d70c9a3
    Image:          allure/allure-report:latest
    Image ID:       docker-pullable://allure/allure-report@sha256:59a2b1978de7d2c3edb61c17d22ea29cd41f1954dbea64a0401800005ad258f6
    Port:           8081/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Sat, 26 Jan 2019 11:30:15 +0300
      Finished:     Sat, 26 Jan 2019 11:32:10 +0300
    Ready:          False
    Restart Count:  38
    Liveness:       http-get http://:http/api/rs/management/health delay=60s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:http/api/rs/management/health delay=60s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SPRING_RABBITMQ_HOST:                             allure-ee-rabbitmq
      SPRING_RABBITMQ_USERNAME:                         allure
      SPRING_RABBITMQ_PASSWORD:                         allure
      ALLURE_BLOB_STORAGE_FILE_SYSTEM_DIRECTORY:        /opt/allure/report/storage
      ALLURE_BLOB_STORAGE_TYPE:                         FILE_SYSTEM
      JAVA_OPTS:                                        -Xss256k -Xms256m -Xmx1g -XX:+UseStringDeduplication -XX:+UseG1GC

      SERVER_SERVLET_CONTEXTPATH:                       /api/rs
      SPRING_CLOUD_CONSUL_ENABLED:                      false
      SPRING_MVC_ASYNC_REQUEST_TIMEOUT:                 -1
      SPRING_RABBITMQ_LISTENER_SIMPLE_CONCURRENCY:      3
      SPRING_RABBITMQ_LISTENER_SIMPLE_MAX_CONCURRENCY:  15
      SPRING_RABBITMQ_LISTENER_SIMPLE_PREFETCH:         10
    Mounts:
      /opt/allure/report/storage from storage-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hkh42 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  storage-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-hkh42:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hkh42
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                   From               Message
  ----     ------          ----                  ----               -------
  Normal   Pulled          10h (x18 over 11h)    kubelet, minikube  Successfully pulled image "allure/allure-report:latest"
  Warning  BackOff         10h (x222 over 11h)   kubelet, minikube  Back-off restarting failed container
  Warning  Unhealthy       10h (x157 over 11h)   kubelet, minikube  Readiness probe failed: Get http://172.17.0.14:8081/api/rs/management/health: dial tcp 172.17.0.14:8081: connect: connection refused
  Normal   SandboxChanged  33m                   kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling         31m (x2 over 33m)     kubelet, minikube  pulling image "allure/allure-report:latest"
  Normal   Killing         31m                   kubelet, minikube  Killing container with id docker://allure-ee-report:Container failed liveness probe.. Container will be killed and recreated.
  Normal   Pulled          31m (x2 over 33m)     kubelet, minikube  Successfully pulled image "allure/allure-report:latest"
  Normal   Created         31m (x2 over 33m)     kubelet, minikube  Created container
  Normal   Started         31m (x2 over 33m)     kubelet, minikube  Started container
  Warning  Unhealthy       29m (x6 over 32m)     kubelet, minikube  Liveness probe failed: Get http://172.17.0.14:8081/api/rs/management/health: dial tcp 172.17.0.14:8081: connect: connection refused
  Warning  BackOff         8m11s (x48 over 23m)  kubelet, minikube  Back-off restarting failed container
  Warning  Unhealthy       3m12s (x50 over 32m)  kubelet, minikube  Readiness probe failed: Get http://172.17.0.14:8081/api/rs/management/health: dial tcp 172.17.0.14:8081: connect: connection refused
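
The report container's last run lasted from 11:30:15 to 11:32:10, roughly 115 seconds, and ended with exit code 137 (SIGKILL) right after the kubelet logged "Container failed liveness probe". With delay=60s, period=10s, timeout=1s and failure=3, anything not answering /api/rs/management/health by about the 90-second mark gets killed, so a slow Spring Boot startup alone is enough to produce this CrashLoopBackOff. One way to test that hypothesis (a sketch only; the field paths are standard Kubernetes, and the change would be reverted by the next helm upgrade) is to relax the probes on the Deployment directly:

kubectl patch deployment allure-ee-report --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/initialDelaySeconds", "value": 300},
  {"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/initialDelaySeconds", "value": 300},
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/timeoutSeconds", "value": 10}
]'

If the pod then reaches Ready on its own, the probe budget, not the application, is the problem.
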
>>>>>>>>>>>>>>>>>>>>>>>>>  allure-ee-uaa-75dc446dc7-d2dts
Name:               allure-ee-uaa-75dc446dc7-d2dts
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/10.0.2.15
Start Time:         Fri, 25 Jan 2019 23:43:28 +0300
Labels:             app=allure-ee-uaa
                    pod-template-hash=75dc446dc7
Annotations:        <none>
Status:             Running
IP:                 172.17.0.7
Controlled By:      ReplicaSet/allure-ee-uaa-75dc446dc7
Containers:
  allure-ee-uaa:
    Container ID:   docker://c1c57547fce3e0a6adb310cd1b715f2e9842125750eaa2e54e3da0d5b9c4e738
    Image:          allure/allure-uaa:latest
    Image ID:       docker-pullable://allure/allure-uaa@sha256:4e50aba88540dfdbe483d5aa629fb0c7c9f691458295f61f81b64699d362fbe9
    Port:           8082/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Sat, 26 Jan 2019 11:30:21 +0300
      Finished:     Sat, 26 Jan 2019 11:32:16 +0300
    Ready:          False
    Restart Count:  35
    Liveness:       http-get http://:http/api/uaa/management/health delay=60s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:http/api/uaa/management/health delay=60s timeout=1s period=10s #success=1 #failure=3
    Environment:
      JAVA_OPTS:                         -Xss256k -Xms256m -Xmx256m -XX:+UseStringDeduplication -XX:+UseG1GC

      SERVER_SERVLET_CONTEXTPATH:        /api/uaa
      SPRING_CLOUD_CONSUL_ENABLED:       false
      SPRING_MVC_ASYNC_REQUEST_TIMEOUT:  -1
      ALLURE_SECURITY_USER_NAME:         <set to the key 'username' in secret 'allure-ee'>    Optional: false
      ALLURE_SECURITY_USER_PASSWORD:     <set to the key 'password' in secret 'allure-ee'>    Optional: false
      ALLURE_LICENSE_BODY:               <set to the key 'licenseKey' in secret 'allure-ee'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hkh42 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-hkh42:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hkh42
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                   From               Message
  ----     ------          ----                  ----               -------
  Warning  Unhealthy       10h (x129 over 11h)   kubelet, minikube  Readiness probe failed: Get http://172.17.0.12:8082/api/uaa/management/health: dial tcp 172.17.0.12:8082: connect: connection refused
  Warning  Unhealthy       10h (x76 over 11h)    kubelet, minikube  Liveness probe failed: Get http://172.17.0.12:8082/api/uaa/management/health: dial tcp 172.17.0.12:8082: connect: connection refused
  Warning  BackOff         10h (x254 over 11h)   kubelet, minikube  Back-off restarting failed container
  Normal   SandboxChanged  33m                   kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling         31m (x2 over 33m)     kubelet, minikube  pulling image "allure/allure-uaa:latest"
  Normal   Killing         31m                   kubelet, minikube  Killing container with id docker://allure-ee-uaa:Container failed liveness probe.. Container will be killed and recreated.
  Normal   Pulled          31m (x2 over 33m)     kubelet, minikube  Successfully pulled image "allure/allure-uaa:latest"
  Normal   Created         31m (x2 over 33m)     kubelet, minikube  Created container
  Normal   Started         31m (x2 over 33m)     kubelet, minikube  Started container
  Warning  Unhealthy       29m (x9 over 32m)     kubelet, minikube  Readiness probe failed: Get http://172.17.0.7:8082/api/uaa/management/health: dial tcp 172.17.0.7:8082: connect: connection refused
  Warning  BackOff         8m5s (x47 over 23m)   kubelet, minikube  Back-off restarting failed container
  Warning  Unhealthy       3m16s (x25 over 32m)  kubelet, minikube  Liveness probe failed: Get http://172.17.0.7:8082/api/uaa/management/health: dial tcp 172.17.0.7:8082: connect: connection refused
>>>>>>>>>>>>>>>>>>>>>>>>>  allure-ee-ui-55878c4946-dsqll
Name:               allure-ee-ui-55878c4946-dsqll
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/10.0.2.15
Start Time:         Fri, 25 Jan 2019 23:43:28 +0300
Labels:             app=allure-ee-ui
                    pod-template-hash=55878c4946
Annotations:        <none>
Status:             Running
IP:                 172.17.0.12
Controlled By:      ReplicaSet/allure-ee-ui-55878c4946
Containers:
  allure-ee-ui:
    Container ID:   docker://f2b1bfbb0c28802c2969f387ac80a7d48cb00e96a07c738ba85531bc07b29c0c
    Image:          allure/allure-ui:latest
    Image ID:       docker-pullable://allure/allure-ui@sha256:81a08593827fb87cb393d162b0c1a02ca557af62d88770bd48104b76d61e5e66
    Port:           8083/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Sat, 26 Jan 2019 11:30:23 +0300
      Finished:     Sat, 26 Jan 2019 11:32:21 +0300
    Ready:          False
    Restart Count:  35
    Liveness:       http-get http://:http/management/health delay=60s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:http/management/health delay=60s timeout=1s period=10s #success=1 #failure=3
    Environment:
      JAVA_OPTS:                    -Xss256k -Xms256m -Xmx256m -XX:+UseStringDeduplication -XX:+UseG1GC

      SPRING_CLOUD_CONSUL_ENABLED:  false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hkh42 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-hkh42:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hkh42
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                   From               Message
  ----     ------          ----                  ----               -------
  Warning  Unhealthy       10h (x107 over 11h)   kubelet, minikube  Readiness probe failed: Get http://172.17.0.13:8083/management/health: dial tcp 172.17.0.13:8083: connect: connection refused
  Warning  Unhealthy       10h (x73 over 11h)    kubelet, minikube  Liveness probe failed: Get http://172.17.0.13:8083/management/health: dial tcp 172.17.0.13:8083: connect: connection refused
  Warning  BackOff         10h (x247 over 11h)   kubelet, minikube  Back-off restarting failed container
  Normal   SandboxChanged  33m                   kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling         31m (x2 over 33m)     kubelet, minikube  pulling image "allure/allure-ui:latest"
  Normal   Killing         31m                   kubelet, minikube  Killing container with id docker://allure-ee-ui:Container failed liveness probe.. Container will be killed and recreated.
  Normal   Pulled          31m (x2 over 33m)     kubelet, minikube  Successfully pulled image "allure/allure-ui:latest"
  Normal   Created         31m (x2 over 33m)     kubelet, minikube  Created container
  Normal   Started         31m (x2 over 33m)     kubelet, minikube  Started container
  Warning  Unhealthy       29m (x6 over 32m)     kubelet, minikube  Liveness probe failed: Get http://172.17.0.12:8083/management/health: dial tcp 172.17.0.12:8083: connect: connection refused
  Warning  BackOff         8m15s (x45 over 23m)  kubelet, minikube  Back-off restarting failed container
  Warning  Unhealthy       3m15s (x49 over 32m)  kubelet, minikube  Readiness probe failed: Get http://172.17.0.12:8083/management/health: dial tcp 172.17.0.12:8083: connect: connection refused
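
The uaa and ui pods show exactly the same pattern as the report pod: exit code 137, a lifetime of about two minutes per attempt, and probes configured with a 60 s initial delay and a 1 s timeout. A one-liner to compare the probe settings across the three Deployments (a sketch only; names taken from the output above):

kubectl get deploy allure-ee-report allure-ee-uaa allure-ee-ui -o custom-columns=\
NAME:.metadata.name,\
DELAY:.spec.template.spec.containers[0].livenessProbe.initialDelaySeconds,\
TIMEOUT:.spec.template.spec.containers[0].livenessProbe.timeoutSeconds,\
PERIOD:.spec.template.spec.containers[0].livenessProbe.periodSeconds
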
Macbook-Sudakov:~ knoppix$ for i in `kubectl get pods | grep allure-ee | awk '{print $1}'`; do echo ">>>>>>>>>>>>>>>>>>>>>>>>>  $i"; kubectl logs $i; done
>>>>>>>>>>>>>>>>>>>>>>>>>  allure-ee-rabbitmq-0
2019-01-26 08:01:20.870 [info] <0.8.0> Log file opened with Lager
2019-01-26 08:01:20.908 [info] <0.8.0> Log file opened with Lager

  ##  ##
  ##  ##      RabbitMQ 3.7.10. Copyright (C) 2007-2018 Pivotal Software, Inc.
  ##########  Licensed under the MPL.  See http://www.rabbitmq.com/
  ######  ##
  ##########  Logs: /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@172.17.0.13.log
                    /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@172.17.0.13_upgrade.log

              Starting broker...
2019-01-26 08:01:27.207 [info] <0.213.0>
 Starting RabbitMQ 3.7.10 on Erlang 21.2
 Copyright (C) 2007-2018 Pivotal Software, Inc.
 Licensed under the MPL.  See http://www.rabbitmq.com/
2019-01-26 08:01:27.277 [info] <0.213.0>
 node           : rabbit@172.17.0.13
 home dir       : /opt/bitnami/rabbitmq/.rabbitmq
 config file(s) : /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf
 cookie hash    : EiyJAid/cGQFUQSXz9jULg==
 log(s)         : /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@172.17.0.13.log
                : /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@172.17.0.13_upgrade.log
 database dir   : /opt/bitnami/rabbitmq/var/lib/rabbitmq/mnesia/rabbit@172.17.0.13
2019-01-26 08:02:24.155 [info] <0.221.0> Memory high watermark set to 796 MiB (835020390 bytes) of 1990 MiB (2087550976 bytes) total
2019-01-26 08:02:24.631 [info] <0.223.0> Enabling free disk space monitoring
2019-01-26 08:02:24.631 [info] <0.223.0> Disk free limit set to 50MB
2019-01-26 08:02:24.818 [info] <0.226.0> Limiting to approx 65436 file handles (58890 sockets)
2019-01-26 08:02:24.830 [info] <0.227.0> FHC read buffering:  OFF
2019-01-26 08:02:24.832 [info] <0.227.0> FHC write buffering: ON
2019-01-26 08:02:24.967 [info] <0.213.0> Node database directory at /opt/bitnami/rabbitmq/var/lib/rabbitmq/mnesia/rabbit@172.17.0.13 is empty. Assuming we need to join an existing cluster or initialise from scratch...
2019-01-26 08:02:24.968 [info] <0.213.0> Configured peer discovery backend: rabbit_peer_discovery_k8s
2019-01-26 08:02:24.986 [info] <0.213.0> Will try to lock with peer discovery backend rabbit_peer_discovery_k8s
2019-01-26 08:02:24.987 [info] <0.213.0> Peer discovery backend does not support locking, falling back to randomized delay
2019-01-26 08:02:24.987 [info] <0.213.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping randomized startup delay.
2019-01-26 08:02:27.574 [info] <0.213.0> k8s endpoint listing returned nodes not yet ready: 172.17.0.13
2019-01-26 08:02:27.574 [info] <0.213.0> All discovered existing cluster peers:
2019-01-26 08:02:27.575 [info] <0.213.0> Discovered no peer nodes to cluster with
2019-01-26 08:02:29.328 [info] <0.43.0> Application mnesia exited with reason: stopped
2019-01-26 08:02:35.371 [info] <0.213.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2019-01-26 08:02:35.828 [info] <0.213.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2019-01-26 08:02:36.475 [info] <0.213.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2019-01-26 08:02:36.475 [info] <0.213.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping registration.
2019-01-26 08:02:36.499 [info] <0.213.0> Priority queues enabled, real BQ is rabbit_variable_queue
2019-01-26 08:02:36.783 [info] <0.398.0> Starting rabbit_node_monitor
2019-01-26 08:02:37.052 [info] <0.213.0> message_store upgrades: 1 to apply
2019-01-26 08:02:37.052 [info] <0.213.0> message_store upgrades: Applying rabbit_variable_queue:move_messages_to_vhost_store
2019-01-26 08:02:37.058 [info] <0.213.0> message_store upgrades: No durable queues found. Skipping message store migration
2019-01-26 08:02:37.058 [info] <0.213.0> message_store upgrades: Removing the old message store data
2019-01-26 08:02:37.066 [info] <0.213.0> message_store upgrades: All upgrades applied successfully
2019-01-26 08:02:37.137 [info] <0.213.0> Management plugin: using rates mode 'basic'
2019-01-26 08:02:37.144 [info] <0.213.0> Adding vhost '/'
2019-01-26 08:02:37.203 [info] <0.438.0> Making sure data directory '/opt/bitnami/rabbitmq/var/lib/rabbitmq/mnesia/rabbit@172.17.0.13/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
2019-01-26 08:02:37.226 [info] <0.438.0> Starting message stores for vhost '/'
2019-01-26 08:02:37.229 [info] <0.442.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2019-01-26 08:02:37.253 [info] <0.438.0> Started message store of type transient for vhost '/'
2019-01-26 08:02:37.253 [info] <0.445.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2019-01-26 08:02:37.263 [warning] <0.445.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": rebuilding indices from scratch
2019-01-26 08:02:37.266 [info] <0.438.0> Started message store of type persistent for vhost '/'
2019-01-26 08:02:37.272 [info] <0.213.0> Creating user 'allure'
2019-01-26 08:02:37.279 [info] <0.213.0> Setting user tags for user 'allure' to [administrator]
2019-01-26 08:02:37.286 [info] <0.213.0> Setting permissions for 'allure' in '/' to '.*', '.*', '.*'
2019-01-26 08:02:37.738 [warning] <0.469.0> Setting Ranch options together with socket options is deprecated. Please use the new map syntax that allows specifying socket options separately from other options.
2019-01-26 08:02:37.739 [info] <0.483.0> started TCP listener on [::]:5672
2019-01-26 08:02:37.751 [info] <0.213.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@172.17.0.13'
2019-01-26 08:02:37.761 [info] <0.213.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@172.17.0.13'
2019-01-26 08:02:37.800 [info] <0.537.0> Peer discovery: enabling node cleanup (will only log warnings). Check interval: 10 seconds.
2019-01-26 08:02:37.898 [info] <0.546.0> Management plugin: HTTP (non-TLS) listener started on port 15672
2019-01-26 08:02:37.899 [info] <0.652.0> Statistics database started.
2019-01-26 08:02:37.920 [notice] <0.106.0> Changed loghwm of /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@172.17.0.13.log to 50
 completed with 5 plugins.
2019-01-26 08:02:41.656 [info] <0.8.0> Server startup complete; 5 plugins started.
 * rabbitmq_peer_discovery_k8s
 * rabbitmq_management
 * rabbitmq_web_dispatch
 * rabbitmq_peer_discovery_common
 * rabbitmq_management_agent
>>>>>>>>>>>>>>>>>>>>>>>>>  allure-ee-report-6f6bc894fb-5x9k2
2019-01-26 08:30:54.486  INFO 6 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@6950e31: startup date [Sat Jan 26 08:30:54 GMT 2019]; root of context hierarchy
2019-01-26 08:31:07.981  INFO 6 --- [           main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2019-01-26 08:31:08.856  INFO 6 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$b554b25f] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

    ___    ____
   /   |  / / /_  __________
  / /| | / / / / / / ___/ _ \
 / ___ |/ / / /_/ / /  /  __/
/_/  |_/_/_/\__,_/_/   \___/


Powered by Spring Boot  (v2.0.6.RELEASE)
2019-01-26 08:31:30.427  INFO 6 --- [           main] i.q.allure.report.ReportApplication      : The following profiles are active: prod
2019-01-26 08:31:31.353  INFO 6 --- [           main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@4690b489: startup date [Sat Jan 26 08:31:31 GMT 2019]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@6950e31
>>>>>>>>>>>>>>>>>>>>>>>>>  allure-ee-uaa-75dc446dc7-d2dts
2019-01-26 08:31:07.831  INFO 6 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@1f57539: startup date [Sat Jan 26 08:31:07 GMT 2019]; root of context hierarchy
2019-01-26 08:31:34.328  INFO 6 --- [           main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2019-01-26 08:31:35.767  INFO 6 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$dfc0aa1a] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

    ___    ____
   /   |  / / /_  __________
  / /| | / / / / / / ___/ _ \
 / ___ |/ / / /_/ / /  /  __/
/_/  |_/_/_/\__,_/_/   \___/


Powered by Spring Boot  (v2.0.6.RELEASE)
2019-01-26 08:31:44.009  INFO 6 --- [           main] io.qameta.allure.uaa.UaaApplication      : The following profiles are active: prod
2019-01-26 08:31:44.465  INFO 6 --- [           main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@78dd667e: startup date [Sat Jan 26 08:31:44 GMT 2019]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@1f57539
>>>>>>>>>>>>>>>>>>>>>>>>>  allure-ee-ui-55878c4946-dsqll
2019-01-26 08:31:31.837  INFO 6 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@2752f6e2: startup date [Sat Jan 26 08:31:31 GMT 2019]; root of context hierarchy
2019-01-26 08:31:42.264  INFO 6 --- [           main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2019-01-26 08:31:46.706  INFO 6 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$6fbe5f35] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

    ___    ____
   /   |  / / /_  __________
  / /| | / / / / / / ___/ _ \
 / ___ |/ / / /_/ / /  /  __/
/_/  |_/_/_/\__,_/_/   \___/


Powered by Spring Boot  (v2.0.6.RELEASE)
2019-01-26 08:32:09.396  INFO 6 --- [           main] io.qameta.allure.ui.UiApplication        : The following profiles are active: prod
2019-01-26 08:32:10.877  INFO 6 --- [           main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@52feb982: startup date [Sat Jan 26 08:32:10 GMT 2019]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@2752f6e2
2019-01-26 08:32:20.623  INFO 6 --- [           main] o.s.cloud.context.scope.GenericScope     : BeanFactory id=8ee50e35-9ebf-3101-9af5-1c005793c13a
2019-01-26 08:32:20.736  INFO 6 --- [           main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2019-01-26 08:32:21.176  INFO 6 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.retry.annotation.RetryConfiguration' of type [org.springframework.retry.annotation.RetryConfiguration$$EnhancerBySpringCGLIB$$4967f45d] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
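
The RabbitMQ broker started cleanly, but all three Spring Boot services are still initializing when they are killed. The log timestamps are GMT while the describe output is +0300, so the report pod's first log line at 08:30:54 is already about 40 seconds after the container started at 11:30:15, and its last captured line at 08:31:31 (about 75 seconds in) is still refreshing the web server context; none of the three reaches a "Started ...Application" line in the captured logs before the liveness probe restarts it. The pods are also BestEffort on a single minikube VM, which makes slow startup unsurprising. To measure how long one service actually needs, a single Deployment can be followed through a full startup with timestamps (a sketch; kubectl picks one pod of the Deployment):

kubectl logs -f --timestamps deploy/allure-ee-report

If startup consistently exceeds the 60 s delay plus three 10 s probe periods, the probe delays (and ideally the 1 s timeouts) for all three services need to be raised in the chart values; the exact value names depend on the allure-ee chart and are not shown in this paste.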