- Name: etcd-jenkins-kube-master
- Namespace: kube-system
- Node: jenkins-kube-master/172.20.43.30
- Start Time: Mon, 19 Mar 2018 13:54:56 -0400
- Labels: component=etcd
- tier=control-plane
- Annotations: kubernetes.io/config.hash=7278f85057e8bf5cb81c9f96d3b25320
- kubernetes.io/config.mirror=7278f85057e8bf5cb81c9f96d3b25320
- kubernetes.io/config.seen=2018-03-19T13:54:51.582570305-04:00
- kubernetes.io/config.source=file
- scheduler.alpha.kubernetes.io/critical-pod=
- Status: Running
- IP: 172.20.43.30
- Containers:
- etcd:
- Container ID: docker://96ffe6a6e41fe03ecc81cd17f91f2024d3c2442da65b765262e7bc988e45ab40
- Image: gcr.io/google_containers/etcd-amd64:3.1.11
- Image ID: docker-pullable://gcr.io/google_containers/etcd-amd64@sha256:54889c08665d241e321ca5ce976b2df0f766794b698d53faf6b7dacb95316680
- Port: <none>
- Command:
- etcd
- --listen-client-urls=http://127.0.0.1:2379
- --advertise-client-urls=http://127.0.0.1:2379
- --data-dir=/var/lib/etcd
- State: Running
- Started: Fri, 23 Mar 2018 14:12:17 -0400
- Last State: Terminated
- Reason: Completed
- Exit Code: 0
- Started: Wed, 21 Mar 2018 17:38:28 -0400
- Finished: Fri, 23 Mar 2018 14:11:50 -0400
- Ready: True
- Restart Count: 3
- Liveness: http-get http://127.0.0.1:2379/health delay=15s timeout=15s period=10s #success=1 #failure=8
- Environment: <none>
- Mounts:
- /var/lib/etcd from etcd (rw)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- etcd:
- Type: HostPath (bare host directory volume)
- Path: /var/lib/etcd
- HostPathType: DirectoryOrCreate
- QoS Class: BestEffort
- Node-Selectors: <none>
- Tolerations: :NoExecute
- Events: <none>
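As a back-of-the-envelope check, the etcd liveness probe above (delay=15s, period=10s, #failure=8) implies the kubelet would tolerate an unresponsive etcd for roughly a minute and a half before restarting it. This is only an approximation; exact timing also depends on probe timeouts and scheduling jitter:

```python
# Approximate worst-case time before the kubelet restarts an unhealthy etcd,
# taken from the probe parameters shown in the record above:
# delay=15s, period=10s, #failure=8.
delay, period, failure_threshold = 15, 10, 8
seconds_until_restart = delay + period * failure_threshold
print(seconds_until_restart)  # 95
```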
- Name: kube-apiserver-jenkins-kube-master
- Namespace: kube-system
- Node: jenkins-kube-master/172.20.43.30
- Start Time: Mon, 19 Mar 2018 13:54:56 -0400
- Labels: component=kube-apiserver
- tier=control-plane
- Annotations: kubernetes.io/config.hash=b80020550c8b69e2c282df50c5594498
- kubernetes.io/config.mirror=b80020550c8b69e2c282df50c5594498
- kubernetes.io/config.seen=2018-03-19T13:54:51.582586437-04:00
- kubernetes.io/config.source=file
- scheduler.alpha.kubernetes.io/critical-pod=
- Status: Running
- IP: 172.20.43.30
- Containers:
- kube-apiserver:
- Container ID: docker://fdf4f410db8c8179458bd857728d2e37236f5a3decb2f9560d44edae34ec2e62
- Image: gcr.io/google_containers/kube-apiserver-amd64:v1.9.4
- Image ID: docker-pullable://gcr.io/google_containers/kube-apiserver-amd64@sha256:ed117d618b49663e48c0579e447c529c9aaec4bef31c86a3c6f033211f89131b
- Port: <none>
- Command:
- kube-apiserver
- --advertise-address=172.20.43.30
- --service-cluster-ip-range=10.96.0.0/12
- --secure-port=6443
- --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
- --requestheader-group-headers=X-Remote-Group
- --requestheader-allowed-names=front-proxy-client
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --insecure-port=0
- --allow-privileged=true
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --enable-bootstrap-token-auth=true
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --requestheader-username-headers=X-Remote-User
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --authorization-mode=Node,RBAC
- --etcd-servers=http://127.0.0.1:2379
- State: Running
- Started: Fri, 23 Mar 2018 14:12:31 -0400
- Last State: Terminated
- Reason: Error
- Exit Code: 137
- Started: Fri, 23 Mar 2018 14:11:51 -0400
- Finished: Fri, 23 Mar 2018 14:12:13 -0400
- Ready: True
- Restart Count: 3
- Requests:
- cpu: 250m
- Liveness: http-get https://172.20.43.30:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
- Environment: <none>
- Mounts:
- /etc/kubernetes/pki from k8s-certs (ro)
- /etc/pki from ca-certs-etc-pki (ro)
- /etc/ssl/certs from ca-certs (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- ca-certs-etc-pki:
- Type: HostPath (bare host directory volume)
- Path: /etc/pki
- HostPathType: DirectoryOrCreate
- k8s-certs:
- Type: HostPath (bare host directory volume)
- Path: /etc/kubernetes/pki
- HostPathType: DirectoryOrCreate
- ca-certs:
- Type: HostPath (bare host directory volume)
- Path: /etc/ssl/certs
- HostPathType: DirectoryOrCreate
- QoS Class: Burstable
- Node-Selectors: <none>
- Tolerations: :NoExecute
- Events: <none>
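A side note on the apiserver's previous termination above: exit code 137 follows the common 128 + signal-number convention, i.e. the container was killed with SIGKILL (signal 9), which typically indicates an OOM kill or a forced stop rather than a crash in the process itself. A quick check of the arithmetic:

```python
# Decode the container exit code 137 seen in the kube-apiserver record above.
# Codes above 128 conventionally mean "killed by signal (code - 128)".
import signal

exit_code = 137
sig = exit_code - 128
print(signal.Signals(sig).name)  # SIGKILL
```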
- Name: kube-controller-manager-jenkins-kube-master
- Namespace: kube-system
- Node: jenkins-kube-master/172.20.43.30
- Start Time: Mon, 19 Mar 2018 13:54:56 -0400
- Labels: component=kube-controller-manager
- tier=control-plane
- Annotations: kubernetes.io/config.hash=edaf99815085187382e94923ac2cd9ba
- kubernetes.io/config.mirror=edaf99815085187382e94923ac2cd9ba
- kubernetes.io/config.seen=2018-03-19T13:54:51.582593161-04:00
- kubernetes.io/config.source=file
- scheduler.alpha.kubernetes.io/critical-pod=
- Status: Running
- IP: 172.20.43.30
- Containers:
- kube-controller-manager:
- Container ID: docker://43ab56e7953a8e6b40a5132844773c6c2bd22ee05ca9578e6007a487f3744216
- Image: gcr.io/google_containers/kube-controller-manager-amd64:v1.9.4
- Image ID: docker-pullable://gcr.io/google_containers/kube-controller-manager-amd64@sha256:320d64ebfd516bb1548868a1e2440eb8cf9f5573dcb6f1915cc5f64d39412eb1
- Port: <none>
- Command:
- kube-controller-manager
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --address=127.0.0.1
- --leader-elect=true
- --use-service-account-credentials=true
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --allocate-node-cidrs=true
- --cluster-cidr=172.20.43.0/16
- --node-cidr-mask-size=24
- State: Running
- Started: Fri, 23 Mar 2018 14:12:17 -0400
- Last State: Terminated
- Reason: Error
- Exit Code: 2
- Started: Fri, 23 Mar 2018 14:11:51 -0400
- Finished: Fri, 23 Mar 2018 14:12:03 -0400
- Ready: True
- Restart Count: 4
- Requests:
- cpu: 200m
- Liveness: http-get http://127.0.0.1:10252/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
- Environment: <none>
- Mounts:
- /etc/kubernetes/controller-manager.conf from kubeconfig (ro)
- /etc/kubernetes/pki from k8s-certs (ro)
- /etc/pki from ca-certs-etc-pki (ro)
- /etc/ssl/certs from ca-certs (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- k8s-certs:
- Type: HostPath (bare host directory volume)
- Path: /etc/kubernetes/pki
- HostPathType: DirectoryOrCreate
- ca-certs:
- Type: HostPath (bare host directory volume)
- Path: /etc/ssl/certs
- HostPathType: DirectoryOrCreate
- kubeconfig:
- Type: HostPath (bare host directory volume)
- Path: /etc/kubernetes/controller-manager.conf
- HostPathType: FileOrCreate
- ca-certs-etc-pki:
- Type: HostPath (bare host directory volume)
- Path: /etc/pki
- HostPathType: DirectoryOrCreate
- QoS Class: Burstable
- Node-Selectors: <none>
- Tolerations: :NoExecute
- Events: <none>
- Name: kube-dns-6f4fd4bdf-7pr9q
- Namespace: kube-system
- Node: jenkins-kube-master/172.20.43.30
- Start Time: Mon, 19 Mar 2018 16:00:12 -0400
- Labels: k8s-app=kube-dns
- pod-template-hash=290980689
- Annotations: <none>
- Status: Running
- IP: 172.20.0.4
- Controlled By: ReplicaSet/kube-dns-6f4fd4bdf
- Containers:
- kubedns:
- Container ID: docker://5c7d6e6b379a754a84c8d339aac5f10eb4bb932903f7075c4ed8bfef0f4aa61b
- Image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
- Image ID: docker-pullable://gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:f5bddc71efe905f4e4b96f3ca346414be6d733610c1525b98fff808f93966680
- Ports: 10053/UDP, 10053/TCP, 10055/TCP
- Args:
- --domain=cluster.local.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v=2
- State: Running
- Started: Fri, 23 Mar 2018 14:12:28 -0400
- Last State: Terminated
- Reason: ContainerCannotRun
- Message: cannot join network of a non running container: d69578592b486e733333871cec3ec83c253738525e573818d80d7586c1f6c814
- Exit Code: 128
- Started: Fri, 23 Mar 2018 14:12:15 -0400
- Finished: Fri, 23 Mar 2018 14:12:15 -0400
- Ready: True
- Restart Count: 2
- Limits:
- memory: 170Mi
- Requests:
- cpu: 100m
- memory: 70Mi
- Liveness: http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
- Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
- Environment:
- PROMETHEUS_PORT: 10055
- Mounts:
- /kube-dns-config from kube-dns-config (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4v69t (ro)
- dnsmasq:
- Container ID: docker://79cb96582cc6c9499920487dbd068f35a3a5878261571f4d99b1011caed1690c
- Image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
- Image ID: docker-pullable://gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:6cfb9f9c2756979013dbd3074e852c2d8ac99652570c5d17d152e0c0eb3321d6
- Ports: 53/UDP, 53/TCP
- Args:
- -v=2
- -logtostderr
- -configDir=/etc/k8s/dns/dnsmasq-nanny
- -restartDnsmasq=true
- --
- -k
- --cache-size=1000
- --no-negcache
- --log-facility=-
- --server=/cluster.local/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
- State: Running
- Started: Fri, 23 Mar 2018 14:12:28 -0400
- Last State: Terminated
- Reason: ContainerCannotRun
- Message: cannot join network of a non running container: d69578592b486e733333871cec3ec83c253738525e573818d80d7586c1f6c814
- Exit Code: 128
- Started: Fri, 23 Mar 2018 14:12:15 -0400
- Finished: Fri, 23 Mar 2018 14:12:15 -0400
- Ready: True
- Restart Count: 2
- Requests:
- cpu: 150m
- memory: 20Mi
- Liveness: http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
- Environment: <none>
- Mounts:
- /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4v69t (ro)
- sidecar:
- Container ID: docker://8810f93cb5227d7f336c1ca1d9a44f052ed34a887347dc24912e0381568f523d
- Image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
- Image ID: docker-pullable://gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:f80f5f9328107dc516d67f7b70054354b9367d31d4946a3bffd3383d83d7efe8
- Port: 10054/TCP
- Args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
- State: Running
- Started: Fri, 23 Mar 2018 14:12:28 -0400
- Last State: Terminated
- Reason: ContainerCannotRun
- Message: cannot join network of a non running container: d69578592b486e733333871cec3ec83c253738525e573818d80d7586c1f6c814
- Exit Code: 128
- Started: Fri, 23 Mar 2018 14:12:15 -0400
- Finished: Fri, 23 Mar 2018 14:12:15 -0400
- Ready: True
- Restart Count: 2
- Requests:
- cpu: 10m
- memory: 20Mi
- Liveness: http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
- Environment: <none>
- Mounts:
- /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4v69t (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- kube-dns-config:
- Type: ConfigMap (a volume populated by a ConfigMap)
- Name: kube-dns
- Optional: true
- kube-dns-token-4v69t:
- Type: Secret (a volume populated by a Secret)
- SecretName: kube-dns-token-4v69t
- Optional: false
- QoS Class: Burstable
- Node-Selectors: <none>
- Tolerations: CriticalAddonsOnly
- node-role.kubernetes.io/master:NoSchedule
- node.kubernetes.io/not-ready:NoExecute for 300s
- node.kubernetes.io/unreachable:NoExecute for 300s
- Events: <none>
- Name: kube-flannel-ds-pvk8m
- Namespace: kube-system
- Node: jenkins-slave-003/172.20.43.72
- Start Time: Tue, 20 Mar 2018 09:16:03 -0400
- Labels: app=flannel
- controller-revision-hash=884040133
- pod-template-generation=1
- tier=node
- Annotations: <none>
- Status: Running
- IP: 172.20.43.72
- Controlled By: DaemonSet/kube-flannel-ds
- Init Containers:
- install-cni:
- Container ID: docker://87d1045113509070c70fc4d18a106ae133c59847fed503c68c183a8c470492ae
- Image: quay.io/coreos/flannel:v0.10.0-amd64
- Image ID: docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
- Port: <none>
- Command:
- cp
- Args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
- State: Terminated
- Reason: Completed
- Exit Code: 0
- Started: Fri, 23 Mar 2018 14:19:29 -0400
- Finished: Fri, 23 Mar 2018 14:19:29 -0400
- Ready: True
- Restart Count: 0
- Environment: <none>
- Mounts:
- /etc/cni/net.d from cni (rw)
- /etc/kube-flannel/ from flannel-cfg (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-868pq (ro)
- Containers:
- kube-flannel:
- Container ID: docker://82101d867ed078056b1a7365811b6de289ac6253d30410752b3e56fcc08b4cc9
- Image: quay.io/coreos/flannel:v0.10.0-amd64
- Image ID: docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
- Port: <none>
- Command:
- /opt/bin/flanneld
- Args:
- --ip-masq
- --kube-subnet-mgr
- State: Running
- Started: Fri, 23 Mar 2018 14:19:30 -0400
- Last State: Terminated
- Reason: Error
- Exit Code: 2
- Started: Tue, 20 Mar 2018 09:16:19 -0400
- Finished: Fri, 23 Mar 2018 14:19:25 -0400
- Ready: True
- Restart Count: 1
- Limits:
- cpu: 100m
- memory: 50Mi
- Requests:
- cpu: 100m
- memory: 50Mi
- Environment:
- POD_NAME: kube-flannel-ds-pvk8m (v1:metadata.name)
- POD_NAMESPACE: kube-system (v1:metadata.namespace)
- Mounts:
- /etc/kube-flannel/ from flannel-cfg (rw)
- /run from run (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-868pq (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- run:
- Type: HostPath (bare host directory volume)
- Path: /run
- HostPathType:
- cni:
- Type: HostPath (bare host directory volume)
- Path: /etc/cni/net.d
- HostPathType:
- flannel-cfg:
- Type: ConfigMap (a volume populated by a ConfigMap)
- Name: kube-flannel-cfg
- Optional: false
- flannel-token-868pq:
- Type: Secret (a volume populated by a Secret)
- SecretName: flannel-token-868pq
- Optional: false
- QoS Class: Guaranteed
- Node-Selectors: beta.kubernetes.io/arch=amd64
- Tolerations: node-role.kubernetes.io/master:NoSchedule
- node.kubernetes.io/disk-pressure:NoSchedule
- node.kubernetes.io/memory-pressure:NoSchedule
- node.kubernetes.io/not-ready:NoExecute
- node.kubernetes.io/unreachable:NoExecute
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Killing 59m kubelet, jenkins-slave-003 Killing container with id docker://kube-flannel:Need to kill Pod
- Warning FailedCreatePodSandBox 59m kubelet, jenkins-slave-003 Failed create pod sandbox.
- Normal SandboxChanged 59m (x2 over 59m) kubelet, jenkins-slave-003 Pod sandbox changed, it will be killed and re-created.
- Normal Pulled 59m kubelet, jenkins-slave-003 Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
- Normal Created 59m (x2 over 3d) kubelet, jenkins-slave-003 Created container
- Normal Started 59m (x2 over 3d) kubelet, jenkins-slave-003 Started container
- Normal Created 59m (x2 over 3d) kubelet, jenkins-slave-003 Created container
- Normal Started 59m (x2 over 3d) kubelet, jenkins-slave-003 Started container
- Normal Pulled 59m (x2 over 3d) kubelet, jenkins-slave-003 Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
- Name: kube-flannel-ds-q4fsl
- Namespace: kube-system
- Node: jenkins-slave-002/172.20.43.68
- Start Time: Tue, 20 Mar 2018 09:15:36 -0400
- Labels: app=flannel
- controller-revision-hash=884040133
- pod-template-generation=1
- tier=node
- Annotations: <none>
- Status: Running
- IP: 172.20.43.68
- Controlled By: DaemonSet/kube-flannel-ds
- Init Containers:
- install-cni:
- Container ID: docker://487df4d36ec0abcd2303ef969a59087d3b95a57ad39e462aa1fe82f4567e0664
- Image: quay.io/coreos/flannel:v0.10.0-amd64
- Image ID: docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
- Port: <none>
- Command:
- cp
- Args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
- State: Terminated
- Reason: Completed
- Exit Code: 0
- Started: Fri, 23 Mar 2018 14:18:59 -0400
- Finished: Fri, 23 Mar 2018 14:18:59 -0400
- Ready: True
- Restart Count: 0
- Environment: <none>
- Mounts:
- /etc/cni/net.d from cni (rw)
- /etc/kube-flannel/ from flannel-cfg (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-868pq (ro)
- Containers:
- kube-flannel:
- Container ID: docker://e6bbcf68c830f27c44f865cd66536eb268865908d83616ff2457031cb9687c8e
- Image: quay.io/coreos/flannel:v0.10.0-amd64
- Image ID: docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
- Port: <none>
- Command:
- /opt/bin/flanneld
- Args:
- --ip-masq
- --kube-subnet-mgr
- State: Running
- Started: Fri, 23 Mar 2018 14:19:00 -0400
- Last State: Terminated
- Reason: Error
- Exit Code: 2
- Started: Tue, 20 Mar 2018 09:15:51 -0400
- Finished: Fri, 23 Mar 2018 14:18:55 -0400
- Ready: True
- Restart Count: 1
- Limits:
- cpu: 100m
- memory: 50Mi
- Requests:
- cpu: 100m
- memory: 50Mi
- Environment:
- POD_NAME: kube-flannel-ds-q4fsl (v1:metadata.name)
- POD_NAMESPACE: kube-system (v1:metadata.namespace)
- Mounts:
- /etc/kube-flannel/ from flannel-cfg (rw)
- /run from run (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-868pq (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- run:
- Type: HostPath (bare host directory volume)
- Path: /run
- HostPathType:
- cni:
- Type: HostPath (bare host directory volume)
- Path: /etc/cni/net.d
- HostPathType:
- flannel-cfg:
- Type: ConfigMap (a volume populated by a ConfigMap)
- Name: kube-flannel-cfg
- Optional: false
- flannel-token-868pq:
- Type: Secret (a volume populated by a Secret)
- SecretName: flannel-token-868pq
- Optional: false
- QoS Class: Guaranteed
- Node-Selectors: beta.kubernetes.io/arch=amd64
- Tolerations: node-role.kubernetes.io/master:NoSchedule
- node.kubernetes.io/disk-pressure:NoSchedule
- node.kubernetes.io/memory-pressure:NoSchedule
- node.kubernetes.io/not-ready:NoExecute
- node.kubernetes.io/unreachable:NoExecute
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Killing 59m kubelet, jenkins-slave-002 Killing container with id docker://kube-flannel:Need to kill Pod
- Warning FailedCreatePodSandBox 59m kubelet, jenkins-slave-002 Failed create pod sandbox.
- Normal SandboxChanged 59m (x2 over 59m) kubelet, jenkins-slave-002 Pod sandbox changed, it will be killed and re-created.
- Normal Pulled 59m kubelet, jenkins-slave-002 Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
- Normal Created 59m (x2 over 3d) kubelet, jenkins-slave-002 Created container
- Normal Started 59m (x2 over 3d) kubelet, jenkins-slave-002 Started container
- Normal Pulled 59m (x2 over 3d) kubelet, jenkins-slave-002 Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
- Normal Created 59m (x2 over 3d) kubelet, jenkins-slave-002 Created container
- Normal Started 59m (x2 over 3d) kubelet, jenkins-slave-002 Started container
- Name: kube-flannel-ds-qhxn6
- Namespace: kube-system
- Node: jenkins-kube-master/172.20.43.30
- Start Time: Mon, 19 Mar 2018 15:59:21 -0400
- Labels: app=flannel
- controller-revision-hash=884040133
- pod-template-generation=1
- tier=node
- Annotations: <none>
- Status: Running
- IP: 172.20.43.30
- Controlled By: DaemonSet/kube-flannel-ds
- Init Containers:
- install-cni:
- Container ID: docker://b8c540507363463c0b9bf8d87ca8154a71812e1fda0ff071c6981cfc1426f23c
- Image: quay.io/coreos/flannel:v0.10.0-amd64
- Image ID: docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
- Port: <none>
- Command:
- cp
- Args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
- State: Terminated
- Reason: Completed
- Exit Code: 0
- Started: Fri, 23 Mar 2018 14:12:16 -0400
- Finished: Fri, 23 Mar 2018 14:12:17 -0400
- Ready: True
- Restart Count: 1
- Environment: <none>
- Mounts:
- /etc/cni/net.d from cni (rw)
- /etc/kube-flannel/ from flannel-cfg (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-868pq (ro)
- Containers:
- kube-flannel:
- Container ID: docker://09ea2ef1564982302f9e66a86218cc34472e393a24aac263c37db9fed488b90b
- Image: quay.io/coreos/flannel:v0.10.0-amd64
- Image ID: docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
- Port: <none>
- Command:
- /opt/bin/flanneld
- Args:
- --ip-masq
- --kube-subnet-mgr
- State: Running
- Started: Fri, 23 Mar 2018 14:13:04 -0400
- Last State: Terminated
- Reason: Error
- Exit Code: 1
- Started: Fri, 23 Mar 2018 14:12:19 -0400
- Finished: Fri, 23 Mar 2018 14:12:50 -0400
- Ready: True
- Restart Count: 2
- Limits:
- cpu: 100m
- memory: 50Mi
- Requests:
- cpu: 100m
- memory: 50Mi
- Environment:
- POD_NAME: kube-flannel-ds-qhxn6 (v1:metadata.name)
- POD_NAMESPACE: kube-system (v1:metadata.namespace)
- Mounts:
- /etc/kube-flannel/ from flannel-cfg (rw)
- /run from run (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-868pq (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- run:
- Type: HostPath (bare host directory volume)
- Path: /run
- HostPathType:
- cni:
- Type: HostPath (bare host directory volume)
- Path: /etc/cni/net.d
- HostPathType:
- flannel-cfg:
- Type: ConfigMap (a volume populated by a ConfigMap)
- Name: kube-flannel-cfg
- Optional: false
- flannel-token-868pq:
- Type: Secret (a volume populated by a Secret)
- SecretName: flannel-token-868pq
- Optional: false
- QoS Class: Guaranteed
- Node-Selectors: beta.kubernetes.io/arch=amd64
- Tolerations: node-role.kubernetes.io/master:NoSchedule
- node.kubernetes.io/disk-pressure:NoSchedule
- node.kubernetes.io/memory-pressure:NoSchedule
- node.kubernetes.io/not-ready:NoExecute
- node.kubernetes.io/unreachable:NoExecute
- Events: <none>
- Name: kube-flannel-ds-tkspz
- Namespace: kube-system
- Node: jenkins-slave-01/172.20.43.74
- Start Time: Tue, 20 Mar 2018 09:04:20 -0400
- Labels: app=flannel
- controller-revision-hash=884040133
- pod-template-generation=1
- tier=node
- Annotations: <none>
- Status: Running
- IP: 172.20.43.74
- Controlled By: DaemonSet/kube-flannel-ds
- Init Containers:
- install-cni:
- Container ID: docker://98fa2bbe418b094185f503ab699b32a55247b6116dd3c24e7937afded8237509
- Image: quay.io/coreos/flannel:v0.10.0-amd64
- Image ID: docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
- Port: <none>
- Command:
- cp
- Args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
- State: Terminated
- Reason: Completed
- Exit Code: 0
- Started: Fri, 23 Mar 2018 14:18:27 -0400
- Finished: Fri, 23 Mar 2018 14:18:27 -0400
- Ready: True
- Restart Count: 2
- Environment: <none>
- Mounts:
- /etc/cni/net.d from cni (rw)
- /etc/kube-flannel/ from flannel-cfg (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-868pq (ro)
- Containers:
- kube-flannel:
- Container ID: docker://f4c4cd14dfb4d1405d4ec17f2737f0c750838600a29e1075a486e33024a355cd
- Image: quay.io/coreos/flannel:v0.10.0-amd64
- Image ID: docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
- Port: <none>
- Command:
- /opt/bin/flanneld
- Args:
- --ip-masq
- --kube-subnet-mgr
- State: Running
- Started: Fri, 23 Mar 2018 14:18:29 -0400
- Last State: Terminated
- Reason: Error
- Exit Code: 2
- Started: Tue, 20 Mar 2018 09:04:34 -0400
- Finished: Fri, 23 Mar 2018 14:18:14 -0400
- Ready: True
- Restart Count: 1
- Limits:
- cpu: 100m
- memory: 50Mi
- Requests:
- cpu: 100m
- memory: 50Mi
- Environment:
- POD_NAME: kube-flannel-ds-tkspz (v1:metadata.name)
- POD_NAMESPACE: kube-system (v1:metadata.namespace)
- Mounts:
- /etc/kube-flannel/ from flannel-cfg (rw)
- /run from run (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-868pq (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- run:
- Type: HostPath (bare host directory volume)
- Path: /run
- HostPathType:
- cni:
- Type: HostPath (bare host directory volume)
- Path: /etc/cni/net.d
- HostPathType:
- flannel-cfg:
- Type: ConfigMap (a volume populated by a ConfigMap)
- Name: kube-flannel-cfg
- Optional: false
- flannel-token-868pq:
- Type: Secret (a volume populated by a Secret)
- SecretName: flannel-token-868pq
- Optional: false
- QoS Class: Guaranteed
- Node-Selectors: beta.kubernetes.io/arch=amd64
- Tolerations: node-role.kubernetes.io/master:NoSchedule
- node.kubernetes.io/disk-pressure:NoSchedule
- node.kubernetes.io/memory-pressure:NoSchedule
- node.kubernetes.io/not-ready:NoExecute
- node.kubernetes.io/unreachable:NoExecute
- Events: <none>
- Name: kube-flannel-ds-vgqsb
- Namespace: kube-system
- Node: jenkins-slave-004/172.20.43.70
- Start Time: Tue, 20 Mar 2018 09:16:21 -0400
- Labels: app=flannel
- controller-revision-hash=884040133
- pod-template-generation=1
- tier=node
- Annotations: <none>
- Status: Running
- IP: 172.20.43.70
- Controlled By: DaemonSet/kube-flannel-ds
- Init Containers:
- install-cni:
- Container ID: docker://fd8b91dd4e31e025644f6b5c9b9e3cdfab268ba6531114c0b23f60d1c724848f
- Image: quay.io/coreos/flannel:v0.10.0-amd64
- Image ID: docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
- Port: <none>
- Command:
- cp
- Args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
- State: Terminated
- Reason: Completed
- Exit Code: 0
- Started: Fri, 23 Mar 2018 14:20:09 -0400
- Finished: Fri, 23 Mar 2018 14:20:09 -0400
- Ready: True
- Restart Count: 0
- Environment: <none>
- Mounts:
- /etc/cni/net.d from cni (rw)
- /etc/kube-flannel/ from flannel-cfg (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-868pq (ro)
- Containers:
- kube-flannel:
- Container ID: docker://939a3dd5afcf0e7d34c3b1fce3798344cd81b5d8baa46aa11a105b98264ec63e
- Image: quay.io/coreos/flannel:v0.10.0-amd64
- Image ID: docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
- Port: <none>
- Command:
- /opt/bin/flanneld
- Args:
- --ip-masq
- --kube-subnet-mgr
- State: Running
- Started: Fri, 23 Mar 2018 14:20:11 -0400
- Last State: Terminated
- Reason: Error
- Exit Code: 2
- Started: Tue, 20 Mar 2018 09:16:36 -0400
- Finished: Fri, 23 Mar 2018 14:20:06 -0400
- Ready: True
- Restart Count: 1
- Limits:
- cpu: 100m
- memory: 50Mi
- Requests:
- cpu: 100m
- memory: 50Mi
- Environment:
- POD_NAME: kube-flannel-ds-vgqsb (v1:metadata.name)
- POD_NAMESPACE: kube-system (v1:metadata.namespace)
- Mounts:
- /etc/kube-flannel/ from flannel-cfg (rw)
- /run from run (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-868pq (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- run:
- Type: HostPath (bare host directory volume)
- Path: /run
- HostPathType:
- cni:
- Type: HostPath (bare host directory volume)
- Path: /etc/cni/net.d
- HostPathType:
- flannel-cfg:
- Type: ConfigMap (a volume populated by a ConfigMap)
- Name: kube-flannel-cfg
- Optional: false
- flannel-token-868pq:
- Type: Secret (a volume populated by a Secret)
- SecretName: flannel-token-868pq
- Optional: false
- QoS Class: Guaranteed
- Node-Selectors: beta.kubernetes.io/arch=amd64
- Tolerations: node-role.kubernetes.io/master:NoSchedule
- node.kubernetes.io/disk-pressure:NoSchedule
- node.kubernetes.io/memory-pressure:NoSchedule
- node.kubernetes.io/not-ready:NoExecute
- node.kubernetes.io/unreachable:NoExecute
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Killing 58m kubelet, jenkins-slave-004 Killing container with id docker://kube-flannel:Need to kill Pod
- Warning FailedCreatePodSandBox 58m kubelet, jenkins-slave-004 Failed create pod sandbox.
- Normal SandboxChanged 58m (x2 over 58m) kubelet, jenkins-slave-004 Pod sandbox changed, it will be killed and re-created.
- Normal Pulled 58m kubelet, jenkins-slave-004 Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
- Normal Created 58m (x2 over 3d) kubelet, jenkins-slave-004 Created container
- Normal Started 58m (x2 over 3d) kubelet, jenkins-slave-004 Started container
- Normal Created 58m (x2 over 3d) kubelet, jenkins-slave-004 Created container
- Normal Pulled 58m (x2 over 3d) kubelet, jenkins-slave-004 Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
- Normal Started 58m (x2 over 3d) kubelet, jenkins-slave-004 Started container
- Name: kube-proxy-7np9b
- Namespace: kube-system
- Node: jenkins-slave-01/172.20.43.74
- Start Time: Tue, 20 Mar 2018 09:04:20 -0400
- Labels: controller-revision-hash=446521190
- k8s-app=kube-proxy
- pod-template-generation=1
- Annotations: <none>
- Status: Running
- IP: 172.20.43.74
- Controlled By: DaemonSet/kube-proxy
- Containers:
- kube-proxy:
- Container ID: docker://60c4b26d2b669f5a527e7c1711023d9fca02eb004ea1782a23ef7ba4f513cd08
- Image: gcr.io/google_containers/kube-proxy-amd64:v1.9.4
- Image ID: docker-pullable://gcr.io/google_containers/kube-proxy-amd64@sha256:424a9dfc295f26f9d1e8070836d6fa08c83f22d86e592dfccddc847a85b1ef20
- Port: <none>
- Command:
- /usr/local/bin/kube-proxy
- --config=/var/lib/kube-proxy/config.conf
- State: Running
- Started: Fri, 23 Mar 2018 14:18:42 -0400
- Last State: Terminated
- Reason: Error
- Exit Code: 2
- Started: Fri, 23 Mar 2018 14:18:15 -0400
- Finished: Fri, 23 Mar 2018 14:18:25 -0400
- Ready: True
- Restart Count: 2
- Environment: <none>
- Mounts:
- /lib/modules from lib-modules (ro)
- /run/xtables.lock from xtables-lock (rw)
- /var/lib/kube-proxy from kube-proxy (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-rhlh9 (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- kube-proxy:
- Type: ConfigMap (a volume populated by a ConfigMap)
- Name: kube-proxy
- Optional: false
- xtables-lock:
- Type: HostPath (bare host directory volume)
- Path: /run/xtables.lock
- HostPathType: FileOrCreate
- lib-modules:
- Type: HostPath (bare host directory volume)
- Path: /lib/modules
- HostPathType:
- kube-proxy-token-rhlh9:
- Type: Secret (a volume populated by a Secret)
- SecretName: kube-proxy-token-rhlh9
- Optional: false
- QoS Class: BestEffort
- Node-Selectors: <none>
- Tolerations: node-role.kubernetes.io/master:NoSchedule
- node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
- node.kubernetes.io/disk-pressure:NoSchedule
- node.kubernetes.io/memory-pressure:NoSchedule
- node.kubernetes.io/not-ready:NoExecute
- node.kubernetes.io/unreachable:NoExecute
- Events: <none>
- Name: kube-proxy-9lx8h
- Namespace: kube-system
- Node: jenkins-kube-master/172.20.43.30
- Start Time: Mon, 19 Mar 2018 12:34:38 -0400
- Labels: controller-revision-hash=446521190
- k8s-app=kube-proxy
- pod-template-generation=1
- Annotations: <none>
- Status: Running
- IP: 172.20.43.30
- Controlled By: DaemonSet/kube-proxy
- Containers:
- kube-proxy:
- Container ID: docker://9b061ca8037108176cd3b97bb7cd19364447579a26c60f0f9d1879a0f027986f
- Image: gcr.io/google_containers/kube-proxy-amd64:v1.9.4
- Image ID: docker-pullable://gcr.io/google_containers/kube-proxy-amd64@sha256:424a9dfc295f26f9d1e8070836d6fa08c83f22d86e592dfccddc847a85b1ef20
- Port: <none>
- Command:
- /usr/local/bin/kube-proxy
- --config=/var/lib/kube-proxy/config.conf
- State: Running
- Started: Fri, 23 Mar 2018 14:12:16 -0400
- Last State: Terminated
- Reason: Error
- Exit Code: 2
- Started: Mon, 19 Mar 2018 13:55:08 -0400
- Finished: Fri, 23 Mar 2018 14:11:50 -0400
- Ready: True
- Restart Count: 2
- Environment: <none>
- Mounts:
- /lib/modules from lib-modules (ro)
- /run/xtables.lock from xtables-lock (rw)
- /var/lib/kube-proxy from kube-proxy (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-rhlh9 (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- kube-proxy:
- Type: ConfigMap (a volume populated by a ConfigMap)
- Name: kube-proxy
- Optional: false
- xtables-lock:
- Type: HostPath (bare host directory volume)
- Path: /run/xtables.lock
- HostPathType: FileOrCreate
- lib-modules:
- Type: HostPath (bare host directory volume)
- Path: /lib/modules
- HostPathType:
- kube-proxy-token-rhlh9:
- Type: Secret (a volume populated by a Secret)
- SecretName: kube-proxy-token-rhlh9
- Optional: false
- QoS Class: BestEffort
- Node-Selectors: <none>
- Tolerations: node-role.kubernetes.io/master:NoSchedule
- node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
- node.kubernetes.io/disk-pressure:NoSchedule
- node.kubernetes.io/memory-pressure:NoSchedule
- node.kubernetes.io/not-ready:NoExecute
- node.kubernetes.io/unreachable:NoExecute
- Events: <none>
- Name: kube-proxy-f46d8
- Namespace: kube-system
- Node: jenkins-slave-002/172.20.43.68
- Start Time: Tue, 20 Mar 2018 09:15:36 -0400
- Labels: controller-revision-hash=446521190
- k8s-app=kube-proxy
- pod-template-generation=1
- Annotations: <none>
- Status: Running
- IP: 172.20.43.68
- Controlled By: DaemonSet/kube-proxy
- Containers:
- kube-proxy:
- Container ID: docker://729717d3c2da726f5c15e14e3dc69101c9a956bb858085b6e92991251d886e27
- Image: gcr.io/google_containers/kube-proxy-amd64:v1.9.4
- Image ID: docker-pullable://gcr.io/google_containers/kube-proxy-amd64@sha256:424a9dfc295f26f9d1e8070836d6fa08c83f22d86e592dfccddc847a85b1ef20
- Port: <none>
- Command:
- /usr/local/bin/kube-proxy
- --config=/var/lib/kube-proxy/config.conf
- State: Running
- Started: Fri, 23 Mar 2018 14:19:15 -0400
- Last State: Terminated
- Exit Code: 0
- Started: Mon, 01 Jan 0001 00:00:00 +0000
- Finished: Mon, 01 Jan 0001 00:00:00 +0000
- Ready: True
- Restart Count: 2
- Environment: <none>
- Mounts:
- /lib/modules from lib-modules (ro)
- /run/xtables.lock from xtables-lock (rw)
- /var/lib/kube-proxy from kube-proxy (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-rhlh9 (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- kube-proxy:
- Type: ConfigMap (a volume populated by a ConfigMap)
- Name: kube-proxy
- Optional: false
- xtables-lock:
- Type: HostPath (bare host directory volume)
- Path: /run/xtables.lock
- HostPathType: FileOrCreate
- lib-modules:
- Type: HostPath (bare host directory volume)
- Path: /lib/modules
- HostPathType:
- kube-proxy-token-rhlh9:
- Type: Secret (a volume populated by a Secret)
- SecretName: kube-proxy-token-rhlh9
- Optional: false
- QoS Class: BestEffort
- Node-Selectors: <none>
- Tolerations: node-role.kubernetes.io/master:NoSchedule
- node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
- node.kubernetes.io/disk-pressure:NoSchedule
- node.kubernetes.io/memory-pressure:NoSchedule
- node.kubernetes.io/not-ready:NoExecute
- node.kubernetes.io/unreachable:NoExecute
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Warning Failed 59m kubelet, jenkins-slave-002 Error: failed to get container "kube-proxy" log path: failed to inspect container "kube-proxy": error during connect: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.31/containers/kube-proxy/json: read unix @->/var/run/docker.sock: read: connection reset by peer
- Normal SandboxChanged 59m (x2 over 59m) kubelet, jenkins-slave-002 Pod sandbox changed, it will be killed and re-created.
- Warning BackOff 59m (x3 over 59m) kubelet, jenkins-slave-002 Back-off restarting failed container
- Normal Pulled 59m (x2 over 59m) kubelet, jenkins-slave-002 Container image "gcr.io/google_containers/kube-proxy-amd64:v1.9.4" already present on machine
- Normal Created 59m (x3 over 3d) kubelet, jenkins-slave-002 Created container
- Normal Started 59m (x2 over 3d) kubelet, jenkins-slave-002 Started container
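The events above for kube-proxy-f46d8 point at the container runtime rather than the pod itself: the `connection reset by peer` on `/var/run/docker.sock` and the `SandboxChanged` entries are consistent with the Docker daemon restarting out from under the kubelet about 59 minutes ago. When triaging a dump like this offline, it can help to count Warning-type events per pod. A minimal sketch, assuming the events table has been saved locally (the `describe.txt` filename and its contents here are a hypothetical excerpt of the output above):

```shell
# Hypothetical local copy of the kube-proxy-f46d8 events table from above.
cat > describe.txt <<'EOF'
Warning  Failed          59m  kubelet, jenkins-slave-002  Error: failed to get container "kube-proxy" log path
Normal   SandboxChanged  59m  kubelet, jenkins-slave-002  Pod sandbox changed, it will be killed and re-created.
Warning  BackOff         59m  kubelet, jenkins-slave-002  Back-off restarting failed container
Normal   Pulled          59m  kubelet, jenkins-slave-002  Container image already present on machine
EOF

# Count Warning-severity events in the saved table.
grep -c '^Warning' describe.txt
```

On a live cluster the equivalent starting point would be `kubectl get events -n kube-system --field-selector type=Warning`, plus checking the Docker daemon logs on jenkins-slave-002 for a restart around the same time.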
- Name: kube-proxy-fdtx9
- Namespace: kube-system
- Node: jenkins-slave-003/172.20.43.72
- Start Time: Tue, 20 Mar 2018 09:16:03 -0400
- Labels: controller-revision-hash=446521190
- k8s-app=kube-proxy
- pod-template-generation=1
- Annotations: <none>
- Status: Running
- IP: 172.20.43.72
- Controlled By: DaemonSet/kube-proxy
- Containers:
- kube-proxy:
- Container ID: docker://1cfe7917d9d4c4094654b9c8af2e1748c1fccb41c0455e570095474f840b7582
- Image: gcr.io/google_containers/kube-proxy-amd64:v1.9.4
- Image ID: docker-pullable://gcr.io/google_containers/kube-proxy-amd64@sha256:424a9dfc295f26f9d1e8070836d6fa08c83f22d86e592dfccddc847a85b1ef20
- Port: <none>
- Command:
- /usr/local/bin/kube-proxy
- --config=/var/lib/kube-proxy/config.conf
- State: Running
- Started: Fri, 23 Mar 2018 14:19:29 -0400
- Last State: Terminated
- Reason: Error
- Exit Code: 2
- Started: Tue, 20 Mar 2018 09:16:13 -0400
- Finished: Fri, 23 Mar 2018 14:19:24 -0400
- Ready: True
- Restart Count: 1
- Environment: <none>
- Mounts:
- /lib/modules from lib-modules (ro)
- /run/xtables.lock from xtables-lock (rw)
- /var/lib/kube-proxy from kube-proxy (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-rhlh9 (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- kube-proxy:
- Type: ConfigMap (a volume populated by a ConfigMap)
- Name: kube-proxy
- Optional: false
- xtables-lock:
- Type: HostPath (bare host directory volume)
- Path: /run/xtables.lock
- HostPathType: FileOrCreate
- lib-modules:
- Type: HostPath (bare host directory volume)
- Path: /lib/modules
- HostPathType:
- kube-proxy-token-rhlh9:
- Type: Secret (a volume populated by a Secret)
- SecretName: kube-proxy-token-rhlh9
- Optional: false
- QoS Class: BestEffort
- Node-Selectors: <none>
- Tolerations: node-role.kubernetes.io/master:NoSchedule
- node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
- node.kubernetes.io/disk-pressure:NoSchedule
- node.kubernetes.io/memory-pressure:NoSchedule
- node.kubernetes.io/not-ready:NoExecute
- node.kubernetes.io/unreachable:NoExecute
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Warning FailedCreatePodSandBox 59m kubelet, jenkins-slave-003 Failed create pod sandbox.
- Normal SandboxChanged 59m (x2 over 59m) kubelet, jenkins-slave-003 Pod sandbox changed, it will be killed and re-created.
- Normal Created 59m (x2 over 3d) kubelet, jenkins-slave-003 Created container
- Normal Started 59m (x2 over 3d) kubelet, jenkins-slave-003 Started container
- Normal Pulled 59m kubelet, jenkins-slave-003 Container image "gcr.io/google_containers/kube-proxy-amd64:v1.9.4" already present on machine
- Name: kube-proxy-kmnjf
- Namespace: kube-system
- Node: jenkins-slave-004/172.20.43.70
- Start Time: Tue, 20 Mar 2018 09:16:21 -0400
- Labels: controller-revision-hash=446521190
- k8s-app=kube-proxy
- pod-template-generation=1
- Annotations: <none>
- Status: Running
- IP: 172.20.43.70
- Controlled By: DaemonSet/kube-proxy
- Containers:
- kube-proxy:
- Container ID: docker://b7b4f13fba0f8cf5c0037520becff4edcd141a0417ccbe91cf99b205ec5ce680
- Image: gcr.io/google_containers/kube-proxy-amd64:v1.9.4
- Image ID: docker-pullable://gcr.io/google_containers/kube-proxy-amd64@sha256:424a9dfc295f26f9d1e8070836d6fa08c83f22d86e592dfccddc847a85b1ef20
- Port: <none>
- Command:
- /usr/local/bin/kube-proxy
- --config=/var/lib/kube-proxy/config.conf
- State: Running
- Started: Fri, 23 Mar 2018 14:20:09 -0400
- Last State: Terminated
- Reason: Error
- Exit Code: 2
- Started: Tue, 20 Mar 2018 09:16:31 -0400
- Finished: Fri, 23 Mar 2018 14:20:04 -0400
- Ready: True
- Restart Count: 1
- Environment: <none>
- Mounts:
- /lib/modules from lib-modules (ro)
- /run/xtables.lock from xtables-lock (rw)
- /var/lib/kube-proxy from kube-proxy (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-rhlh9 (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- kube-proxy:
- Type: ConfigMap (a volume populated by a ConfigMap)
- Name: kube-proxy
- Optional: false
- xtables-lock:
- Type: HostPath (bare host directory volume)
- Path: /run/xtables.lock
- HostPathType: FileOrCreate
- lib-modules:
- Type: HostPath (bare host directory volume)
- Path: /lib/modules
- HostPathType:
- kube-proxy-token-rhlh9:
- Type: Secret (a volume populated by a Secret)
- SecretName: kube-proxy-token-rhlh9
- Optional: false
- QoS Class: BestEffort
- Node-Selectors: <none>
- Tolerations: node-role.kubernetes.io/master:NoSchedule
- node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
- node.kubernetes.io/disk-pressure:NoSchedule
- node.kubernetes.io/memory-pressure:NoSchedule
- node.kubernetes.io/not-ready:NoExecute
- node.kubernetes.io/unreachable:NoExecute
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Warning FailedCreatePodSandBox 58m kubelet, jenkins-slave-004 Failed create pod sandbox.
- Normal SandboxChanged 58m (x2 over 58m) kubelet, jenkins-slave-004 Pod sandbox changed, it will be killed and re-created.
- Normal Created 58m (x2 over 3d) kubelet, jenkins-slave-004 Created container
- Normal Started 58m (x2 over 3d) kubelet, jenkins-slave-004 Started container
- Normal Pulled 58m kubelet, jenkins-slave-004 Container image "gcr.io/google_containers/kube-proxy-amd64:v1.9.4" already present on machine
- Name: kube-scheduler-jenkins-kube-master
- Namespace: kube-system
- Node: jenkins-kube-master/172.20.43.30
- Start Time: Mon, 19 Mar 2018 13:54:56 -0400
- Labels: component=kube-scheduler
- tier=control-plane
- Annotations: kubernetes.io/config.hash=b44c3443d26842a5c591b28224a6b4ff
- kubernetes.io/config.mirror=b44c3443d26842a5c591b28224a6b4ff
- kubernetes.io/config.seen=2018-03-19T13:54:51.582598431-04:00
- kubernetes.io/config.source=file
- scheduler.alpha.kubernetes.io/critical-pod=
- Status: Running
- IP: 172.20.43.30
- Containers:
- kube-scheduler:
- Container ID: docker://8b6d8642921957114b3ef813b8110038f96f7c1fdc85a907cafc2e5f934e536f
- Image: gcr.io/google_containers/kube-scheduler-amd64:v1.9.4
- Image ID: docker-pullable://gcr.io/google_containers/kube-scheduler-amd64@sha256:9b2a75b415aa34261a3c9041d82693f3285acfc380695b5fef7668bde1d88532
- Port: <none>
- Command:
- kube-scheduler
- --address=127.0.0.1
- --leader-elect=true
- --kubeconfig=/etc/kubernetes/scheduler.conf
- State: Running
- Started: Fri, 23 Mar 2018 14:12:16 -0400
- Last State: Terminated
- Reason: Error
- Exit Code: 2
- Started: Mon, 19 Mar 2018 13:54:58 -0400
- Finished: Fri, 23 Mar 2018 14:11:50 -0400
- Ready: True
- Restart Count: 2
- Requests:
- cpu: 100m
- Liveness: http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
- Environment: <none>
- Mounts:
- /etc/kubernetes/scheduler.conf from kubeconfig (ro)
- Conditions:
- Type Status
- Initialized True
- Ready True
- PodScheduled True
- Volumes:
- kubeconfig:
- Type: HostPath (bare host directory volume)
- Path: /etc/kubernetes/scheduler.conf
- HostPathType: FileOrCreate
- QoS Class: Burstable
- Node-Selectors: <none>
- Tolerations: :NoExecute
- Events: <none>
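The scheduler's liveness line above (`delay=15s timeout=15s period=10s #success=1 #failure=8`) determines how long an unhealthy scheduler can run before the kubelet restarts it: the first probe fires after the initial delay, and the container is only killed after eight consecutive failures spaced one period apart. A rough worst-case window from startup, using only the numbers in that line (this ignores per-probe timeout overlap, so it is an approximation):

```shell
# Liveness parameters taken from the kube-scheduler probe line above.
delay=15      # initialDelaySeconds
period=10     # periodSeconds
failures=8    # failureThreshold

# Approximate seconds from container start until a never-healthy
# container would be restarted: delay + failures * period.
echo "seconds until restart: $((delay + failures * period))"
```

That ~95-second window is why a briefly flapping scheduler shows `Restart Count: 2` here rather than dozens: the probe tolerates transient failures by design.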
- Name: kubernetes-dashboard-5bd6f767c7-xf42n
- Namespace: kube-system
- Node: jenkins-slave-004/172.20.43.70
- Start Time: Tue, 20 Mar 2018 09:32:48 -0400
- Labels: k8s-app=kubernetes-dashboard
- pod-template-hash=1682932373
- Annotations: <none>
- Status: Running
- IP: 172.20.4.3
- Controlled By: ReplicaSet/kubernetes-dashboard-5bd6f767c7
- Containers:
- kubernetes-dashboard:
- Container ID: docker://e4ebfa7450a3e00b941ef30ae05df33c9c7beb1d25105d312f7ca20127f54ef0
- Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
- Image ID: docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:dc4026c1b595435ef5527ca598e1e9c4343076926d7d62b365c44831395adbd0
- Port: 8443/TCP
- Args:
- --auto-generate-certificates
- State: Waiting
- Reason: CrashLoopBackOff
- Last State: Terminated
- Reason: Error
- Exit Code: 1
- Started: Fri, 23 Mar 2018 15:14:28 -0400
- Finished: Fri, 23 Mar 2018 15:14:58 -0400
- Ready: False
- Restart Count: 870
- Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
- Environment: <none>
- Mounts:
- /certs from kubernetes-dashboard-certs (rw)
- /tmp from tmp-volume (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-bj5k8 (ro)
- Conditions:
- Type Status
- Initialized True
- Ready False
- PodScheduled True
- Volumes:
- kubernetes-dashboard-certs:
- Type: Secret (a volume populated by a Secret)
- SecretName: kubernetes-dashboard-certs
- Optional: false
- tmp-volume:
- Type: EmptyDir (a temporary directory that shares a pod's lifetime)
- Medium:
- kubernetes-dashboard-token-bj5k8:
- Type: Secret (a volume populated by a Secret)
- SecretName: kubernetes-dashboard-token-bj5k8
- Optional: false
- QoS Class: BestEffort
- Node-Selectors: <none>
- Tolerations: node-role.kubernetes.io/master:NoSchedule
- node.kubernetes.io/not-ready:NoExecute for 300s
- node.kubernetes.io/unreachable:NoExecute for 300s
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Warning BackOff 1m (x19486 over 3d) kubelet, jenkins-slave-004 Back-off restarting failed container
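Unlike the kube-proxy pods, the dashboard pod above is in a genuine CrashLoopBackOff: `Restart Count: 870` with exit code 1, and the `x19486 over 3d` BackOff count shows the kubelet has been cycling it for roughly three days. A back-of-the-envelope rate check, using those two figures from the output above (the 72-hour window is read off the `3d` event age, so treat it as approximate):

```shell
# Figures taken from the kubernetes-dashboard pod description above.
restarts=870
hours=$((3 * 24))   # "x19486 over 3d" event age, ~72 hours

# Integer restarts per hour; ~12/hr matches the ~30s run / ~5min
# back-off cycle visible in the Started/Finished timestamps.
echo "restarts per hour: $((restarts / hours))"
```

The next step on a live cluster would be `kubectl logs --previous -n kube-system kubernetes-dashboard-5bd6f767c7-xf42n` to see why the container exits with code 1 roughly 30 seconds after starting; given the 8443/TCP port and `--auto-generate-certificates`, certificate or API-server connectivity problems would be the usual suspects to check first.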