- containerd issue
- Component statuses:
- ```
- NAME STATUS MESSAGE ERROR
- controller-manager Healthy ok
- etcd-0 Healthy {"health": "true"}
- scheduler Healthy ok
- ```
- Pods:
- ```
- NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
- kube-system kube-dns-6c857864fb-ljmf8 0/3 ContainerCreating 0 6m <none> k8s-worker-1
- kube-system weave-net-xrr2k 0/2 ContainerCreating 0 6m 192.168.50.31 k8s-worker-1
- ```
- Nodes:
- ```
- NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
- k8s-worker-1 Ready <none> 6m v1.9.0 <none> CentOS Linux 7 (Core) 3.10.0-693.11.1.el7.x86_64 cri-containerd://1.0.0-beta.1
- ```
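- The three tables above match the output of the standard kubectl queries below (a sketch — assumes a working kubeconfig; the pod name is taken from the table above). For pods stuck in `ContainerCreating`, the pod events usually name the actual failure:

```shell
# Reproduce the diagnostic tables above (requires cluster access).
kubectl get componentstatuses
kubectl get pods --all-namespaces -o wide
kubectl get nodes -o wide

# The events at the end of a pod description typically explain
# why a pod is stuck in ContainerCreating:
kubectl -n kube-system describe pod kube-dns-6c857864fb-ljmf8 | tail -n 20
```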
- Containerd config default:
- ```
- [root@k8s-worker-1 vagrant]# /usr/local/bin/containerd config default
- root = "/var/lib/containerd"
- state = "/run/containerd"
- no_subreaper = false
- oom_score = 0
- [grpc]
- address = "/run/containerd/containerd.sock"
- uid = 0
- gid = 0
- [debug]
- address = "/run/containerd/debug.sock"
- uid = 0
- gid = 0
- level = "info"
- [metrics]
- address = ""
- [cgroup]
- path = ""
- ```
- Permissions on the socket created by the containerd default config:
- ```srw-rw----. 1 root root 0 Feb 2 19:55 /run/containerd/containerd.sock```
- Permissions on the socket referenced in the kubelet service file:
- ```srwxr-xr-x. 1 root root 0 Feb 2 20:14 /var/run/cri-containerd.sock```
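- Note the difference between the two modes above: containerd's own socket is 0660 (read/write for owner and group `root` only), while the cri-containerd socket is 0755. A minimal sketch of what those octal modes mean, demonstrated on a scratch file since the real sockets only exist on the node:

```shell
# 0660 = rw for owner and group, nothing for others (srw-rw----).
# 0755 = rwx owner, rx group/others (srwxr-xr-x).
f=$(mktemp)
chmod 0660 "$f"
stat -c '%a' "$f"    # prints: 660
chmod 0755 "$f"
stat -c '%a' "$f"    # prints: 755
rm -f "$f"
```

With 0660 root:root, only root (or members of group root) can connect to containerd's gRPC socket — which is fine here, since the kubelet runs as root and talks to the separate cri-containerd socket anyway.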
- CNI network configs created for containerd:
- ```
- [root@k8s-worker-1 net.d]# cat 10-bridge.conf
- {
- "cniVersion": "0.3.1",
- "name": "bridge",
- "type": "bridge",
- "bridge": "cnio0",
- "isGateway": true,
- "ipMasq": true,
- "ipam": {
- "type": "host-local",
- "ranges": [
- [{"subnet": "10.200.1.0/24"}]
- ],
- "routes": [{"dst": "0.0.0.0/0"}]
- }
- }
- [root@k8s-worker-1 net.d]# cat 99-loopback.conf
- {
- "cniVersion": "0.3.1",
- "type": "loopback"
- }
- ```
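- A quick sanity check that the bridge config above is well-formed JSON (run here against a local copy; on the node you would point this at the file in the CNI conf directory, assumed to be `/etc/cni/net.d`):

```shell
# Write a copy of the 10-bridge.conf shown above and validate it.
cat > /tmp/10-bridge.conf <<'EOF'
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "10.200.1.0/24"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
python3 -m json.tool /tmp/10-bridge.conf > /dev/null && echo "valid JSON"
```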
- Routes on the worker node:
- ```
- [root@k8s-worker-1 net.d]# route -n
- Kernel IP routing table
- Destination Gateway Genmask Flags Metric Ref Use Iface
- 0.0.0.0 10.0.2.2 0.0.0.0 UG 100 0 0 enp0s3
- 10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
- 172.28.128.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
- 192.168.50.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s9
- ```
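- One thing worth noting in the routing table above: there is no route or bridge yet for the pod CIDR (10.200.1.0/24). That is expected before any pod has started — the `cnio0` bridge from the CNI config is normally only created when CNI sets up the first pod sandbox. A hedged check to run on the node:

```shell
# Look for the pod-CIDR route and the CNI bridge (absent in the table above).
ip route show | grep 10.200.1 || echo "no pod-CIDR route yet"
ip link show cnio0 2>/dev/null || echo "cnio0 bridge not created yet"
```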
- Hosts file (/etc/hosts):
- ```
- [root@k8s-worker-1 net.d]# cat /etc/hosts
- 127.0.0.1 k8s-worker-1 k8s-worker-1
- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
- ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
- 192.168.50.4 k8s-loadbalancer
- 192.168.50.20 k8s-master
- 192.168.50.11 k8s-etcd-1
- 192.168.50.21 k8s-master-1
- 192.168.50.31 k8s-worker-1
- ```
- IP addresses on the worker node:
- ```
- [root@k8s-worker-1 net.d]# ip add
- 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- valid_lft forever preferred_lft forever
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
- 2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
- link/ether 08:00:27:16:98:18 brd ff:ff:ff:ff:ff:ff
- inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
- valid_lft 84994sec preferred_lft 84994sec
- inet6 fe80::5a11:34ec:f996:4f3/64 scope link
- valid_lft forever preferred_lft forever
- 3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
- link/ether 08:00:27:8e:54:25 brd ff:ff:ff:ff:ff:ff
- inet 172.28.128.5/24 brd 172.28.128.255 scope global dynamic enp0s8
- valid_lft 942sec preferred_lft 942sec
- inet6 fe80::a00:27ff:fe8e:5425/64 scope link
- valid_lft forever preferred_lft forever
- 4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
- link/ether 08:00:27:38:71:d7 brd ff:ff:ff:ff:ff:ff
- inet 192.168.50.31/24 brd 192.168.50.255 scope global enp0s9
- valid_lft forever preferred_lft forever
- inet6 fe80::a00:27ff:fe38:71d7/64 scope link
- valid_lft forever preferred_lft forever
- ```
- Describe node in Kubernetes:
- ```
- Name: k8s-worker-1
- Roles: <none>
- Labels: beta.kubernetes.io/arch=amd64
- beta.kubernetes.io/os=linux
- kubernetes.io/hostname=k8s-worker-1
- Annotations: node.alpha.kubernetes.io/ttl=0
- volumes.kubernetes.io/controller-managed-attach-detach=true
- Taints: <none>
- CreationTimestamp: Fri, 02 Feb 2018 19:55:35 +0000
- Conditions:
- Type Status LastHeartbeatTime LastTransitionTime Reason Message
- ---- ------ ----------------- ------------------ ------ -------
- OutOfDisk False Fri, 02 Feb 2018 20:02:34 +0000 Fri, 02 Feb 2018 19:55:35 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
- MemoryPressure False Fri, 02 Feb 2018 20:02:34 +0000 Fri, 02 Feb 2018 19:55:35 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
- DiskPressure False Fri, 02 Feb 2018 20:02:34 +0000 Fri, 02 Feb 2018 19:55:35 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
- Ready True Fri, 02 Feb 2018 20:02:34 +0000 Fri, 02 Feb 2018 19:55:35 +0000 KubeletReady kubelet is posting ready status
- Addresses:
- InternalIP: 192.168.50.31
- Hostname: k8s-worker-1
- Capacity:
- cpu: 1
- memory: 1883560Ki
- pods: 110
- Allocatable:
- cpu: 1
- memory: 1781160Ki
- pods: 110
- System Info:
- Machine ID: 996415857b5549c38c6cd6912af487f2
- System UUID: B2E09B4B-9CD9-493C-A94A-6220D7761C47
- Boot ID: a4c19123-637f-4ba2-a145-ab62fd458d16
- Kernel Version: 3.10.0-693.11.1.el7.x86_64
- OS Image: CentOS Linux 7 (Core)
- Operating System: linux
- Architecture: amd64
- Container Runtime Version: cri-containerd://1.0.0-beta.1
- Kubelet Version: v1.9.0
- Kube-Proxy Version: v1.9.0
- ExternalID: k8s-worker-1
- Non-terminated Pods: (2 in total)
- Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
- --------- ---- ------------ ---------- --------------- -------------
- kube-system kube-dns-6c857864fb-ljmf8 260m (26%) 0 (0%) 110Mi (6%) 170Mi (9%)
- kube-system weave-net-xrr2k 20m (2%) 0 (0%) 0 (0%) 0 (0%)
- Allocated resources:
- (Total limits may be over 100 percent, i.e., overcommitted.)
- CPU Requests CPU Limits Memory Requests Memory Limits
- ------------ ---------- --------------- -------------
- 280m (28%) 0 (0%) 110Mi (6%) 170Mi (9%)
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Starting 7m kubelet, k8s-worker-1 Starting kubelet.
- Warning InvalidDiskCapacity 7m kubelet, k8s-worker-1 invalid capacity 0 on image filesystem
- Normal NodeAllocatableEnforced 7m kubelet, k8s-worker-1 Updated Node Allocatable limit across pods
- Normal NodeHasSufficientDisk 7m (x2 over 7m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientDisk
- Normal NodeHasSufficientMemory 7m (x2 over 7m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientMemory
- Normal NodeHasNoDiskPressure 7m (x2 over 7m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasNoDiskPressure
- Normal NodeReady 7m kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeReady
- Normal Starting 7m kube-proxy, k8s-worker-1 Starting kube-proxy.
- Normal Starting 6m kubelet, k8s-worker-1 Starting kubelet.
- Normal NodeHasSufficientDisk 6m (x2 over 6m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientDisk
- Normal NodeHasSufficientMemory 6m (x2 over 6m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientMemory
- Normal NodeHasNoDiskPressure 6m (x2 over 6m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasNoDiskPressure
- Warning InvalidDiskCapacity 6m kubelet, k8s-worker-1 invalid capacity 0 on image filesystem
- Normal NodeAllocatableEnforced 6m kubelet, k8s-worker-1 Updated Node Allocatable limit across pods
- Normal NodeHasNoDiskPressure 6m kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasNoDiskPressure
- Normal NodeAllocatableEnforced 6m kubelet, k8s-worker-1 Updated Node Allocatable limit across pods
- Warning InvalidDiskCapacity 6m kubelet, k8s-worker-1 invalid capacity 0 on image filesystem
- Normal NodeHasSufficientDisk 6m kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientDisk
- Normal NodeHasSufficientMemory 6m kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientMemory
- Normal Starting 6m kubelet, k8s-worker-1 Starting kubelet.
- Normal Starting 6m kubelet, k8s-worker-1 Starting kubelet.
- Normal NodeHasSufficientDisk 6m (x2 over 6m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientDisk
- Normal NodeHasSufficientMemory 6m (x2 over 6m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientMemory
- Normal NodeHasNoDiskPressure 6m (x2 over 6m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasNoDiskPressure
- Warning InvalidDiskCapacity 6m kubelet, k8s-worker-1 invalid capacity 0 on image filesystem
- Normal NodeAllocatableEnforced 6m kubelet, k8s-worker-1 Updated Node Allocatable limit across pods
- Normal Starting 6m kubelet, k8s-worker-1 Starting kubelet.
- Normal NodeAllocatableEnforced 6m kubelet, k8s-worker-1 Updated Node Allocatable limit across pods
- Warning InvalidDiskCapacity 6m kubelet, k8s-worker-1 invalid capacity 0 on image filesystem
- ```
- containerd status on worker:
- ```
- ● containerd.service - containerd container runtime
- Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: disabled)
- Active: active (running) since Fri 2018-02-02 19:55:34 UTC; 8min ago
- Docs: https://containerd.io
- Process: 14527 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
- Main PID: 14529 (containerd)
- Memory: 10.2M
- CGroup: /system.slice/containerd.service
- └─14529 /usr/local/bin/containerd
- ```
- kubelet status on worker:
- ```
- ● kubelet.service - Kubernetes Kubelet
- Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
- Active: active (running) since Fri 2018-02-02 20:04:42 UTC; 5s ago
- Docs: https://github.com/kubernetes/kubernetes
- Main PID: 16694 (kubelet)
- Memory: 23.2M
- CGroup: /system.slice/kubelet.service
- └─16694 /usr/local/bin/kubelet --allow-privileged=true --anonymous-auth=false --authorization-mode=Webhook --client-ca-file=/var/lib/kubernetes/ca.pem --cloud-provider= --cluster-dns=10.32.0.10 --cluster-domain=cluster.local --container-runtime=remote --container-runtime-endpoint=unix:///var/run/cri-containerd.sock --image-pull-progress-deadline=2m --kubeconfig=/var/lib/kubelet/kubeconfig --network-plugin=cni --pod-cidr=10.200.1.0/24 --register-node=true --runtime-request-timeout=15m --tls-cert-file=/var/lib/kubelet/k8s-worker-1.pem --tls-private-key-file=/var/lib/kubelet/k8s-worker-1-key.pem --v=2
- ```
- containerd status with the default config:
- ```
- ● containerd.service - containerd container runtime
- Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: disabled)
- Active: active (running) since Fri 2018-02-02 19:55:34 UTC; 14min ago
- Docs: https://containerd.io
- Process: 14527 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
- Main PID: 14529 (containerd)
- Memory: 10.2M
- CGroup: /system.slice/containerd.service
- └─14529 /usr/local/bin/containerd
- ```
- containerd service file:
- ```
- [root@k8s-worker-1 vagrant]# cat /etc/systemd/system/containerd.service
- [Unit]
- Description=containerd container runtime
- Documentation=https://containerd.io
- After=network.target
- [Service]
- ExecStartPre=/sbin/modprobe overlay
- ExecStart=/usr/local/bin/containerd
- Restart=always
- RestartSec=5
- Delegate=yes
- KillMode=process
- OOMScoreAdjust=-999
- LimitNOFILE=1048576
- # Having non-zero Limit*s causes performance problems due to accounting overhead
- # in the kernel. We recommend using cgroups to do container-local accounting.
- LimitNPROC=infinity
- LimitCORE=infinity
- [Install]
- WantedBy=multi-user.target
- ```
- kubelet service file:
- ```
- [root@k8s-worker-1 vagrant]# cat /etc/systemd/system/kubelet.service
- [Unit]
- Description=Kubernetes Kubelet
- Documentation=https://github.com/kubernetes/kubernetes
- After=cri-containerd.service
- Requires=cri-containerd.service
- [Service]
- ExecStart=/usr/local/bin/kubelet \
- --allow-privileged=true \
- --anonymous-auth=false \
- --authorization-mode=Webhook \
- --client-ca-file=/var/lib/kubernetes/ca.pem \
- --cloud-provider= \
- --cluster-dns=10.32.0.10 \
- --cluster-domain=cluster.local \
- --container-runtime=remote \
- --container-runtime-endpoint=unix:///var/run/cri-containerd.sock \
- --image-pull-progress-deadline=2m \
- --kubeconfig=/var/lib/kubelet/kubeconfig \
- --network-plugin=cni \
- --pod-cidr=10.200.1.0/24 \
- --register-node=true \
- --runtime-request-timeout=15m \
- --tls-cert-file=/var/lib/kubelet/k8s-worker-1.pem \
- --tls-private-key-file=/var/lib/kubelet/k8s-worker-1-key.pem \
- --v=2
- Restart=on-failure
- RestartSec=5
- [Install]
- WantedBy=multi-user.target
- ```
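- Note from the two unit files above: the kubelet's `--container-runtime-endpoint` points at the cri-containerd shim socket (`/var/run/cri-containerd.sock`), not at containerd's own gRPC socket (`/run/containerd/containerd.sock`). A sketch to confirm on the node which socket the kubelet is configured for and that both sockets actually exist (paths taken from the unit files above):

```shell
# Which CRI endpoint is the kubelet configured with?
grep -o 'container-runtime-endpoint=[^ \\]*' /etc/systemd/system/kubelet.service

# Are both sockets present? (-S tests for a socket file)
test -S /var/run/cri-containerd.sock && echo "cri-containerd socket present"
test -S /run/containerd/containerd.sock && echo "containerd socket present"
```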
- containerd output in the journal:
- ```
- Feb 02 19:55:34 k8s-worker-1 systemd[1]: Starting containerd container runtime...
- Feb 02 19:55:34 k8s-worker-1 systemd[1]: Started containerd container runtime.
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="starting containerd" module=containerd revision=6c7abf7c76c1973d4fb4b0bad51691de84869a51 version=v1.0.0-6-g6c7abf7
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="setting subreaper..." module=containerd
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." module=containerd type=io.containerd.content.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." module=containerd type=io.containerd.snapshotter.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module=containerd
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." module=containerd type=io.containerd.snapshotter.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." module=containerd type=io.containerd.metadata.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module="containerd/io.containerd.metadata.v1.bolt"
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." module=containerd type=io.containerd.differ.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." module=containerd type=io.containerd.gc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." module=containerd type=io.containerd.grpc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." module=containerd type=io.containerd.grpc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." module=containerd type=io.containerd.grpc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." module=containerd type=io.containerd.grpc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." module=containerd type=io.containerd.grpc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." module=containerd type=io.containerd.grpc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." module=containerd type=io.containerd.grpc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." module=containerd type=io.containerd.grpc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." module=containerd type=io.containerd.grpc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." module=containerd type=io.containerd.monitor.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." module=containerd type=io.containerd.runtime.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." module=containerd type=io.containerd.grpc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." module=containerd type=io.containerd.grpc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." module=containerd type=io.containerd.grpc.v1
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg=serving... address="/run/containerd/debug.sock" module="containerd/debug"
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg=serving... address="/run/containerd/containerd.sock" module="containerd/grpc"
- Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="containerd successfully booted in 0.051302s" module=containerd
- ```
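- The btrfs snapshotter warnings in the log above are expected when `/var/lib/containerd` is not on a btrfs filesystem — containerd falls back to the overlayfs snapshotter. A sketch to confirm the actual filesystem type backing the containerd root:

```shell
# Print the filesystem type of the mount containing /var/lib/containerd.
findmnt -n -o FSTYPE -T /var/lib/containerd
```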
- kubelet output in the journal:
- ```
- Feb 02 20:06:41 k8s-worker-1 systemd[1]: Started Kubernetes Kubelet.
- Feb 02 20:06:41 k8s-worker-1 systemd[1]: Starting Kubernetes Kubelet...
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314625 17146 flags.go:52] FLAG: --address="0.0.0.0"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314667 17146 flags.go:52] FLAG: --allow-privileged="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314671 17146 flags.go:52] FLAG: --alsologtostderr="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314675 17146 flags.go:52] FLAG: --anonymous-auth="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314677 17146 flags.go:52] FLAG: --application-metrics-count-limit="100"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314680 17146 flags.go:52] FLAG: --authentication-token-webhook="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314682 17146 flags.go:52] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314687 17146 flags.go:52] FLAG: --authorization-mode="Webhook"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314691 17146 flags.go:52] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314694 17146 flags.go:52] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314697 17146 flags.go:52] FLAG: --azure-container-registry-config=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314700 17146 flags.go:52] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314703 17146 flags.go:52] FLAG: --bootstrap-checkpoint-path=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314705 17146 flags.go:52] FLAG: --bootstrap-kubeconfig=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314708 17146 flags.go:52] FLAG: --cadvisor-port="4194"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314712 17146 flags.go:52] FLAG: --cert-dir="/var/lib/kubelet/pki"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314715 17146 flags.go:52] FLAG: --cgroup-driver="cgroupfs"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314717 17146 flags.go:52] FLAG: --cgroup-root=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314720 17146 flags.go:52] FLAG: --cgroups-per-qos="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314722 17146 flags.go:52] FLAG: --chaos-chance="0"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314727 17146 flags.go:52] FLAG: --client-ca-file="/var/lib/kubernetes/ca.pem"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314730 17146 flags.go:52] FLAG: --cloud-config=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314732 17146 flags.go:52] FLAG: --cloud-provider=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314734 17146 flags.go:52] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314739 17146 flags.go:52] FLAG: --cluster-dns="[10.32.0.10]"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314745 17146 flags.go:52] FLAG: --cluster-domain="cluster.local"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314748 17146 flags.go:52] FLAG: --cni-bin-dir=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314750 17146 flags.go:52] FLAG: --cni-conf-dir=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314753 17146 flags.go:52] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314756 17146 flags.go:52] FLAG: --container-runtime="remote"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314758 17146 flags.go:52] FLAG: --container-runtime-endpoint="unix:///var/run/cri-containerd.sock"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314761 17146 flags.go:52] FLAG: --containerd="unix:///var/run/containerd.sock"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314763 17146 flags.go:52] FLAG: --containerized="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314766 17146 flags.go:52] FLAG: --contention-profiling="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314768 17146 flags.go:52] FLAG: --cpu-cfs-quota="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314771 17146 flags.go:52] FLAG: --cpu-manager-policy="none"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314774 17146 flags.go:52] FLAG: --cpu-manager-reconcile-period="10s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314776 17146 flags.go:52] FLAG: --docker="unix:///var/run/docker.sock"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314779 17146 flags.go:52] FLAG: --docker-disable-shared-pid="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314782 17146 flags.go:52] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314785 17146 flags.go:52] FLAG: --docker-env-metadata-whitelist=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314787 17146 flags.go:52] FLAG: --docker-only="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314790 17146 flags.go:52] FLAG: --docker-root="/var/lib/docker"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314792 17146 flags.go:52] FLAG: --docker-tls="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314795 17146 flags.go:52] FLAG: --docker-tls-ca="ca.pem"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314798 17146 flags.go:52] FLAG: --docker-tls-cert="cert.pem"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314800 17146 flags.go:52] FLAG: --docker-tls-key="key.pem"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314803 17146 flags.go:52] FLAG: --dynamic-config-dir=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314807 17146 flags.go:52] FLAG: --enable-controller-attach-detach="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314809 17146 flags.go:52] FLAG: --enable-custom-metrics="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314812 17146 flags.go:52] FLAG: --enable-debugging-handlers="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314814 17146 flags.go:52] FLAG: --enable-load-reader="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314817 17146 flags.go:52] FLAG: --enable-server="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314819 17146 flags.go:52] FLAG: --enforce-node-allocatable="[pods]"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314822 17146 flags.go:52] FLAG: --event-burst="10"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314825 17146 flags.go:52] FLAG: --event-qps="5"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314827 17146 flags.go:52] FLAG: --event-storage-age-limit="default=0"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314830 17146 flags.go:52] FLAG: --event-storage-event-limit="default=0"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314832 17146 flags.go:52] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314842 17146 flags.go:52] FLAG: --eviction-max-pod-grace-period="0"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314844 17146 flags.go:52] FLAG: --eviction-minimum-reclaim=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314848 17146 flags.go:52] FLAG: --eviction-pressure-transition-period="5m0s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314851 17146 flags.go:52] FLAG: --eviction-soft=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314854 17146 flags.go:52] FLAG: --eviction-soft-grace-period=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314856 17146 flags.go:52] FLAG: --exit-on-lock-contention="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314859 17146 flags.go:52] FLAG: --experimental-allocatable-ignore-eviction="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314861 17146 flags.go:52] FLAG: --experimental-allowed-unsafe-sysctls="[]"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314893 17146 flags.go:52] FLAG: --experimental-bootstrap-kubeconfig=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314896 17146 flags.go:52] FLAG: --experimental-check-node-capabilities-before-mount="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314898 17146 flags.go:52] FLAG: --experimental-dockershim="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314901 17146 flags.go:52] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314903 17146 flags.go:52] FLAG: --experimental-fail-swap-on="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314906 17146 flags.go:52] FLAG: --experimental-kernel-memcg-notification="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314908 17146 flags.go:52] FLAG: --experimental-mounter-path=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314911 17146 flags.go:52] FLAG: --experimental-qos-reserved=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314914 17146 flags.go:52] FLAG: --fail-swap-on="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314916 17146 flags.go:52] FLAG: --feature-gates=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314920 17146 flags.go:52] FLAG: --file-check-frequency="20s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314923 17146 flags.go:52] FLAG: --global-housekeeping-interval="1m0s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314925 17146 flags.go:52] FLAG: --google-json-key=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314928 17146 flags.go:52] FLAG: --hairpin-mode="promiscuous-bridge"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314930 17146 flags.go:52] FLAG: --healthz-bind-address="127.0.0.1"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314933 17146 flags.go:52] FLAG: --healthz-port="10248"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314936 17146 flags.go:52] FLAG: --host-ipc-sources="[*]"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314939 17146 flags.go:52] FLAG: --host-network-sources="[*]"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314943 17146 flags.go:52] FLAG: --host-pid-sources="[*]"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314945 17146 flags.go:52] FLAG: --hostname-override=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314948 17146 flags.go:52] FLAG: --housekeeping-interval="10s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314951 17146 flags.go:52] FLAG: --http-check-frequency="20s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314953 17146 flags.go:52] FLAG: --image-gc-high-threshold="85"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314956 17146 flags.go:52] FLAG: --image-gc-low-threshold="80"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314958 17146 flags.go:52] FLAG: --image-pull-progress-deadline="2m0s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314961 17146 flags.go:52] FLAG: --image-service-endpoint=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314963 17146 flags.go:52] FLAG: --init-config-dir=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314966 17146 flags.go:52] FLAG: --iptables-drop-bit="15"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314968 17146 flags.go:52] FLAG: --iptables-masquerade-bit="14"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314971 17146 flags.go:52] FLAG: --keep-terminated-pod-volumes="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314973 17146 flags.go:52] FLAG: --kube-api-burst="10"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314976 17146 flags.go:52] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314979 17146 flags.go:52] FLAG: --kube-api-qps="5"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314981 17146 flags.go:52] FLAG: --kube-reserved=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314984 17146 flags.go:52] FLAG: --kube-reserved-cgroup=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314986 17146 flags.go:52] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314989 17146 flags.go:52] FLAG: --kubelet-cgroups=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314991 17146 flags.go:52] FLAG: --lock-file=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314994 17146 flags.go:52] FLAG: --log-backtrace-at=":0"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314997 17146 flags.go:52] FLAG: --log-cadvisor-usage="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315000 17146 flags.go:52] FLAG: --log-dir=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315002 17146 flags.go:52] FLAG: --log-flush-frequency="5s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315005 17146 flags.go:52] FLAG: --logtostderr="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315007 17146 flags.go:52] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315011 17146 flags.go:52] FLAG: --make-iptables-util-chains="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315013 17146 flags.go:52] FLAG: --manifest-url=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315016 17146 flags.go:52] FLAG: --manifest-url-header=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315020 17146 flags.go:52] FLAG: --master-service-namespace="default"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315023 17146 flags.go:52] FLAG: --max-open-files="1000000"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315027 17146 flags.go:52] FLAG: --max-pods="110"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315030 17146 flags.go:52] FLAG: --maximum-dead-containers="-1"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315032 17146 flags.go:52] FLAG: --maximum-dead-containers-per-container="1"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315035 17146 flags.go:52] FLAG: --minimum-container-ttl-duration="0s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315037 17146 flags.go:52] FLAG: --minimum-image-ttl-duration="2m0s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315040 17146 flags.go:52] FLAG: --network-plugin="cni"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315044 17146 flags.go:52] FLAG: --network-plugin-mtu="0"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315048 17146 flags.go:52] FLAG: --node-ip=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315051 17146 flags.go:52] FLAG: --node-labels=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315057 17146 flags.go:52] FLAG: --node-status-update-frequency="10s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315062 17146 flags.go:52] FLAG: --non-masquerade-cidr="10.0.0.0/8"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315065 17146 flags.go:52] FLAG: --oom-score-adj="-999"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315068 17146 flags.go:52] FLAG: --pod-cidr="10.200.1.0/24"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315070 17146 flags.go:52] FLAG: --pod-infra-container-image="gcr.io/google_containers/pause-amd64:3.0"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315073 17146 flags.go:52] FLAG: --pod-manifest-path=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315076 17146 flags.go:52] FLAG: --pods-per-core="0"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315079 17146 flags.go:52] FLAG: --port="10250"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315081 17146 flags.go:52] FLAG: --protect-kernel-defaults="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315084 17146 flags.go:52] FLAG: --provider-id=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315086 17146 flags.go:52] FLAG: --read-only-port="10255"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315089 17146 flags.go:52] FLAG: --really-crash-for-testing="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315092 17146 flags.go:52] FLAG: --register-node="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315095 17146 flags.go:52] FLAG: --register-schedulable="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315097 17146 flags.go:52] FLAG: --register-with-taints=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315101 17146 flags.go:52] FLAG: --registry-burst="10"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315103 17146 flags.go:52] FLAG: --registry-qps="5"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315106 17146 flags.go:52] FLAG: --require-kubeconfig="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315109 17146 flags.go:52] FLAG: --resolv-conf="/etc/resolv.conf"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315111 17146 flags.go:52] FLAG: --rkt-api-endpoint="localhost:15441"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315114 17146 flags.go:52] FLAG: --rkt-path=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315117 17146 flags.go:52] FLAG: --rkt-stage1-image=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315119 17146 flags.go:52] FLAG: --root-dir="/var/lib/kubelet"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315122 17146 flags.go:52] FLAG: --rotate-certificates="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315124 17146 flags.go:52] FLAG: --runonce="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315127 17146 flags.go:52] FLAG: --runtime-cgroups=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315129 17146 flags.go:52] FLAG: --runtime-request-timeout="15m0s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315132 17146 flags.go:52] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315135 17146 flags.go:52] FLAG: --serialize-image-pulls="true"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315137 17146 flags.go:52] FLAG: --stderrthreshold="2"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315140 17146 flags.go:52] FLAG: --storage-driver-buffer-duration="1m0s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315143 17146 flags.go:52] FLAG: --storage-driver-db="cadvisor"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315145 17146 flags.go:52] FLAG: --storage-driver-host="localhost:8086"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315148 17146 flags.go:52] FLAG: --storage-driver-password="root"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315150 17146 flags.go:52] FLAG: --storage-driver-secure="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315153 17146 flags.go:52] FLAG: --storage-driver-table="stats"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315155 17146 flags.go:52] FLAG: --storage-driver-user="root"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315158 17146 flags.go:52] FLAG: --streaming-connection-idle-timeout="4h0m0s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315160 17146 flags.go:52] FLAG: --sync-frequency="1m0s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315163 17146 flags.go:52] FLAG: --system-cgroups=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315166 17146 flags.go:52] FLAG: --system-reserved=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315168 17146 flags.go:52] FLAG: --system-reserved-cgroup=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315171 17146 flags.go:52] FLAG: --tls-cert-file="/var/lib/kubelet/k8s-worker-1.pem"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315173 17146 flags.go:52] FLAG: --tls-private-key-file="/var/lib/kubelet/k8s-worker-1-key.pem"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315176 17146 flags.go:52] FLAG: --v="2"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315179 17146 flags.go:52] FLAG: --version="false"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315184 17146 flags.go:52] FLAG: --vmodule=""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315187 17146 flags.go:52] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315190 17146 flags.go:52] FLAG: --volume-stats-agg-period="1m0s"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315209 17146 feature_gate.go:220] feature gates: &{{} map[]}
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315227 17146 controller.go:114] kubelet config controller: starting controller
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315231 17146 controller.go:118] kubelet config controller: validating combination of defaults and flags
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.352139 17146 mount_linux.go:202] Detected OS with systemd
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.354285 17146 server.go:182] Version: v1.9.0
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.354310 17146 feature_gate.go:220] feature gates: &{{} map[]}
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.354364 17146 plugins.go:101] No cloud provider specified.
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.354371 17146 server.go:303] No cloud provider specified: "" from the config file: ""
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.357031 17146 manager.go:151] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.357954 17146 fs.go:139] Filesystem UUIDs: map[3bb37c5d-8f9e-4ea6-a103-c8b4a03c99f9:/dev/dm-0 764c16e1-5712-4212-8b34-de5f2d6f039d:/dev/dm-2 7ca96e9b-437c-42ed-895c-7be12796c8a0:/dev/sda1 d5272035-3c33-4816-a127-d19febbe1b4c:/dev/dm-1]
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.357966 17146 fs.go:140] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 minor:17 fsType:tmpfs blockSize:0} /dev/mapper/centos-root:{mountpoint:/ major:253 minor:0 fsType:xfs blockSize:0} /dev/sda1:{mountpoint:/boot major:8 minor:1 fsType:xfs blockSize:0} /dev/mapper/centos-home:{mountpoint:/home major:253 minor:2 fsType:xfs blockSize:0}]
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.358882 17146 manager.go:225] Machine: {NumCores:1 CpuFrequency:2904002 MemoryCapacity:1928765440 HugePages:[{PageSize:2048 NumPages:0}] MachineID:996415857b5549c38c6cd6912af487f2 SystemUUID:B2E09B4B-9CD9-493C-A94A-6220D7761C47 BootID:a4c19123-637f-4ba2-a145-ab62fd458d16 Filesystems:[{Device:tmpfs DeviceMajor:0 DeviceMinor:17 Capacity:964382720 Type:vfs Inodes:235445 HasInodes:true} {Device:/dev/mapper/centos-root DeviceMajor:253 DeviceMinor:0 Capacity:43985149952 Type:vfs Inodes:21487616 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:1063256064 Type:vfs Inodes:524288 HasInodes:true} {Device:/dev/mapper/centos-home DeviceMajor:253 DeviceMinor:2 Capacity:21472735232 Type:vfs Inodes:10489856 HasInodes:true}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:44006637568 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:2147483648 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:21483225088 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:68719476736 Scheduler:cfq}] NetworkDevices:[{Name:enp0s3 MacAddress:08:00:27:16:98:18 Speed:1000 Mtu:1500} {Name:enp0s8 MacAddress:08:00:27:8e:54:25 Speed:100 Mtu:1500} {Name:enp0s9 MacAddress:08:00:27:38:71:d7 Speed:100 Mtu:1500}] Topology:[{Id:0 Memory:2147016704 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:4194304 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359082 17146 manager.go:231] Version: {KernelVersion:3.10.0-693.11.1.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:Unknown DockerAPIVersion:Unknown CadvisorVersion: CadvisorRevision:}
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359356 17146 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359525 17146 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359531 17146 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359597 17146 container_manager_linux.go:266] Creating device plugin manager: false
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359639 17146 server.go:693] Using root directory: /var/lib/kubelet
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359662 17146 kubelet.go:313] Watching apiserver
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: W0202 20:06:41.398258 17146 kubelet_network.go:132] Hairpin mode set to "promiscuous-bridge" but container runtime is "remote", ignoring
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.398282 17146 kubelet.go:571] Hairpin mode set to "none"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.398493 17146 plugins.go:190] Loaded network plugin "cni"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.398523 17146 remote_runtime.go:43] Connecting to runtime service unix:///var/run/cri-containerd.sock
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.401934 17146 kuberuntime_manager.go:186] Container runtime cri-containerd initialized, version: 1.0.0-beta.1, apiVersion: 0.0.0
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.401988 17146 kuberuntime_manager.go:918] updating runtime config through cri with podcidr 10.200.1.0/24
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.404840 17146 kubelet_network.go:196] Setting Pod CIDR: -> 10.200.1.0/24
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405108 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/aws-ebs"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405121 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/empty-dir"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405131 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/gce-pd"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405141 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/git-repo"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405149 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/host-path"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405157 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/nfs"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405167 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/secret"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405175 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/iscsi"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405184 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/glusterfs"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405193 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/rbd"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405203 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/cinder"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405212 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/quobyte"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405219 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/cephfs"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405230 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/downward-api"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405239 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/fc"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405247 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/flocker"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405256 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/azure-file"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405266 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/configmap"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405275 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/vsphere-volume"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405284 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/azure-disk"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405292 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/photon-pd"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405300 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/projected"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405309 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/portworx-volume"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405320 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/scaleio"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405369 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/local-volume"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405380 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/storageos"
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405497 17146 server.go:755] Started kubelet
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.417564 17146 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.418569 17146 server.go:129] Starting to listen on 0.0.0.0:10250
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.419068 17146 server.go:299] Adding debug handlers to kubelet server.
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.419933 17146 server.go:149] Starting to listen read-only on 0.0.0.0:10255
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.421515 17146 kubelet_node_status.go:431] Recording NodeHasSufficientDisk event message for node k8s-worker-1
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.421535 17146 kubelet_node_status.go:431] Recording NodeHasSufficientMemory event message for node k8s-worker-1
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.421541 17146 kubelet_node_status.go:431] Recording NodeHasNoDiskPressure event message for node k8s-worker-1
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.421984 17146 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.421999 17146 status_manager.go:140] Starting to sync pod status with apiserver
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.422009 17146 kubelet.go:1767] Starting kubelet main sync loop.
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.422026 17146 kubelet.go:1778] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: E0202 20:06:41.422690 17146 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.422711 17146 volume_manager.go:245] The desired_state_of_world populator starts
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.422715 17146 volume_manager.go:247] Starting Kubelet Volume Manager
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: E0202 20:06:41.437026 17146 cri_stats_provider.go:219] Failed to get the info of the filesystem with id "3bb37c5d-8f9e-4ea6-a103-c8b4a03c99f9": cannot find device "/dev/dm-0" in partitions.
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: E0202 20:06:41.437044 17146 kubelet.go:1275] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.439557 17146 factory.go:136] Registering containerd factory
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.439705 17146 factory.go:54] Registering systemd factory
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.439892 17146 factory.go:86] Registering Raw factory
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.440042 17146 manager.go:1178] Started watching for new ooms in manager
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.443180 17146 manager.go:329] Starting recovery of all containers
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.488397 17146 manager.go:334] Recovery completed
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.567123 17146 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.569173 17146 kubelet_node_status.go:431] Recording NodeHasSufficientDisk event message for node k8s-worker-1
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.569194 17146 kubelet_node_status.go:431] Recording NodeHasSufficientMemory event message for node k8s-worker-1
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.569204 17146 kubelet_node_status.go:431] Recording NodeHasNoDiskPressure event message for node k8s-worker-1
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.569218 17146 kubelet_node_status.go:82] Attempting to register node k8s-worker-1
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.578360 17146 kubelet_node_status.go:127] Node k8s-worker-1 was previously registered
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.578397 17146 kubelet_node_status.go:85] Successfully registered node k8s-worker-1
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.580040 17146 kuberuntime_manager.go:918] updating runtime config through cri with podcidr
- Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.581013 17146 kubelet_network.go:196] Setting Pod CIDR: 10.200.1.0/24 ->
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.422608 17146 kubelet.go:1836] SyncLoop (ADD, "api"): "kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818), weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)"
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523192 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/08e4b826-0853-11e8-8adc-080027169818-kube-dns-config") pod "kube-dns-6c857864fb-ljmf8" (UID: "08e4b826-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523228 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-token-hmv74" (UniqueName: "kubernetes.io/secret/08e4b826-0853-11e8-8adc-080027169818-kube-dns-token-hmv74") pod "kube-dns-6c857864fb-ljmf8" (UID: "08e4b826-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523252 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "weavedb" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-weavedb") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523270 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-lib-modules") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523290 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "weave-net-token-m2k9v" (UniqueName: "kubernetes.io/secret/09b34091-0853-11e8-8adc-080027169818-weave-net-token-m2k9v") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523307 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-bin") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523324 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin2" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-bin2") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523341 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-conf" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-conf") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523358 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "dbus" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-dbus") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523376 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-xtables-lock") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624100 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "dbus" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-dbus") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624202 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-xtables-lock") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624289 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "cni-bin" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-bin") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624350 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "cni-bin2" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-bin2") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624403 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "cni-conf" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-conf") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624567 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-lib-modules") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624742 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "weave-net-token-m2k9v" (UniqueName: "kubernetes.io/secret/09b34091-0853-11e8-8adc-080027169818-weave-net-token-m2k9v") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624812 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/08e4b826-0853-11e8-8adc-080027169818-kube-dns-config") pod "kube-dns-6c857864fb-ljmf8" (UID: "08e4b826-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624874 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "kube-dns-token-hmv74" (UniqueName: "kubernetes.io/secret/08e4b826-0853-11e8-8adc-080027169818-kube-dns-token-hmv74") pod "kube-dns-6c857864fb-ljmf8" (UID: "08e4b826-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624929 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "weavedb" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-weavedb") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.625086 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "weavedb" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-weavedb") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.626295 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "dbus" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-dbus") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.626411 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-xtables-lock") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.626479 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "cni-bin" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-bin") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.626744 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "cni-bin2" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-bin2") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.626838 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "cni-conf" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-conf") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.626910 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-lib-modules") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.654230 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "weave-net-token-m2k9v" (UniqueName: "kubernetes.io/secret/09b34091-0853-11e8-8adc-080027169818-weave-net-token-m2k9v") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.656284 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/08e4b826-0853-11e8-8adc-080027169818-kube-dns-config") pod "kube-dns-6c857864fb-ljmf8" (UID: "08e4b826-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.657987 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "kube-dns-token-hmv74" (UniqueName: "kubernetes.io/secret/08e4b826-0853-11e8-8adc-080027169818-kube-dns-token-hmv74") pod "kube-dns-6c857864fb-ljmf8" (UID: "08e4b826-0853-11e8-8adc-080027169818")
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.730960 17146 kuberuntime_manager.go:385] No sandbox for pod "kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818)" can be found. Need to start a new one
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.747170 17146 kuberuntime_manager.go:385] No sandbox for pod "weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)" can be found. Need to start a new one
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:46 transport: http2Client.notifyError got notified that the client transport was broken EOF.
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:46 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:46 transport: http2Client.notifyError got notified that the client transport was broken read unix @->/var/run/cri-containerd.sock: read: connection reset by peer.
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:46 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.763872 17146 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Internal desc = transport is closing
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.763935 17146 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)" failed: rpc error: code = Internal desc = transport is closing
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.763978 17146 kuberuntime_manager.go:647] createPodSandbox for pod "weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)" failed: rpc error: code = Internal desc = transport is closing
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.764033 17146 pod_workers.go:186] Error syncing pod 09b34091-0853-11e8-8adc-080027169818 ("weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)"), skipping: failed to "CreatePodSandbox" for "weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)" with CreatePodSandboxError: "CreatePodSandbox for pod \"weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)\" failed: rpc error: code = Internal desc = transport is closing"
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.764479 17146 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Internal desc = transport is closing
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.764498 17146 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818)" failed: rpc error: code = Internal desc = transport is closing
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.764504 17146 kuberuntime_manager.go:647] createPodSandbox for pod "kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818)" failed: rpc error: code = Internal desc = transport is closing
- Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.764526 17146 pod_workers.go:186] Error syncing pod 08e4b826-0853-11e8-8adc-080027169818 ("kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818)"), skipping: failed to "CreatePodSandbox" for "kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818)\" failed: rpc error: code = Internal desc = transport is closing"
- Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.429719 17146 remote_runtime.go:169] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.429857 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.429879 17146 kubelet_pods.go:1045] Error listing containers: &status.statusError{Code:14, Message:"grpc: the connection is unavailable", Details:[]*any.Any(nil)}
- Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.429931 17146 kubelet.go:1925] Failed cleaning pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.562905 17146 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.562979 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.563006 17146 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:47 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
- Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:47 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
- Feb 02 20:06:48 k8s-worker-1 kubelet[17146]: E0202 20:06:48.563841 17146 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:48 k8s-worker-1 kubelet[17146]: E0202 20:06:48.563919 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:48 k8s-worker-1 kubelet[17146]: E0202 20:06:48.563949 17146 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:49 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
- Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.432987 17146 remote_runtime.go:169] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.433124 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.433165 17146 kubelet_pods.go:1029] Error listing containers: &status.statusError{Code:14, Message:"grpc: the connection is unavailable", Details:[]*any.Any(nil)}
- Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.433220 17146 kubelet.go:1925] Failed cleaning pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:49 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
- Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.564550 17146 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.564660 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.564692 17146 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:50 k8s-worker-1 kubelet[17146]: E0202 20:06:50.565029 17146 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:50 k8s-worker-1 kubelet[17146]: E0202 20:06:50.565106 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:50 k8s-worker-1 kubelet[17146]: E0202 20:06:50.565136 17146 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.429442 17146 remote_runtime.go:169] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.429527 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.429556 17146 kubelet_pods.go:1029] Error listing containers: &status.statusError{Code:14, Message:"grpc: the connection is unavailable", Details:[]*any.Any(nil)}
- Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.429595 17146 kubelet.go:1925] Failed cleaning pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.546988 17146 remote_runtime.go:434] Status from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.547070 17146 kubelet.go:2089] Container runtime sanity check failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.565719 17146 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.565808 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.565841 17146 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.598123 17146 remote_runtime.go:69] Version from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.598189 17146 kuberuntime_manager.go:245] Get remote runtime version failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
- Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
- Feb 02 20:06:51 k8s-worker-1 systemd[1]: Stopping Kubernetes Kubelet...
- ```