ContainerD <--> kubelet not working
hoeghh, Feb 2nd, 2018
  1. ContainerD issue
  2.  
  3. Component statuses (kubectl get componentstatuses):
  4. ```
  5. NAME                 STATUS    MESSAGE              ERROR
  6. controller-manager   Healthy   ok
  7. etcd-0               Healthy   {"health": "true"}
  8. scheduler            Healthy   ok
  9. ```
  10.  
  11. Pods:
  12. ```
  13. NAMESPACE     NAME                        READY     STATUS              RESTARTS   AGE   IP              NODE
  14. kube-system   kube-dns-6c857864fb-ljmf8   0/3       ContainerCreating   0          6m    <none>          k8s-worker-1
  15. kube-system   weave-net-xrr2k             0/2       ContainerCreating   0          6m    192.168.50.31   k8s-worker-1
  16. ```
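Both pods have been stuck in ContainerCreating for several minutes. The pod events are usually the quickest way to see why sandbox creation is failing; a minimal example, assuming kubectl is pointed at this cluster:
```
kubectl -n kube-system describe pod kube-dns-6c857864fb-ljmf8 | tail -n 20
kubectl -n kube-system describe pod weave-net-xrr2k | tail -n 20
```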
  17.  
  18. Nodes:
  19. ```
  20. NAME           STATUS    ROLES     AGE   VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
  21. k8s-worker-1   Ready     <none>    6m    v1.9.0    <none>        CentOS Linux 7 (Core)   3.10.0-693.11.1.el7.x86_64   cri-containerd://1.0.0-beta.1
  22. ```
  23.  
  24. Containerd config default:
  25. ```
  26. [root@k8s-worker-1 vagrant]# /usr/local/bin/containerd config default
  27. root = "/var/lib/containerd"
  28. state = "/run/containerd"
  29. no_subreaper = false
  30. oom_score = 0
  31.  
  32. [grpc]
  33. address = "/run/containerd/containerd.sock"
  34. uid = 0
  35. gid = 0
  36.  
  37. [debug]
  38. address = "/run/containerd/debug.sock"
  39. uid = 0
  40. gid = 0
  41. level = "info"
  42.  
  43. [metrics]
  44. address = ""
  45.  
  46. [cgroup]
  47. path = ""
  48. ```
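For what it's worth, `containerd config default` just prints the built-in defaults; it is not necessarily what the running daemon loaded. A minimal sketch for persisting those defaults to the path containerd reads on startup (assuming the conventional /etc/containerd/config.toml location):
```
/usr/local/bin/containerd config default > /etc/containerd/config.toml
systemctl restart containerd
systemctl status containerd --no-pager
```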
  49.  
  50. Permissions on the socket from the containerd default config:
  51. ```srw-rw----. 1 root root 0 Feb 2 19:55 /run/containerd/containerd.sock```
  52.  
  53. Permissions on the socket referenced in the kubelet service file:
  54. ```srwxr-xr-x. 1 root root 0 Feb 2 20:14 /var/run/cri-containerd.sock```
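The kubelet talks CRI to /var/run/cri-containerd.sock (see kubelet.service below), not to containerd's own socket above. If crictl is installed (it is not part of this paste, so this is only an assumption), it can confirm that the endpoint answers:
```
crictl --runtime-endpoint unix:///var/run/cri-containerd.sock version
crictl --runtime-endpoint unix:///var/run/cri-containerd.sock pods
```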
  55.  
  56.  
  57. CNI network configs created for containerd:
  58. ```
  59. [root@k8s-worker-1 net.d]# cat 10-bridge.conf
  60. {
  61.   "cniVersion": "0.3.1",
  62.   "name": "bridge",
  63.   "type": "bridge",
  64.   "bridge": "cnio0",
  65.   "isGateway": true,
  66.   "ipMasq": true,
  67.   "ipam": {
  68.     "type": "host-local",
  69.     "ranges": [
  70.       [{"subnet": "10.200.1.0/24"}]
  71.     ],
  72.     "routes": [{"dst": "0.0.0.0/0"}]
  73.   }
  74. }
  75. [root@k8s-worker-1 net.d]# cat 99-loopback.conf
  76. {
  77. "cniVersion": "0.3.1",
  78. "type": "loopback"
  79. }
  80. ```
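The kubelet log further down shows --cni-conf-dir and --cni-bin-dir left empty, so it should fall back to the usual defaults (/etc/cni/net.d and /opt/cni/bin). A quick sanity check that the configs above live in the expected directory and that the plugins they reference (bridge, host-local, loopback) are actually installed:
```
ls -l /etc/cni/net.d/
ls -l /opt/cni/bin/ | egrep 'bridge|host-local|loopback'
```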
  81.  
  82. Routes on the worker node:
  83. ```
  84. [root@k8s-worker-1 net.d]# route -n
  85. Kernel IP routing table
  86. Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
  87. 0.0.0.0         10.0.2.2        0.0.0.0         UG    100    0        0 enp0s3
  88. 10.0.2.0        0.0.0.0         255.255.255.0   U     100    0        0 enp0s3
  89. 172.28.128.0    0.0.0.0         255.255.255.0   U     100    0        0 enp0s8
  90. 192.168.50.0    0.0.0.0         255.255.255.0   U     100    0        0 enp0s9
  91. ```
  92.  
  93. /etc/hosts on the worker node:
  94. ```
  95. [root@k8s-worker-1 net.d]# cat /etc/hosts
  96. 127.0.0.1 k8s-worker-1 k8s-worker-1
  97. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  98. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  99. 192.168.50.4 k8s-loadbalancer
  100. 192.168.50.20 k8s-master
  101. 192.168.50.11 k8s-etcd-1
  102. 192.168.50.21 k8s-master-1
  103. 192.168.50.31 k8s-worker-1
  104. ```
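Note that the worker's own hostname is mapped to 127.0.0.1 on the first line as well as to 192.168.50.31 further down; it may be worth confirming which address the name actually resolves to on the node:
```
getent hosts k8s-worker-1
hostname -i
```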
  105.  
  106. IP addresses on the worker node:
  107. ```
  108. [root@k8s-worker-1 net.d]# ip add
  109. 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
  110. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  111. inet 127.0.0.1/8 scope host lo
  112. valid_lft forever preferred_lft forever
  113. inet6 ::1/128 scope host
  114. valid_lft forever preferred_lft forever
  115. 2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
  116. link/ether 08:00:27:16:98:18 brd ff:ff:ff:ff:ff:ff
  117. inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
  118. valid_lft 84994sec preferred_lft 84994sec
  119. inet6 fe80::5a11:34ec:f996:4f3/64 scope link
  120. valid_lft forever preferred_lft forever
  121. 3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
  122. link/ether 08:00:27:8e:54:25 brd ff:ff:ff:ff:ff:ff
  123. inet 172.28.128.5/24 brd 172.28.128.255 scope global dynamic enp0s8
  124. valid_lft 942sec preferred_lft 942sec
  125. inet6 fe80::a00:27ff:fe8e:5425/64 scope link
  126. valid_lft forever preferred_lft forever
  127. 4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
  128. link/ether 08:00:27:38:71:d7 brd ff:ff:ff:ff:ff:ff
  129. inet 192.168.50.31/24 brd 192.168.50.255 scope global enp0s9
  130. valid_lft forever preferred_lft forever
  131. inet6 fe80::a00:27ff:fe38:71d7/64 scope link
  132. valid_lft forever preferred_lft forever
  133. ```
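There is no cnio0 bridge (the bridge name from 10-bridge.conf above) in this output yet; the CNI bridge plugin would normally create it the first time a pod sandbox gets networked. A quick check to repeat once pods start coming up:
```
ip link show cnio0
ip route | grep 10.200.1.0
```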
  134.  
  135.  
  136.  
  137. Output of kubectl describe node for the worker:
  138. ```
  139. Name: k8s-worker-1
  140. Roles: <none>
  141. Labels: beta.kubernetes.io/arch=amd64
  142. beta.kubernetes.io/os=linux
  143. kubernetes.io/hostname=k8s-worker-1
  144. Annotations: node.alpha.kubernetes.io/ttl=0
  145. volumes.kubernetes.io/controller-managed-attach-detach=true
  146. Taints: <none>
  147. CreationTimestamp: Fri, 02 Feb 2018 19:55:35 +0000
  148. Conditions:
  149. Type Status LastHeartbeatTime LastTransitionTime Reason Message
  150. ---- ------ ----------------- ------------------ ------ -------
  151. OutOfDisk False Fri, 02 Feb 2018 20:02:34 +0000 Fri, 02 Feb 2018 19:55:35 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
  152. MemoryPressure False Fri, 02 Feb 2018 20:02:34 +0000 Fri, 02 Feb 2018 19:55:35 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
  153. DiskPressure False Fri, 02 Feb 2018 20:02:34 +0000 Fri, 02 Feb 2018 19:55:35 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
  154. Ready True Fri, 02 Feb 2018 20:02:34 +0000 Fri, 02 Feb 2018 19:55:35 +0000 KubeletReady kubelet is posting ready status
  155. Addresses:
  156. InternalIP: 192.168.50.31
  157. Hostname: k8s-worker-1
  158. Capacity:
  159. cpu: 1
  160. memory: 1883560Ki
  161. pods: 110
  162. Allocatable:
  163. cpu: 1
  164. memory: 1781160Ki
  165. pods: 110
  166. System Info:
  167. Machine ID: 996415857b5549c38c6cd6912af487f2
  168. System UUID: B2E09B4B-9CD9-493C-A94A-6220D7761C47
  169. Boot ID: a4c19123-637f-4ba2-a145-ab62fd458d16
  170. Kernel Version: 3.10.0-693.11.1.el7.x86_64
  171. OS Image: CentOS Linux 7 (Core)
  172. Operating System: linux
  173. Architecture: amd64
  174. Container Runtime Version: cri-containerd://1.0.0-beta.1
  175. Kubelet Version: v1.9.0
  176. Kube-Proxy Version: v1.9.0
  177. ExternalID: k8s-worker-1
  178. Non-terminated Pods: (2 in total)
  179. Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
  180. --------- ---- ------------ ---------- --------------- -------------
  181. kube-system kube-dns-6c857864fb-ljmf8 260m (26%) 0 (0%) 110Mi (6%) 170Mi (9%)
  182. kube-system weave-net-xrr2k 20m (2%) 0 (0%) 0 (0%) 0 (0%)
  183. Allocated resources:
  184. (Total limits may be over 100 percent, i.e., overcommitted.)
  185. CPU Requests CPU Limits Memory Requests Memory Limits
  186. ------------ ---------- --------------- -------------
  187. 280m (28%) 0 (0%) 110Mi (6%) 170Mi (9%)
  188. Events:
  189. Type Reason Age From Message
  190. ---- ------ ---- ---- -------
  191. Normal Starting 7m kubelet, k8s-worker-1 Starting kubelet.
  192. Warning InvalidDiskCapacity 7m kubelet, k8s-worker-1 invalid capacity 0 on image filesystem
  193. Normal NodeAllocatableEnforced 7m kubelet, k8s-worker-1 Updated Node Allocatable limit across pods
  194. Normal NodeHasSufficientDisk 7m (x2 over 7m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientDisk
  195. Normal NodeHasSufficientMemory 7m (x2 over 7m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientMemory
  196. Normal NodeHasNoDiskPressure 7m (x2 over 7m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasNoDiskPressure
  197. Normal NodeReady 7m kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeReady
  198. Normal Starting 7m kube-proxy, k8s-worker-1 Starting kube-proxy.
  199. Normal Starting 6m kubelet, k8s-worker-1 Starting kubelet.
  200. Normal NodeHasSufficientDisk 6m (x2 over 6m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientDisk
  201. Normal NodeHasSufficientMemory 6m (x2 over 6m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientMemory
  202. Normal NodeHasNoDiskPressure 6m (x2 over 6m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasNoDiskPressure
  203. Warning InvalidDiskCapacity 6m kubelet, k8s-worker-1 invalid capacity 0 on image filesystem
  204. Normal NodeAllocatableEnforced 6m kubelet, k8s-worker-1 Updated Node Allocatable limit across pods
  205. Normal NodeHasNoDiskPressure 6m kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasNoDiskPressure
  206. Normal NodeAllocatableEnforced 6m kubelet, k8s-worker-1 Updated Node Allocatable limit across pods
  207. Warning InvalidDiskCapacity 6m kubelet, k8s-worker-1 invalid capacity 0 on image filesystem
  208. Normal NodeHasSufficientDisk 6m kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientDisk
  209. Normal NodeHasSufficientMemory 6m kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientMemory
  210. Normal Starting 6m kubelet, k8s-worker-1 Starting kubelet.
  211. Normal Starting 6m kubelet, k8s-worker-1 Starting kubelet.
  212. Normal NodeHasSufficientDisk 6m (x2 over 6m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientDisk
  213. Normal NodeHasSufficientMemory 6m (x2 over 6m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasSufficientMemory
  214. Normal NodeHasNoDiskPressure 6m (x2 over 6m) kubelet, k8s-worker-1 Node k8s-worker-1 status is now: NodeHasNoDiskPressure
  215. Warning InvalidDiskCapacity 6m kubelet, k8s-worker-1 invalid capacity 0 on image filesystem
  216. Normal NodeAllocatableEnforced 6m kubelet, k8s-worker-1 Updated Node Allocatable limit across pods
  217. Normal Starting 6m kubelet, k8s-worker-1 Starting kubelet.
  218. Normal NodeAllocatableEnforced 6m kubelet, k8s-worker-1 Updated Node Allocatable limit across pods
  219. Warning InvalidDiskCapacity 6m kubelet, k8s-worker-1 invalid capacity 0 on image filesystem
  220. ```
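The node events above only describe the node itself; whatever is blocking the kube-dns and weave-net sandboxes should show up as namespace events instead, e.g.:
```
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp
```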
  221.  
  222. containerd status on worker:
  223. ```
  224. ● containerd.service - containerd container runtime
  225. Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: disabled)
  226. Active: active (running) since Fri 2018-02-02 19:55:34 UTC; 8min ago
  227. Docs: https://containerd.io
  228. Process: 14527 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  229. Main PID: 14529 (containerd)
  230. Memory: 10.2M
  231. CGroup: /system.slice/containerd.service
  232. └─14529 /usr/local/bin/containerd
  233. ```
  234.  
  235.  
  236. kubelet status on worker:
  237. ```
  238. ● kubelet.service - Kubernetes Kubelet
  239. Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  240. Active: active (running) since Fri 2018-02-02 20:04:42 UTC; 5s ago
  241. Docs: https://github.com/kubernetes/kubernetes
  242. Main PID: 16694 (kubelet)
  243. Memory: 23.2M
  244. CGroup: /system.slice/kubelet.service
  245. └─16694 /usr/local/bin/kubelet --allow-privileged=true --anonymous-auth=false --authorization-mode=Webhook --client-ca-file=/var/lib/kubernetes/ca.pem --cloud-provider= --cluster-dns=10.32.0.10 --cluster-domain=cluster.local --container-runtime=remote --container-runtime-endpoint=unix:///var/run/cri-containerd.sock --image-pull-progress-deadline=2m --kubeconfig=/var/lib/kubelet/kubeconfig --network-plugin=cni --pod-cidr=10.200.1.0/24 --register-node=true --runtime-request-timeout=15m --tls-cert-file=/var/lib/kubelet/k8s-worker-1.pem --tls-private-key-file=/var/lib/kubelet/k8s-worker-1-key.pem --v=2
  246. ```
  247.  
  248.  
  249. containerd status for default:
  250. ```
  251. ● containerd.service - containerd container runtime
  252. Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: disabled)
  253. Active: active (running) since Fri 2018-02-02 19:55:34 UTC; 14min ago
  254. Docs: https://containerd.io
  255. Process: 14527 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  256. Main PID: 14529 (containerd)
  257. Memory: 10.2M
  258. CGroup: /system.slice/containerd.service
  259. └─14529 /usr/local/bin/containerd
  260. ```
  261.  
  262. containerd service file:
  263. ```
  264. [root@k8s-worker-1 vagrant]# cat /etc/systemd/system/containerd.service
  265. [Unit]
  266. Description=containerd container runtime
  267. Documentation=https://containerd.io
  268. After=network.target
  269.  
  270. [Service]
  271. ExecStartPre=/sbin/modprobe overlay
  272. ExecStart=/usr/local/bin/containerd
  273. Restart=always
  274. RestartSec=5
  275. Delegate=yes
  276. KillMode=process
  277. OOMScoreAdjust=-999
  278. LimitNOFILE=1048576
  279. # Having non-zero Limit*s causes performance problems due to accounting overhead
  280. # in the kernel. We recommend using cgroups to do container-local accounting.
  281. LimitNPROC=infinity
  282. LimitCORE=infinity
  283.  
  284. [Install]
  285. WantedBy=multi-user.target
  286. ```
  287.  
  288. kubelet service file:
  289. ```
  290. [root@k8s-worker-1 vagrant]# cat /etc/systemd/system/kubelet.service
  291. [Unit]
  292. Description=Kubernetes Kubelet
  293. Documentation=https://github.com/kubernetes/kubernetes
  294. After=cri-containerd.service
  295. Requires=cri-containerd.service
  296.  
  297. [Service]
  298. ExecStart=/usr/local/bin/kubelet \
  299. --allow-privileged=true \
  300. --anonymous-auth=false \
  301. --authorization-mode=Webhook \
  302. --client-ca-file=/var/lib/kubernetes/ca.pem \
  303. --cloud-provider= \
  304. --cluster-dns=10.32.0.10 \
  305. --cluster-domain=cluster.local \
  306. --container-runtime=remote \
  307. --container-runtime-endpoint=unix:///var/run/cri-containerd.sock \
  308. --image-pull-progress-deadline=2m \
  309. --kubeconfig=/var/lib/kubelet/kubeconfig \
  310. --network-plugin=cni \
  311. --pod-cidr=10.200.1.0/24 \
  312. --register-node=true \
  313. --runtime-request-timeout=15m \
  314. --tls-cert-file=/var/lib/kubelet/k8s-worker-1.pem \
  315. --tls-private-key-file=/var/lib/kubelet/k8s-worker-1-key.pem \
  316. --v=2
  317. Restart=on-failure
  318. RestartSec=5
  319.  
  320. [Install]
  321. WantedBy=multi-user.target
  322. ```
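The unit above depends on cri-containerd.service (After=/Requires=) and points the kubelet at /var/run/cri-containerd.sock, but the cri-containerd unit itself is not included in this paste. For comparison, a rough sketch of what such a unit typically looks like in this kind of setup; the binary path is an assumption, not copied from this host:
```
[Unit]
Description=cri-containerd
After=containerd.service
Requires=containerd.service

[Service]
# Path is an assumption for illustration; adjust to wherever cri-containerd is installed
ExecStart=/usr/local/bin/cri-containerd
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```
Checking `systemctl status cri-containerd` and `journalctl -u cri-containerd` alongside the containerd and kubelet output below would show whether that shim is the piece that is failing.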
  323.  
  324.  
  325. containerd output in the journal log:
  326. ```
  327. Feb 02 19:55:34 k8s-worker-1 systemd[1]: Starting containerd container runtime...
  328. Feb 02 19:55:34 k8s-worker-1 systemd[1]: Started containerd container runtime.
  329. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="starting containerd" module=containerd revision=6c7abf7c76c1973d4fb4b0bad51691de84869a51 version=v1.0.0-6-g6c7abf7
  330. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="setting subreaper..." module=containerd
  331. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." module=containerd type=io.containerd.content.v1
  332. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." module=containerd type=io.containerd.snapshotter.v1
  333. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module=containerd
  334. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." module=containerd type=io.containerd.snapshotter.v1
  335. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." module=containerd type=io.containerd.metadata.v1
  336. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module="containerd/io.containerd.metadata.v1.bolt"
  337. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." module=containerd type=io.containerd.differ.v1
  338. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." module=containerd type=io.containerd.gc.v1
  339. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." module=containerd type=io.containerd.grpc.v1
  340. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." module=containerd type=io.containerd.grpc.v1
  341. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." module=containerd type=io.containerd.grpc.v1
  342. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." module=containerd type=io.containerd.grpc.v1
  343. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." module=containerd type=io.containerd.grpc.v1
  344. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." module=containerd type=io.containerd.grpc.v1
  345. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." module=containerd type=io.containerd.grpc.v1
  346. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." module=containerd type=io.containerd.grpc.v1
  347. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." module=containerd type=io.containerd.grpc.v1
  348. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." module=containerd type=io.containerd.monitor.v1
  349. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." module=containerd type=io.containerd.runtime.v1
  350. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." module=containerd type=io.containerd.grpc.v1
  351. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." module=containerd type=io.containerd.grpc.v1
  352. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." module=containerd type=io.containerd.grpc.v1
  353. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg=serving... address="/run/containerd/debug.sock" module="containerd/debug"
  354. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg=serving... address="/run/containerd/containerd.sock" module="containerd/grpc"
  355. Feb 02 19:55:34 k8s-worker-1 containerd[14529]: time="2018-02-02T19:55:34Z" level=info msg="containerd successfully booted in 0.051302s" module=containerd
  356. ```
  357.  
  358.  
  359.  
  360. kubelet output in the journal log:
  361. ```
  362. Feb 02 20:06:41 k8s-worker-1 systemd[1]: Started Kubernetes Kubelet.
  363. Feb 02 20:06:41 k8s-worker-1 systemd[1]: Starting Kubernetes Kubelet...
  364. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314625 17146 flags.go:52] FLAG: --address="0.0.0.0"
  365. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314667 17146 flags.go:52] FLAG: --allow-privileged="true"
  366. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314671 17146 flags.go:52] FLAG: --alsologtostderr="false"
  367. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314675 17146 flags.go:52] FLAG: --anonymous-auth="false"
  368. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314677 17146 flags.go:52] FLAG: --application-metrics-count-limit="100"
  369. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314680 17146 flags.go:52] FLAG: --authentication-token-webhook="false"
  370. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314682 17146 flags.go:52] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
  371. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314687 17146 flags.go:52] FLAG: --authorization-mode="Webhook"
  372. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314691 17146 flags.go:52] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
  373. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314694 17146 flags.go:52] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
  374. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314697 17146 flags.go:52] FLAG: --azure-container-registry-config=""
  375. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314700 17146 flags.go:52] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
  376. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314703 17146 flags.go:52] FLAG: --bootstrap-checkpoint-path=""
  377. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314705 17146 flags.go:52] FLAG: --bootstrap-kubeconfig=""
  378. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314708 17146 flags.go:52] FLAG: --cadvisor-port="4194"
  379. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314712 17146 flags.go:52] FLAG: --cert-dir="/var/lib/kubelet/pki"
  380. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314715 17146 flags.go:52] FLAG: --cgroup-driver="cgroupfs"
  381. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314717 17146 flags.go:52] FLAG: --cgroup-root=""
  382. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314720 17146 flags.go:52] FLAG: --cgroups-per-qos="true"
  383. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314722 17146 flags.go:52] FLAG: --chaos-chance="0"
  384. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314727 17146 flags.go:52] FLAG: --client-ca-file="/var/lib/kubernetes/ca.pem"
  385. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314730 17146 flags.go:52] FLAG: --cloud-config=""
  386. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314732 17146 flags.go:52] FLAG: --cloud-provider=""
  387. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314734 17146 flags.go:52] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22"
  388. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314739 17146 flags.go:52] FLAG: --cluster-dns="[10.32.0.10]"
  389. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314745 17146 flags.go:52] FLAG: --cluster-domain="cluster.local"
  390. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314748 17146 flags.go:52] FLAG: --cni-bin-dir=""
  391. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314750 17146 flags.go:52] FLAG: --cni-conf-dir=""
  392. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314753 17146 flags.go:52] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
  393. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314756 17146 flags.go:52] FLAG: --container-runtime="remote"
  394. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314758 17146 flags.go:52] FLAG: --container-runtime-endpoint="unix:///var/run/cri-containerd.sock"
  395. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314761 17146 flags.go:52] FLAG: --containerd="unix:///var/run/containerd.sock"
  396. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314763 17146 flags.go:52] FLAG: --containerized="false"
  397. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314766 17146 flags.go:52] FLAG: --contention-profiling="false"
  398. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314768 17146 flags.go:52] FLAG: --cpu-cfs-quota="true"
  399. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314771 17146 flags.go:52] FLAG: --cpu-manager-policy="none"
  400. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314774 17146 flags.go:52] FLAG: --cpu-manager-reconcile-period="10s"
  401. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314776 17146 flags.go:52] FLAG: --docker="unix:///var/run/docker.sock"
  402. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314779 17146 flags.go:52] FLAG: --docker-disable-shared-pid="true"
  403. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314782 17146 flags.go:52] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
  404. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314785 17146 flags.go:52] FLAG: --docker-env-metadata-whitelist=""
  405. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314787 17146 flags.go:52] FLAG: --docker-only="false"
  406. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314790 17146 flags.go:52] FLAG: --docker-root="/var/lib/docker"
  407. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314792 17146 flags.go:52] FLAG: --docker-tls="false"
  408. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314795 17146 flags.go:52] FLAG: --docker-tls-ca="ca.pem"
  409. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314798 17146 flags.go:52] FLAG: --docker-tls-cert="cert.pem"
  410. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314800 17146 flags.go:52] FLAG: --docker-tls-key="key.pem"
  411. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314803 17146 flags.go:52] FLAG: --dynamic-config-dir=""
  412. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314807 17146 flags.go:52] FLAG: --enable-controller-attach-detach="true"
  413. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314809 17146 flags.go:52] FLAG: --enable-custom-metrics="false"
  414. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314812 17146 flags.go:52] FLAG: --enable-debugging-handlers="true"
  415. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314814 17146 flags.go:52] FLAG: --enable-load-reader="false"
  416. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314817 17146 flags.go:52] FLAG: --enable-server="true"
  417. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314819 17146 flags.go:52] FLAG: --enforce-node-allocatable="[pods]"
  418. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314822 17146 flags.go:52] FLAG: --event-burst="10"
  419. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314825 17146 flags.go:52] FLAG: --event-qps="5"
  420. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314827 17146 flags.go:52] FLAG: --event-storage-age-limit="default=0"
  421. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314830 17146 flags.go:52] FLAG: --event-storage-event-limit="default=0"
  422. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314832 17146 flags.go:52] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
  423. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314842 17146 flags.go:52] FLAG: --eviction-max-pod-grace-period="0"
  424. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314844 17146 flags.go:52] FLAG: --eviction-minimum-reclaim=""
  425. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314848 17146 flags.go:52] FLAG: --eviction-pressure-transition-period="5m0s"
  426. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314851 17146 flags.go:52] FLAG: --eviction-soft=""
  427. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314854 17146 flags.go:52] FLAG: --eviction-soft-grace-period=""
  428. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314856 17146 flags.go:52] FLAG: --exit-on-lock-contention="false"
  429. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314859 17146 flags.go:52] FLAG: --experimental-allocatable-ignore-eviction="false"
  430. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314861 17146 flags.go:52] FLAG: --experimental-allowed-unsafe-sysctls="[]"
  431. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314893 17146 flags.go:52] FLAG: --experimental-bootstrap-kubeconfig=""
  432. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314896 17146 flags.go:52] FLAG: --experimental-check-node-capabilities-before-mount="false"
  433. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314898 17146 flags.go:52] FLAG: --experimental-dockershim="false"
  434. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314901 17146 flags.go:52] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
  435. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314903 17146 flags.go:52] FLAG: --experimental-fail-swap-on="true"
  436. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314906 17146 flags.go:52] FLAG: --experimental-kernel-memcg-notification="false"
  437. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314908 17146 flags.go:52] FLAG: --experimental-mounter-path=""
  438. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314911 17146 flags.go:52] FLAG: --experimental-qos-reserved=""
  439. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314914 17146 flags.go:52] FLAG: --fail-swap-on="true"
  440. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314916 17146 flags.go:52] FLAG: --feature-gates=""
  441. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314920 17146 flags.go:52] FLAG: --file-check-frequency="20s"
  442. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314923 17146 flags.go:52] FLAG: --global-housekeeping-interval="1m0s"
  443. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314925 17146 flags.go:52] FLAG: --google-json-key=""
  444. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314928 17146 flags.go:52] FLAG: --hairpin-mode="promiscuous-bridge"
  445. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314930 17146 flags.go:52] FLAG: --healthz-bind-address="127.0.0.1"
  446. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314933 17146 flags.go:52] FLAG: --healthz-port="10248"
  447. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314936 17146 flags.go:52] FLAG: --host-ipc-sources="[*]"
  448. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314939 17146 flags.go:52] FLAG: --host-network-sources="[*]"
  449. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314943 17146 flags.go:52] FLAG: --host-pid-sources="[*]"
  450. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314945 17146 flags.go:52] FLAG: --hostname-override=""
  451. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314948 17146 flags.go:52] FLAG: --housekeeping-interval="10s"
  452. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314951 17146 flags.go:52] FLAG: --http-check-frequency="20s"
  453. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314953 17146 flags.go:52] FLAG: --image-gc-high-threshold="85"
  454. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314956 17146 flags.go:52] FLAG: --image-gc-low-threshold="80"
  455. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314958 17146 flags.go:52] FLAG: --image-pull-progress-deadline="2m0s"
  456. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314961 17146 flags.go:52] FLAG: --image-service-endpoint=""
  457. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314963 17146 flags.go:52] FLAG: --init-config-dir=""
  458. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314966 17146 flags.go:52] FLAG: --iptables-drop-bit="15"
  459. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314968 17146 flags.go:52] FLAG: --iptables-masquerade-bit="14"
  460. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314971 17146 flags.go:52] FLAG: --keep-terminated-pod-volumes="false"
  461. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314973 17146 flags.go:52] FLAG: --kube-api-burst="10"
  462. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314976 17146 flags.go:52] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
  463. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314979 17146 flags.go:52] FLAG: --kube-api-qps="5"
  464. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314981 17146 flags.go:52] FLAG: --kube-reserved=""
  465. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314984 17146 flags.go:52] FLAG: --kube-reserved-cgroup=""
  466. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314986 17146 flags.go:52] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
  467. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314989 17146 flags.go:52] FLAG: --kubelet-cgroups=""
  468. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314991 17146 flags.go:52] FLAG: --lock-file=""
  469. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314994 17146 flags.go:52] FLAG: --log-backtrace-at=":0"
  470. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.314997 17146 flags.go:52] FLAG: --log-cadvisor-usage="false"
  471. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315000 17146 flags.go:52] FLAG: --log-dir=""
  472. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315002 17146 flags.go:52] FLAG: --log-flush-frequency="5s"
  473. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315005 17146 flags.go:52] FLAG: --logtostderr="true"
  474. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315007 17146 flags.go:52] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
  475. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315011 17146 flags.go:52] FLAG: --make-iptables-util-chains="true"
  476. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315013 17146 flags.go:52] FLAG: --manifest-url=""
  477. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315016 17146 flags.go:52] FLAG: --manifest-url-header=""
  478. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315020 17146 flags.go:52] FLAG: --master-service-namespace="default"
  479. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315023 17146 flags.go:52] FLAG: --max-open-files="1000000"
  480. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315027 17146 flags.go:52] FLAG: --max-pods="110"
  481. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315030 17146 flags.go:52] FLAG: --maximum-dead-containers="-1"
  482. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315032 17146 flags.go:52] FLAG: --maximum-dead-containers-per-container="1"
  483. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315035 17146 flags.go:52] FLAG: --minimum-container-ttl-duration="0s"
  484. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315037 17146 flags.go:52] FLAG: --minimum-image-ttl-duration="2m0s"
  485. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315040 17146 flags.go:52] FLAG: --network-plugin="cni"
  486. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315044 17146 flags.go:52] FLAG: --network-plugin-mtu="0"
  487. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315048 17146 flags.go:52] FLAG: --node-ip=""
  488. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315051 17146 flags.go:52] FLAG: --node-labels=""
  489. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315057 17146 flags.go:52] FLAG: --node-status-update-frequency="10s"
  490. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315062 17146 flags.go:52] FLAG: --non-masquerade-cidr="10.0.0.0/8"
  491. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315065 17146 flags.go:52] FLAG: --oom-score-adj="-999"
  492. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315068 17146 flags.go:52] FLAG: --pod-cidr="10.200.1.0/24"
  493. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315070 17146 flags.go:52] FLAG: --pod-infra-container-image="gcr.io/google_containers/pause-amd64:3.0"
  494. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315073 17146 flags.go:52] FLAG: --pod-manifest-path=""
  495. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315076 17146 flags.go:52] FLAG: --pods-per-core="0"
  496. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315079 17146 flags.go:52] FLAG: --port="10250"
  497. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315081 17146 flags.go:52] FLAG: --protect-kernel-defaults="false"
  498. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315084 17146 flags.go:52] FLAG: --provider-id=""
  499. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315086 17146 flags.go:52] FLAG: --read-only-port="10255"
  500. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315089 17146 flags.go:52] FLAG: --really-crash-for-testing="false"
  501. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315092 17146 flags.go:52] FLAG: --register-node="true"
  502. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315095 17146 flags.go:52] FLAG: --register-schedulable="true"
  503. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315097 17146 flags.go:52] FLAG: --register-with-taints=""
  504. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315101 17146 flags.go:52] FLAG: --registry-burst="10"
  505. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315103 17146 flags.go:52] FLAG: --registry-qps="5"
  506. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315106 17146 flags.go:52] FLAG: --require-kubeconfig="false"
  507. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315109 17146 flags.go:52] FLAG: --resolv-conf="/etc/resolv.conf"
  508. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315111 17146 flags.go:52] FLAG: --rkt-api-endpoint="localhost:15441"
  509. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315114 17146 flags.go:52] FLAG: --rkt-path=""
  510. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315117 17146 flags.go:52] FLAG: --rkt-stage1-image=""
  511. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315119 17146 flags.go:52] FLAG: --root-dir="/var/lib/kubelet"
  512. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315122 17146 flags.go:52] FLAG: --rotate-certificates="false"
  513. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315124 17146 flags.go:52] FLAG: --runonce="false"
  514. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315127 17146 flags.go:52] FLAG: --runtime-cgroups=""
  515. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315129 17146 flags.go:52] FLAG: --runtime-request-timeout="15m0s"
  516. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315132 17146 flags.go:52] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
  517. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315135 17146 flags.go:52] FLAG: --serialize-image-pulls="true"
  518. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315137 17146 flags.go:52] FLAG: --stderrthreshold="2"
  519. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315140 17146 flags.go:52] FLAG: --storage-driver-buffer-duration="1m0s"
  520. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315143 17146 flags.go:52] FLAG: --storage-driver-db="cadvisor"
  521. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315145 17146 flags.go:52] FLAG: --storage-driver-host="localhost:8086"
  522. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315148 17146 flags.go:52] FLAG: --storage-driver-password="root"
  523. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315150 17146 flags.go:52] FLAG: --storage-driver-secure="false"
  524. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315153 17146 flags.go:52] FLAG: --storage-driver-table="stats"
  525. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315155 17146 flags.go:52] FLAG: --storage-driver-user="root"
  526. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315158 17146 flags.go:52] FLAG: --streaming-connection-idle-timeout="4h0m0s"
  527. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315160 17146 flags.go:52] FLAG: --sync-frequency="1m0s"
  528. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315163 17146 flags.go:52] FLAG: --system-cgroups=""
  529. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315166 17146 flags.go:52] FLAG: --system-reserved=""
  530. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315168 17146 flags.go:52] FLAG: --system-reserved-cgroup=""
  531. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315171 17146 flags.go:52] FLAG: --tls-cert-file="/var/lib/kubelet/k8s-worker-1.pem"
  532. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315173 17146 flags.go:52] FLAG: --tls-private-key-file="/var/lib/kubelet/k8s-worker-1-key.pem"
  533. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315176 17146 flags.go:52] FLAG: --v="2"
  534. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315179 17146 flags.go:52] FLAG: --version="false"
  535. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315184 17146 flags.go:52] FLAG: --vmodule=""
  536. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315187 17146 flags.go:52] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
  537. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315190 17146 flags.go:52] FLAG: --volume-stats-agg-period="1m0s"
  538. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315209 17146 feature_gate.go:220] feature gates: &{{} map[]}
  539. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315227 17146 controller.go:114] kubelet config controller: starting controller
  540. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.315231 17146 controller.go:118] kubelet config controller: validating combination of defaults and flags
  541. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.352139 17146 mount_linux.go:202] Detected OS with systemd
  542. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.354285 17146 server.go:182] Version: v1.9.0
  543. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.354310 17146 feature_gate.go:220] feature gates: &{{} map[]}
  544. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.354364 17146 plugins.go:101] No cloud provider specified.
  545. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.354371 17146 server.go:303] No cloud provider specified: "" from the config file: ""
  546. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.357031 17146 manager.go:151] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
  547. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.357954 17146 fs.go:139] Filesystem UUIDs: map[3bb37c5d-8f9e-4ea6-a103-c8b4a03c99f9:/dev/dm-0 764c16e1-5712-4212-8b34-de5f2d6f039d:/dev/dm-2 7ca96e9b-437c-42ed-895c-7be12796c8a0:/dev/sda1 d5272035-3c33-4816-a127-d19febbe1b4c:/dev/dm-1]
  548. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.357966 17146 fs.go:140] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 minor:17 fsType:tmpfs blockSize:0} /dev/mapper/centos-root:{mountpoint:/ major:253 minor:0 fsType:xfs blockSize:0} /dev/sda1:{mountpoint:/boot major:8 minor:1 fsType:xfs blockSize:0} /dev/mapper/centos-home:{mountpoint:/home major:253 minor:2 fsType:xfs blockSize:0}]
  549. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.358882 17146 manager.go:225] Machine: {NumCores:1 CpuFrequency:2904002 MemoryCapacity:1928765440 HugePages:[{PageSize:2048 NumPages:0}] MachineID:996415857b5549c38c6cd6912af487f2 SystemUUID:B2E09B4B-9CD9-493C-A94A-6220D7761C47 BootID:a4c19123-637f-4ba2-a145-ab62fd458d16 Filesystems:[{Device:tmpfs DeviceMajor:0 DeviceMinor:17 Capacity:964382720 Type:vfs Inodes:235445 HasInodes:true} {Device:/dev/mapper/centos-root DeviceMajor:253 DeviceMinor:0 Capacity:43985149952 Type:vfs Inodes:21487616 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:1063256064 Type:vfs Inodes:524288 HasInodes:true} {Device:/dev/mapper/centos-home DeviceMajor:253 DeviceMinor:2 Capacity:21472735232 Type:vfs Inodes:10489856 HasInodes:true}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:44006637568 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:2147483648 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:21483225088 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:68719476736 Scheduler:cfq}] NetworkDevices:[{Name:enp0s3 MacAddress:08:00:27:16:98:18 Speed:1000 Mtu:1500} {Name:enp0s8 MacAddress:08:00:27:8e:54:25 Speed:100 Mtu:1500} {Name:enp0s9 MacAddress:08:00:27:38:71:d7 Speed:100 Mtu:1500}] Topology:[{Id:0 Memory:2147016704 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:4194304 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
  550. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359082 17146 manager.go:231] Version: {KernelVersion:3.10.0-693.11.1.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:Unknown DockerAPIVersion:Unknown CadvisorVersion: CadvisorRevision:}
  551. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359356 17146 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
  552. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359525 17146 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
  553. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359531 17146 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
  554. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359597 17146 container_manager_linux.go:266] Creating device plugin manager: false
  555. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359639 17146 server.go:693] Using root directory: /var/lib/kubelet
  556. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.359662 17146 kubelet.go:313] Watching apiserver
  557. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: W0202 20:06:41.398258 17146 kubelet_network.go:132] Hairpin mode set to "promiscuous-bridge" but container runtime is "remote", ignoring
  558. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.398282 17146 kubelet.go:571] Hairpin mode set to "none"
  559. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.398493 17146 plugins.go:190] Loaded network plugin "cni"
  560. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.398523 17146 remote_runtime.go:43] Connecting to runtime service unix:///var/run/cri-containerd.sock
  561. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.401934 17146 kuberuntime_manager.go:186] Container runtime cri-containerd initialized, version: 1.0.0-beta.1, apiVersion: 0.0.0
  562. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.401988 17146 kuberuntime_manager.go:918] updating runtime config through cri with podcidr 10.200.1.0/24
  563. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.404840 17146 kubelet_network.go:196] Setting Pod CIDR: -> 10.200.1.0/24
  564. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405108 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/aws-ebs"
  565. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405121 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/empty-dir"
  566. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405131 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/gce-pd"
  567. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405141 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/git-repo"
  568. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405149 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/host-path"
  569. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405157 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/nfs"
  570. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405167 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/secret"
  571. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405175 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/iscsi"
  572. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405184 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/glusterfs"
  573. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405193 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/rbd"
  574. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405203 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/cinder"
  575. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405212 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/quobyte"
  576. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405219 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/cephfs"
  577. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405230 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/downward-api"
  578. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405239 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/fc"
  579. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405247 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/flocker"
  580. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405256 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/azure-file"
  581. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405266 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/configmap"
  582. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405275 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/vsphere-volume"
  583. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405284 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/azure-disk"
  584. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405292 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/photon-pd"
  585. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405300 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/projected"
  586. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405309 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/portworx-volume"
  587. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405320 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/scaleio"
  588. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405369 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/local-volume"
  589. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405380 17146 plugins.go:453] Loaded volume plugin "kubernetes.io/storageos"
  590. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.405497 17146 server.go:755] Started kubelet
  591. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.417564 17146 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
  592. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.418569 17146 server.go:129] Starting to listen on 0.0.0.0:10250
  593. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.419068 17146 server.go:299] Adding debug handlers to kubelet server.
  594. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.419933 17146 server.go:149] Starting to listen read-only on 0.0.0.0:10255
  595. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.421515 17146 kubelet_node_status.go:431] Recording NodeHasSufficientDisk event message for node k8s-worker-1
  596. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.421535 17146 kubelet_node_status.go:431] Recording NodeHasSufficientMemory event message for node k8s-worker-1
  597. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.421541 17146 kubelet_node_status.go:431] Recording NodeHasNoDiskPressure event message for node k8s-worker-1
  598. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.421984 17146 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
  599. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.421999 17146 status_manager.go:140] Starting to sync pod status with apiserver
  600. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.422009 17146 kubelet.go:1767] Starting kubelet main sync loop.
  601. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.422026 17146 kubelet.go:1778] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
  602. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: E0202 20:06:41.422690 17146 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
  603. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.422711 17146 volume_manager.go:245] The desired_state_of_world populator starts
  604. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.422715 17146 volume_manager.go:247] Starting Kubelet Volume Manager
  605. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: E0202 20:06:41.437026 17146 cri_stats_provider.go:219] Failed to get the info of the filesystem with id "3bb37c5d-8f9e-4ea6-a103-c8b4a03c99f9": cannot find device "/dev/dm-0" in partitions.
  606. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: E0202 20:06:41.437044 17146 kubelet.go:1275] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
  607. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.439557 17146 factory.go:136] Registering containerd factory
  608. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.439705 17146 factory.go:54] Registering systemd factory
  609. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.439892 17146 factory.go:86] Registering Raw factory
  610. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.440042 17146 manager.go:1178] Started watching for new ooms in manager
  611. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.443180 17146 manager.go:329] Starting recovery of all containers
  612. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.488397 17146 manager.go:334] Recovery completed
  613. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.567123 17146 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
  614. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.569173 17146 kubelet_node_status.go:431] Recording NodeHasSufficientDisk event message for node k8s-worker-1
  615. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.569194 17146 kubelet_node_status.go:431] Recording NodeHasSufficientMemory event message for node k8s-worker-1
  616. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.569204 17146 kubelet_node_status.go:431] Recording NodeHasNoDiskPressure event message for node k8s-worker-1
  617. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.569218 17146 kubelet_node_status.go:82] Attempting to register node k8s-worker-1
  618. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.578360 17146 kubelet_node_status.go:127] Node k8s-worker-1 was previously registered
  619. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.578397 17146 kubelet_node_status.go:85] Successfully registered node k8s-worker-1
  620. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.580040 17146 kuberuntime_manager.go:918] updating runtime config through cri with podcidr
  621. Feb 02 20:06:41 k8s-worker-1 kubelet[17146]: I0202 20:06:41.581013 17146 kubelet_network.go:196] Setting Pod CIDR: 10.200.1.0/24 ->
  622. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.422608 17146 kubelet.go:1836] SyncLoop (ADD, "api"): "kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818), weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)"
  623. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523192 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/08e4b826-0853-11e8-8adc-080027169818-kube-dns-config") pod "kube-dns-6c857864fb-ljmf8" (UID: "08e4b826-0853-11e8-8adc-080027169818")
  624. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523228 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-token-hmv74" (UniqueName: "kubernetes.io/secret/08e4b826-0853-11e8-8adc-080027169818-kube-dns-token-hmv74") pod "kube-dns-6c857864fb-ljmf8" (UID: "08e4b826-0853-11e8-8adc-080027169818")
  625. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523252 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "weavedb" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-weavedb") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  626. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523270 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-lib-modules") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  627. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523290 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "weave-net-token-m2k9v" (UniqueName: "kubernetes.io/secret/09b34091-0853-11e8-8adc-080027169818-weave-net-token-m2k9v") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  628. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523307 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-bin") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  629. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523324 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin2" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-bin2") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  630. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523341 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-conf" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-conf") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  631. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523358 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "dbus" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-dbus") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  632. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.523376 17146 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-xtables-lock") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  633. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624100 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "dbus" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-dbus") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  634. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624202 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-xtables-lock") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  635. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624289 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "cni-bin" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-bin") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  636. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624350 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "cni-bin2" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-bin2") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  637. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624403 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "cni-conf" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-conf") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  638. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624567 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-lib-modules") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  639. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624742 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "weave-net-token-m2k9v" (UniqueName: "kubernetes.io/secret/09b34091-0853-11e8-8adc-080027169818-weave-net-token-m2k9v") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  640. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624812 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/08e4b826-0853-11e8-8adc-080027169818-kube-dns-config") pod "kube-dns-6c857864fb-ljmf8" (UID: "08e4b826-0853-11e8-8adc-080027169818")
  641. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624874 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "kube-dns-token-hmv74" (UniqueName: "kubernetes.io/secret/08e4b826-0853-11e8-8adc-080027169818-kube-dns-token-hmv74") pod "kube-dns-6c857864fb-ljmf8" (UID: "08e4b826-0853-11e8-8adc-080027169818")
  642. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.624929 17146 reconciler.go:262] operationExecutor.MountVolume started for volume "weavedb" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-weavedb") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  643. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.625086 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "weavedb" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-weavedb") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  644. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.626295 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "dbus" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-dbus") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  645. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.626411 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-xtables-lock") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  646. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.626479 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "cni-bin" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-bin") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  647. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.626744 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "cni-bin2" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-bin2") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  648. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.626838 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "cni-conf" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-cni-conf") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  649. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.626910 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/09b34091-0853-11e8-8adc-080027169818-lib-modules") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  650. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.654230 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "weave-net-token-m2k9v" (UniqueName: "kubernetes.io/secret/09b34091-0853-11e8-8adc-080027169818-weave-net-token-m2k9v") pod "weave-net-xrr2k" (UID: "09b34091-0853-11e8-8adc-080027169818")
  651. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.656284 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/08e4b826-0853-11e8-8adc-080027169818-kube-dns-config") pod "kube-dns-6c857864fb-ljmf8" (UID: "08e4b826-0853-11e8-8adc-080027169818")
  652. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.657987 17146 operation_generator.go:522] MountVolume.SetUp succeeded for volume "kube-dns-token-hmv74" (UniqueName: "kubernetes.io/secret/08e4b826-0853-11e8-8adc-080027169818-kube-dns-token-hmv74") pod "kube-dns-6c857864fb-ljmf8" (UID: "08e4b826-0853-11e8-8adc-080027169818")
  653. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.730960 17146 kuberuntime_manager.go:385] No sandbox for pod "kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818)" can be found. Need to start a new one
  654. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: I0202 20:06:46.747170 17146 kuberuntime_manager.go:385] No sandbox for pod "weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)" can be found. Need to start a new one
  655. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:46 transport: http2Client.notifyError got notified that the client transport was broken EOF.
  656. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:46 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
  657. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:46 transport: http2Client.notifyError got notified that the client transport was broken read unix @->/var/run/cri-containerd.sock: read: connection reset by peer.
  658. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:46 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
  659. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.763872 17146 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Internal desc = transport is closing
  660. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.763935 17146 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)" failed: rpc error: code = Internal desc = transport is closing
  661. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.763978 17146 kuberuntime_manager.go:647] createPodSandbox for pod "weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)" failed: rpc error: code = Internal desc = transport is closing
  662. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.764033 17146 pod_workers.go:186] Error syncing pod 09b34091-0853-11e8-8adc-080027169818 ("weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)"), skipping: failed to "CreatePodSandbox" for "weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)" with CreatePodSandboxError: "CreatePodSandbox for pod \"weave-net-xrr2k_kube-system(09b34091-0853-11e8-8adc-080027169818)\" failed: rpc error: code = Internal desc = transport is closing"
  663. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.764479 17146 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Internal desc = transport is closing
  664. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.764498 17146 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818)" failed: rpc error: code = Internal desc = transport is closing
  665. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.764504 17146 kuberuntime_manager.go:647] createPodSandbox for pod "kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818)" failed: rpc error: code = Internal desc = transport is closing
  666. Feb 02 20:06:46 k8s-worker-1 kubelet[17146]: E0202 20:06:46.764526 17146 pod_workers.go:186] Error syncing pod 08e4b826-0853-11e8-8adc-080027169818 ("kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818)"), skipping: failed to "CreatePodSandbox" for "kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-6c857864fb-ljmf8_kube-system(08e4b826-0853-11e8-8adc-080027169818)\" failed: rpc error: code = Internal desc = transport is closing"
  667. Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.429719 17146 remote_runtime.go:169] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  668. Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.429857 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  669. Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.429879 17146 kubelet_pods.go:1045] Error listing containers: &status.statusError{Code:14, Message:"grpc: the connection is unavailable", Details:[]*any.Any(nil)}
  670. Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.429931 17146 kubelet.go:1925] Failed cleaning pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  671. Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.562905 17146 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  672. Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.562979 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  673. Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: E0202 20:06:47.563006 17146 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  674. Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:47 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
  675. Feb 02 20:06:47 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:47 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
  676. Feb 02 20:06:48 k8s-worker-1 kubelet[17146]: E0202 20:06:48.563841 17146 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  677. Feb 02 20:06:48 k8s-worker-1 kubelet[17146]: E0202 20:06:48.563919 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  678. Feb 02 20:06:48 k8s-worker-1 kubelet[17146]: E0202 20:06:48.563949 17146 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  679. Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:49 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
  680. Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.432987 17146 remote_runtime.go:169] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  681. Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.433124 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  682. Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.433165 17146 kubelet_pods.go:1029] Error listing containers: &status.statusError{Code:14, Message:"grpc: the connection is unavailable", Details:[]*any.Any(nil)}
  683. Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.433220 17146 kubelet.go:1925] Failed cleaning pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  684. Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:49 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
  685. Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.564550 17146 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  686. Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.564660 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  687. Feb 02 20:06:49 k8s-worker-1 kubelet[17146]: E0202 20:06:49.564692 17146 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  688. Feb 02 20:06:50 k8s-worker-1 kubelet[17146]: E0202 20:06:50.565029 17146 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  689. Feb 02 20:06:50 k8s-worker-1 kubelet[17146]: E0202 20:06:50.565106 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  690. Feb 02 20:06:50 k8s-worker-1 kubelet[17146]: E0202 20:06:50.565136 17146 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  691. Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.429442 17146 remote_runtime.go:169] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  692. Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.429527 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  693. Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.429556 17146 kubelet_pods.go:1029] Error listing containers: &status.statusError{Code:14, Message:"grpc: the connection is unavailable", Details:[]*any.Any(nil)}
  694. Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.429595 17146 kubelet.go:1925] Failed cleaning pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  695. Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.546988 17146 remote_runtime.go:434] Status from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  696. Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.547070 17146 kubelet.go:2089] Container runtime sanity check failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  697. Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.565719 17146 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  698. Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.565808 17146 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  699. Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.565841 17146 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  700. Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.598123 17146 remote_runtime.go:69] Version from runtime service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  701. Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: E0202 20:06:51.598189 17146 kuberuntime_manager.go:245] Get remote runtime version failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  702. Feb 02 20:06:51 k8s-worker-1 kubelet[17146]: 2018/02/02 20:06:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/cri-containerd.sock: connect: connection refused"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
  703. Feb 02 20:06:51 k8s-worker-1 systemd[1]: Stopping Kubernetes Kubelet...
  704. ```