- Jan 6 16:42:39 kubernetes-master2 systemd[1]: Stopping System Logging Service...
- Jan 6 16:42:39 kubernetes-master2 rsyslogd: [origin software="rsyslogd" swVersion="8.2001.0" x-pid="808" x-info="https://www.rsyslog.com"] exiting on signal 15.
- Jan 6 16:42:39 kubernetes-master2 systemd[1]: rsyslog.service: Succeeded.
- Jan 6 16:42:39 kubernetes-master2 systemd[1]: Stopped System Logging Service.
- Jan 6 16:42:39 kubernetes-master2 systemd[1]: Starting System Logging Service...
- Jan 6 16:42:39 kubernetes-master2 systemd[1]: Started System Logging Service.
- Jan 6 16:42:39 kubernetes-master2 rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd. [v8.2001.0]
- Jan 6 16:42:39 kubernetes-master2 rsyslogd: rsyslogd's groupid changed to 110
- Jan 6 16:42:39 kubernetes-master2 rsyslogd: rsyslogd's userid changed to 104
- Jan 6 16:42:39 kubernetes-master2 rsyslogd: [origin software="rsyslogd" swVersion="8.2001.0" x-pid="1466" x-info="https://www.rsyslog.com"] start
- Jan 6 16:42:49 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
- Jan 6 16:42:49 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:42:49 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: F0106 16:42:50.013400 1482 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: goroutine 1 [running]:
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136840, 0xfb, 0x14d)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc00089bf10, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc00005ef20, 0x1, 0x1)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000374dc0, 0xc000124010, 0x3, 0x3)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000374dc0, 0xc000124010, 0x3, 0x3, 0xc000374dc0, 0xc000124010)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000374dc0, 0x1657aef78a87b163, 0x70c9020, 0x409b25)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: main.main()
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: goroutine 19 [chan receive]:
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: goroutine 80 [select]:
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00056e5a0)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: goroutine 95 [select]:
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc00030b860, 0x1, 0xc0001000c0)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000a60301, 0xc0001000c0)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:42:50 kubernetes-master2 kubelet[1482]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:42:50 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:42:50 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:43:00 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
- Jan 6 16:43:00 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:00 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: F0106 16:43:00.248154 1515 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: goroutine 1 [running]:
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136b00, 0xfb, 0x14d)
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000c04000, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000be97f0, 0x1, 0x1)
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000bd6000, 0xc000124010, 0x3, 0x3)
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000bd6000, 0xc000124010, 0x3, 0x3, 0xc000bd6000, 0xc000124010)
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000bd6000, 0x1657aef9ec8296c8, 0x70c9020, 0x409b25)
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: main.main()
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: goroutine 19 [chan receive]:
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: goroutine 47 [runnable]:
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:80
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: goroutine 86 [select]:
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00007f1d0)
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:43:00 kubernetes-master2 kubelet[1515]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:43:00 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:43:00 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:43:10 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
- Jan 6 16:43:10 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:10 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: F0106 16:43:10.503302 1554 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: goroutine 1 [running]:
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc0000be840, 0xfb, 0x14d)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0002081c0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0001ef990, 0x1, 0x1)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc00059cb00, 0xc00004e090, 0x3, 0x3)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00059cb00, 0xc00004e090, 0x3, 0x3, 0xc00059cb00, 0xc00004e090)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00059cb00, 0x1657aefc4fc7bdbf, 0x70c9020, 0x409b25)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: main.main()
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: goroutine 6 [chan receive]:
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: goroutine 82 [select]:
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0008c0370)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: goroutine 102 [select]:
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc0003ae9c0, 0x1, 0xc0000a00c0)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000876101, 0xc0000a00c0)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:43:10 kubernetes-master2 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:43:10 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:43:10 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:43:20 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
- Jan 6 16:43:20 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:20 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: F0106 16:43:20.755973 1583 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: goroutine 1 [running]:
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136840, 0xfb, 0x14d)
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000aeaee0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000bea060, 0x1, 0x1)
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000416840, 0xc000124010, 0x3, 0x3)
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000416840, 0xc000124010, 0x3, 0x3, 0xc000416840, 0xc000124010)
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000416840, 0x1657aefeb2e1add0, 0x70c9020, 0x409b25)
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: main.main()
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: goroutine 19 [chan receive]:
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: goroutine 78 [select]:
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000659130)
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: goroutine 97 [runnable]:
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:80
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:43:20 kubernetes-master2 kubelet[1583]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:43:20 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:43:20 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:43:30 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
- Jan 6 16:43:30 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:30 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: F0106 16:43:31.011442 1621 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: goroutine 1 [running]:
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc0000be840, 0xfb, 0x14d)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000222b60, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0003fa870, 0x1, 0x1)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc00052bb80, 0xc00004e090, 0x3, 0x3)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00052bb80, 0xc00004e090, 0x3, 0x3, 0xc00052bb80, 0xc00004e090)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00052bb80, 0x1657af011632ebb3, 0x70c9020, 0x409b25)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: main.main()
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: goroutine 6 [chan receive]:
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: goroutine 85 [select]:
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000175950)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: goroutine 100 [select]:
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc0002ac810, 0x1, 0xc0000a00c0)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000171c01, 0xc0000a00c0)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:43:31 kubernetes-master2 kubelet[1621]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:43:31 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:43:31 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:43:41 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
- Jan 6 16:43:41 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:41 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: F0106 16:43:41.261970 1652 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: goroutine 1 [running]:
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc0006ea000, 0xfb, 0x14d)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc00045b260, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000783330, 0x1, 0x1)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000ad7600, 0xc000124010, 0x3, 0x3)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000ad7600, 0xc000124010, 0x3, 0x3, 0xc000ad7600, 0xc000124010)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000ad7600, 0x1657af03792cf27c, 0x70c9020, 0x409b25)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: main.main()
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: goroutine 19 [chan receive]:
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: goroutine 49 [select]:
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0007942d0)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: goroutine 96 [select]:
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc00099e000, 0x1, 0xc0001000c0)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000178d01, 0xc0001000c0)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:43:41 kubernetes-master2 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:43:41 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:43:41 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:43:51 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
- Jan 6 16:43:51 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:51 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: F0106 16:43:51.510493 1685 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: goroutine 1 [running]:
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc0000be840, 0xfb, 0x14d)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000828150, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0001ff250, 0x1, 0x1)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0007c02c0, 0xc00004e090, 0x3, 0x3)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0007c02c0, 0xc00004e090, 0x3, 0x3, 0xc0007c02c0, 0xc00004e090)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0007c02c0, 0x1657af05dc099e4a, 0x70c9020, 0x409b25)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: main.main()
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: goroutine 6 [chan receive]:
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: goroutine 81 [select]:
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc00031ea80, 0x1, 0xc0000a00c0)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000ad8901, 0xc0000a00c0)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: goroutine 93 [select]:
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00007e370)
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:43:51 kubernetes-master2 kubelet[1685]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:43:51 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:43:51 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:44:01 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
- Jan 6 16:44:01 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:01 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: F0106 16:44:01.740424 1717 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: goroutine 1 [running]:
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000c4001, 0xc0000d6840, 0xfb, 0x14d)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000b781c0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000b8d460, 0x1, 0x1)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc00069d8c0, 0xc0000c6010, 0x3, 0x3)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00069d8c0, 0xc0000c6010, 0x3, 0x3, 0xc00069d8c0, 0xc0000c6010)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00069d8c0, 0x1657af083dc90d5f, 0x70c9020, 0x409b25)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: main.main()
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: goroutine 19 [chan receive]:
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: goroutine 82 [select]:
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00075c0f0)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: goroutine 97 [select]:
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc000402480, 0x1, 0xc0000a20c0)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000aace01, 0xc0000a20c0)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:44:01 kubernetes-master2 kubelet[1717]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:44:01 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:44:01 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:44:11 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
- Jan 6 16:44:11 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:11 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: F0106 16:44:12.016281 1750 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: goroutine 1 [running]:
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000c4001, 0xc0000d6840, 0xfb, 0x14d)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc00090c690, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0002928c0, 0x1, 0x1)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0005a7080, 0xc0000c6010, 0x3, 0x3)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0005a7080, 0xc0000c6010, 0x3, 0x3, 0xc0005a7080, 0xc0000c6010)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0005a7080, 0x1657af0aa24855b9, 0x70c9020, 0x409b25)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: main.main()
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: goroutine 19 [chan receive]:
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: goroutine 101 [select]:
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc000a59d40, 0x1, 0xc0000a20c0)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000aa2001, 0xc0000a20c0)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: goroutine 92 [select]:
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000277810)
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:44:12 kubernetes-master2 kubelet[1750]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:44:12 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:44:12 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:44:22 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
- Jan 6 16:44:22 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:22 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: F0106 16:44:22.262328 1783 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: goroutine 1 [running]:
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000c4001, 0xc0000d6840, 0xfb, 0x14d)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000247c70, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc00052d890, 0x1, 0x1)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc00023fb80, 0xc0000c6010, 0x3, 0x3)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00023fb80, 0xc0000c6010, 0x3, 0x3, 0xc00023fb80, 0xc0000c6010)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00023fb80, 0x1657af0d04fd3106, 0x70c9020, 0x409b25)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: main.main()
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: goroutine 19 [chan receive]:
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: goroutine 89 [select]:
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0000a0cd0)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: goroutine 103 [select]:
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc0007c6000, 0x1, 0xc0000a20c0)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000184701, 0xc0000a20c0)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:44:22 kubernetes-master2 kubelet[1783]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:44:22 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:44:22 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:44:32 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15.
- Jan 6 16:44:32 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:32 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: F0106 16:44:32.516344 1815 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: goroutine 1 [running]:
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000c4001, 0xc0000d6840, 0xfb, 0x14d)
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000bbdd50, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000bfa010, 0x1, 0x1)
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0002cb080, 0xc0000c6010, 0x3, 0x3)
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0002cb080, 0xc0000c6010, 0x3, 0x3, 0xc0002cb080, 0xc0000c6010)
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0002cb080, 0x1657af0f681dd65e, 0x70c9020, 0x409b25)
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: main.main()
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: goroutine 19 [chan receive]:
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: goroutine 101 [runnable]:
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:80
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: goroutine 86 [select]:
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00007efa0)
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:44:32 kubernetes-master2 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:44:32 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:44:32 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:44:42 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16.
- Jan 6 16:44:42 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:42 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: F0106 16:44:42.757367 1849 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: goroutine 1 [running]:
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136840, 0xfb, 0x14d)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0006bccb0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0002dba50, 0x1, 0x1)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0000cadc0, 0xc000124010, 0x3, 0x3)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0000cadc0, 0xc000124010, 0x3, 0x3, 0xc0000cadc0, 0xc000124010)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000cadc0, 0x1657af11ca96fa88, 0x70c9020, 0x409b25)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: main.main()
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: goroutine 19 [chan receive]:
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: goroutine 88 [select]:
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00059e320)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: goroutine 12 [select]:
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc0003d78f0, 0x1, 0xc0001000c0)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc00029e201, 0xc0001000c0)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:44:42 kubernetes-master2 kubelet[1849]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:44:42 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:44:42 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:44:52 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17.
- Jan 6 16:44:52 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:52 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: F0106 16:44:53.004587 1887 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: goroutine 1 [running]:
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000c4001, 0xc0000d6840, 0xfb, 0x14d)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000260e70, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0002759a0, 0x1, 0x1)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000738dc0, 0xc0000c6010, 0x3, 0x3)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000738dc0, 0xc0000c6010, 0x3, 0x3, 0xc000738dc0, 0xc0000c6010)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000738dc0, 0x1657af142d5d1391, 0x70c9020, 0x409b25)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: main.main()
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: goroutine 19 [chan receive]:
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: goroutine 92 [select]:
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00047b630)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: goroutine 97 [select]:
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc000a3ecf0, 0x1, 0xc0000a20c0)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000051701, 0xc0000a20c0)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:44:53 kubernetes-master2 kubelet[1887]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:44:53 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:44:53 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:45:03 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18.
- Jan 6 16:45:03 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:03 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: F0106 16:45:03.250172 1924 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: goroutine 1 [running]:
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000c4001, 0xc0000d6840, 0xfb, 0x14d)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000962310, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0002d2ad0, 0x1, 0x1)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc00028a000, 0xc0000c6010, 0x3, 0x3)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00028a000, 0xc0000c6010, 0x3, 0x3, 0xc00028a000, 0xc0000c6010)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00028a000, 0x1657af16900f7a40, 0x70c9020, 0x409b25)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: main.main()
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: goroutine 19 [chan receive]:
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: goroutine 82 [select]:
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0000a09b0)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: goroutine 94 [select]:
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc00063c210, 0x1, 0xc0000a20c0)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000a42001, 0xc0000a20c0)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:45:03 kubernetes-master2 kubelet[1924]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:45:03 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:45:03 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:45:13 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19.
- Jan 6 16:45:13 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:13 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: F0106 16:45:13.513644 1956 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: goroutine 1 [running]:
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000c4001, 0xc000aba000, 0xfb, 0x14d)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0008f58f0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000371a00, 0x1, 0x1)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0008ed8c0, 0xc0000c6010, 0x3, 0x3)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0008ed8c0, 0xc0000c6010, 0x3, 0x3, 0xc0008ed8c0, 0xc0000c6010)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0008ed8c0, 0x1657af18f3cd72c1, 0x70c9020, 0x409b25)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: main.main()
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: goroutine 19 [chan receive]:
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: goroutine 83 [select]:
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0000a0c80)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: goroutine 93 [select]:
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc0008d1e30, 0x1, 0xc0000a20c0)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000824301, 0xc0000a20c0)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:45:13 kubernetes-master2 kubelet[1956]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:45:13 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:45:13 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:45:23 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
- Jan 6 16:45:23 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:23 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: F0106 16:45:23.750633 1992 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: goroutine 1 [running]:
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000cc001, 0xc0000de840, 0xfb, 0x14d)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0009023f0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc00066b8f0, 0x1, 0x1)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000243080, 0xc0000ce010, 0x3, 0x3)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000243080, 0xc0000ce010, 0x3, 0x3, 0xc000243080, 0xc0000ce010)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000243080, 0x1657af1b55fa601a, 0x70c9020, 0x409b25)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: main.main()
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: goroutine 19 [chan receive]:
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: goroutine 85 [select]:
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000594690)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: goroutine 65 [semacquire]:
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: sync.runtime_SemacquireMutex(0x70c947c, 0x409800, 0x1)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/usr/local/go/src/runtime/sema.go:71 +0x47
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: sync.(*Mutex).lockSlow(0x70c9478)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/usr/local/go/src/sync/mutex.go:138 +0x105
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: sync.(*Mutex).Lock(...)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/usr/local/go/src/sync/mutex.go:81
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).lockAndFlushAll(0x70c9460)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1176 +0x85
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Flush()
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:445 +0x2d
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x4a721f0)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc000926420, 0x1, 0xc0000b00c0)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000abc001, 0xc0000b00c0)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:45:23 kubernetes-master2 kubelet[1992]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:45:23 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:45:23 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
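
For reading these traces: the line beginning with F (e.g. F0106 16:45:23.750633 1992 server.go:198]) is klog's fatal-severity header followed by month/day, time, the process ID (matching kubelet[1992] in the syslog prefix), and the source location of the log call; everything from "goroutine 1 [running]:" onward is the goroutine dump klog prints before the process exits, and the #011 sequences are rsyslog's escaping of the tab that indents each frame's file:line. Because the dump is identical on every restart, a short script summarizes the loop faster than reading it line by line. A sketch, assuming the paste has been saved to a local file; the filename and regexes are illustrative, matched to the line shapes visible here.

    #!/usr/bin/env python3
    """Summarize a kubelet crash loop from a saved syslog excerpt."""
    import re
    import sys

    # Matches klog fatal lines: pid, source location, and the message text.
    FATAL = re.compile(r"kubelet\[(\d+)\]: F\d{4} [\d:.]+\s+\d+ (\S+)\] (.+)")
    # Matches systemd's "restart counter is at N" lines.
    RESTART = re.compile(r"restart counter is at (\d+)")

    def main(path: str) -> None:
        fatals, last_counter = [], None
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                if (m := FATAL.search(line)):
                    fatals.append((m.group(1), m.group(2), m.group(3)))
                if (m := RESTART.search(line)):
                    last_counter = int(m.group(1))

        print(f"fatal kubelet exits seen: {len(fatals)}")
        print(f"highest restart counter:  {last_counter}")
        if fatals:
            pid, location, message = fatals[-1]
            print(f"last failure (pid {pid}) at {location}: {message}")

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log")

Run against a saved copy of this excerpt, it would report the number of fatal exits, the highest restart counter seen so far (20 at this point in the log), and the unchanged "failed to load Kubelet config file /var/lib/kubelet/config.yaml" message.
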
- Jan 6 16:45:33 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21.
- Jan 6 16:45:33 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:33 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: F0106 16:45:34.006907 2028 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: goroutine 1 [running]:
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc0000be840, 0xfb, 0x14d)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc00090a230, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0002fc600, 0x1, 0x1)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc00048edc0, 0xc00004e090, 0x3, 0x3)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00048edc0, 0xc00004e090, 0x3, 0x3, 0xc00048edc0, 0xc00004e090)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00048edc0, 0x1657af1db94dfef5, 0x70c9020, 0x409b25)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: main.main()
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: goroutine 6 [chan receive]:
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: goroutine 80 [select]:
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000923680)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: goroutine 109 [select]:
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc00085c7b0, 0x1, 0xc0000a00c0)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000ab0101, 0xc0000a00c0)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:45:34 kubernetes-master2 kubelet[2028]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:45:34 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:45:34 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:45:44 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22.
- Jan 6 16:45:44 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:44 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: F0106 16:45:44.258804 2055 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: goroutine 1 [running]:
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136840, 0xfb, 0x14d)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0008c6c40, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0004927e0, 0x1, 0x1)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000416dc0, 0xc000124010, 0x3, 0x3)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000416dc0, 0xc000124010, 0x3, 0x3, 0xc000416dc0, 0xc000124010)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000416dc0, 0x1657af201c5c62e1, 0x70c9020, 0x409b25)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: main.main()
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: goroutine 19 [chan receive]:
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: goroutine 98 [select]:
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc000a5f860, 0x1, 0xc0001000c0)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000a86001, 0xc0001000c0)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: goroutine 83 [select]:
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0008cb4a0)
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:45:44 kubernetes-master2 kubelet[2055]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:45:44 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:45:44 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:45:54 kubernetes-master2 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 23.
- Jan 6 16:45:54 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:54 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: F0106 16:45:54.511915 2134 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: goroutine 1 [running]:
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136840, 0xfb, 0x14d)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0008d4690, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0002bff10, 0x1, 0x1)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0000d9b80, 0xc000124010, 0x3, 0x3)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0000d9b80, 0xc000124010, 0x3, 0x3, 0xc0000d9b80, 0xc000124010)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000d9b80, 0x1657af227f7f6711, 0x70c9020, 0x409b25)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: main.main()
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: goroutine 19 [chan receive]:
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: goroutine 94 [select]:
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000579220)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: goroutine 104 [select]:
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc000a1f860, 0x1, 0xc0001000c0)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc0001fc901, 0xc0001000c0)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:45:54 kubernetes-master2 kubelet[2134]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:45:54 kubernetes-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:45:54 kubernetes-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:46:03 kubernetes-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:51:30 kubernetes-master2 systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:51:30 kubernetes-master2 systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:51:30 kubernetes-master2 systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:51:30 kubernetes-master2 systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:51:33 kubernetes-master2 systemd[1]: Reloading.
- Jan 6 16:51:33 kubernetes-master2 systemd[1]: /lib/systemd/system/dbus.socket:5: ListenStream= references a path below legacy directory /var/run/, updating /var/run/dbus/system_bus_socket → /run/dbus/system_bus_socket; please update the unit file accordingly.
- Jan 6 16:51:33 kubernetes-master2 systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:51:33 kubernetes-master2 systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:51:33 kubernetes-master2 systemd[1]: /lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
- Jan 6 16:51:34 kubernetes-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:51:35 kubernetes-master2 systemd[1]: Started Kubernetes systemd probe.
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.368273 2785 server.go:416] Version: v1.20.1
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.370196 2785 server.go:837] Client rotation is on, will bootstrap in background
- Jan 6 16:51:35 kubernetes-master2 systemd[1]: run-r91747833257740c6bb67bbadee1fbaea.scope: Succeeded.
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.384168 2785 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.488151 2785 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.488682 2785 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.488735 2785 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.488950 2785 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.488986 2785 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.488995 2785 container_manager_linux.go:315] Creating device plugin manager: true
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: W0106 16:51:35.489189 2785 kubelet.go:297] Using dockershim is deprecated, please consider using a full-fledged CRI implementation
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.491011 2785 client.go:77] Connecting to docker on unix:///var/run/docker.sock
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.491039 2785 client.go:94] Start docker client with request timeout=2m0s
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: W0106 16:51:35.501611 2785 docker_service.go:559] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.501662 2785 docker_service.go:240] Hairpin mode set to "hairpin-veth"
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: W0106 16:51:35.501789 2785 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: W0106 16:51:35.504500 2785 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: W0106 16:51:35.504625 2785 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.504649 2785 docker_service.go:255] Docker cri networking managed by cni
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.515391 2785 docker_service.go:260] Docker Info: &{ID:ZJPZ:5PX5:H3KS:7XMT:AT4K:IYXO:V2DE:K6Q5:RRDJ:46NY:7ZZV:U2TQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2021-01-06T16:51:35.505737405+01:00 LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.4.0-59-generic OperatingSystem:Ubuntu 20.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0007c9e30 NCPU:2 MemTotal:4127334400 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes-master2 Labels:[] ExperimentalBuild:false ServerVersion:19.03.11 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support]}
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.515492 2785 docker_service.go:273] Setting cgroupDriver to systemd
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.528281 2785 remote_runtime.go:62] parsed scheme: ""
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.528306 2785 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.528374 2785 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.528417 2785 clientconn.go:948] ClientConn switching balancer to "pick_first"
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.528459 2785 remote_image.go:50] parsed scheme: ""
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.528467 2785 remote_image.go:50] scheme "" not registered, fallback to default scheme
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.528478 2785 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.528484 2785 clientconn.go:948] ClientConn switching balancer to "pick_first"
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.528522 2785 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.529092 2785 kubelet.go:273] Watching apiserver
- Jan 6 16:51:35 kubernetes-master2 kubelet[2785]: I0106 16:51:35.558291 2785 kuberuntime_manager.go:216] Container runtime docker initialized, version: 19.03.11, apiVersion: 1.40.0
- Jan 6 16:51:40 kubernetes-master2 kubelet[2785]: W0106 16:51:40.505032 2785 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
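The repeated "Unable to update cni config: no networks found in /etc/cni/net.d" warnings (and the "Container runtime network not ready ... cni config uninitialized" error a little further down) only mean that no CNI plugin has written a network configuration on this node yet; the node reports NotReady until the calico-node pod whose volumes are mounted near the end of this log drops one in. A rough stdlib-only sketch of that check, assuming the standard /etc/cni/net.d location; this is an illustration, not the kubelet's cni.go:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory the kubelet/dockershim watches for CNI network configs.
	confDir := "/etc/cni/net.d"

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}

	found := false
	for _, e := range entries {
		// CNI accepts .conf, .conflist and .json configuration files.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(confDir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no networks found in", confDir, "- node will report NotReady")
	}
}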
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.797843 2785 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: #011For verbose messaging see aws.Config.CredentialsChainVerboseErrors
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.803517 2785 kubelet.go:1271] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.804206 2785 server.go:1176] Started kubelet
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.805853 2785 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.809265 2785 server.go:148] Starting to listen on 0.0.0.0:10250
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.816413 2785 server.go:409] Adding debug handlers to kubelet server.
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.830111 2785 volume_manager.go:271] Starting Kubelet Volume Manager
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.834165 2785 desired_state_of_world_populator.go:142] Desired state populator starts to run
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.845815 2785 nodelease.go:49] failed to get node "kubernetes-master2" when trying to set owner ref to the node lease: nodes "kubernetes-master2" not found
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.848891 2785 remote_runtime.go:332] ContainerStatus "137678657b9cc5778c0ae1f9314ea6e13bf3347f251422fa01043c6ec24113df" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 137678657b9cc5778c0ae1f9314ea6e13bf3347f251422fa01043c6ec24113df
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.849439 2785 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "137678657b9cc5778c0ae1f9314ea6e13bf3347f251422fa01043c6ec24113df": rpc error: code = Unknown desc = Error: No such container: 137678657b9cc5778c0ae1f9314ea6e13bf3347f251422fa01043c6ec24113df
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.854017 2785 remote_runtime.go:332] ContainerStatus "3cbee2db6031750923880748c7d7d1cd93aaf23e9ef3e98787e906af7d463d43" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 3cbee2db6031750923880748c7d7d1cd93aaf23e9ef3e98787e906af7d463d43
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.855246 2785 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "3cbee2db6031750923880748c7d7d1cd93aaf23e9ef3e98787e906af7d463d43": rpc error: code = Unknown desc = Error: No such container: 3cbee2db6031750923880748c7d7d1cd93aaf23e9ef3e98787e906af7d463d43
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.856852 2785 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.868318 2785 remote_runtime.go:332] ContainerStatus "564086979f56f211c09973f14dbb52857e964f7a89c06f30abdca2dc8ded7630" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 564086979f56f211c09973f14dbb52857e964f7a89c06f30abdca2dc8ded7630
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.868845 2785 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "564086979f56f211c09973f14dbb52857e964f7a89c06f30abdca2dc8ded7630": rpc error: code = Unknown desc = Error: No such container: 564086979f56f211c09973f14dbb52857e964f7a89c06f30abdca2dc8ded7630
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.870957 2785 remote_runtime.go:332] ContainerStatus "6a891a6e9c1e4975d64a2812d9554b5dbea4b5ffa6c476ffd95517adafa25c09" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 6a891a6e9c1e4975d64a2812d9554b5dbea4b5ffa6c476ffd95517adafa25c09
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.871360 2785 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "6a891a6e9c1e4975d64a2812d9554b5dbea4b5ffa6c476ffd95517adafa25c09": rpc error: code = Unknown desc = Error: No such container: 6a891a6e9c1e4975d64a2812d9554b5dbea4b5ffa6c476ffd95517adafa25c09
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.876038 2785 remote_runtime.go:332] ContainerStatus "693c224aeed4c119906c6dc661d69bbb3d363872e3442c89ff23414ac8642f8c" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 693c224aeed4c119906c6dc661d69bbb3d363872e3442c89ff23414ac8642f8c
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.876738 2785 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "693c224aeed4c119906c6dc661d69bbb3d363872e3442c89ff23414ac8642f8c": rpc error: code = Unknown desc = Error: No such container: 693c224aeed4c119906c6dc661d69bbb3d363872e3442c89ff23414ac8642f8c
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.878752 2785 remote_runtime.go:332] ContainerStatus "5ac8136bd05da78a51ce186c7c62a54bf5e211b60f2d0794414281e2ef5b348a" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 5ac8136bd05da78a51ce186c7c62a54bf5e211b60f2d0794414281e2ef5b348a
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.879071 2785 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "5ac8136bd05da78a51ce186c7c62a54bf5e211b60f2d0794414281e2ef5b348a": rpc error: code = Unknown desc = Error: No such container: 5ac8136bd05da78a51ce186c7c62a54bf5e211b60f2d0794414281e2ef5b348a
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.880680 2785 remote_runtime.go:332] ContainerStatus "b9c92a6419d41d29a65505cf1c45211048ae5d5e6bbb83c80e0ceb3053b99fb9" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: b9c92a6419d41d29a65505cf1c45211048ae5d5e6bbb83c80e0ceb3053b99fb9
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.880991 2785 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "b9c92a6419d41d29a65505cf1c45211048ae5d5e6bbb83c80e0ceb3053b99fb9": rpc error: code = Unknown desc = Error: No such container: b9c92a6419d41d29a65505cf1c45211048ae5d5e6bbb83c80e0ceb3053b99fb9
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.883157 2785 remote_runtime.go:332] ContainerStatus "d43483a93c85452bee57ae453c88804a959f86d4dfe4e40bc0433764895527c1" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: d43483a93c85452bee57ae453c88804a959f86d4dfe4e40bc0433764895527c1
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.883534 2785 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "d43483a93c85452bee57ae453c88804a959f86d4dfe4e40bc0433764895527c1": rpc error: code = Unknown desc = Error: No such container: d43483a93c85452bee57ae453c88804a959f86d4dfe4e40bc0433764895527c1
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.887139 2785 remote_runtime.go:332] ContainerStatus "5c343b0b0ff5e032288155aefe41658581b322df3d6bef57f1707d36c1a3695a" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 5c343b0b0ff5e032288155aefe41658581b322df3d6bef57f1707d36c1a3695a
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.887472 2785 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "5c343b0b0ff5e032288155aefe41658581b322df3d6bef57f1707d36c1a3695a": rpc error: code = Unknown desc = Error: No such container: 5c343b0b0ff5e032288155aefe41658581b322df3d6bef57f1707d36c1a3695a
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.889648 2785 remote_runtime.go:332] ContainerStatus "6a1523a477813a2e56e608b9146d2e03c197651e50c54d52614ec10aa9f3a5f3" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 6a1523a477813a2e56e608b9146d2e03c197651e50c54d52614ec10aa9f3a5f3
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.889992 2785 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "6a1523a477813a2e56e608b9146d2e03c197651e50c54d52614ec10aa9f3a5f3": rpc error: code = Unknown desc = Error: No such container: 6a1523a477813a2e56e608b9146d2e03c197651e50c54d52614ec10aa9f3a5f3
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.902497 2785 client.go:86] parsed scheme: "unix"
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.902893 2785 client.go:86] scheme "unix" not registered, fallback to default scheme
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.903098 2785 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.903284 2785 clientconn.go:948] ClientConn switching balancer to "pick_first"
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.931227 2785 kubelet.go:2240] node "kubernetes-master2" not found
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.971615 2785 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.971685 2785 status_manager.go:158] Starting to sync pod status with apiserver
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.971718 2785 kubelet.go:1799] Starting kubelet main sync loop.
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: E0106 16:51:41.971793 2785 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
- Jan 6 16:51:41 kubernetes-master2 kubelet[2785]: I0106 16:51:41.995041 2785 kubelet_node_status.go:71] Attempting to register node kubernetes-master2
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.003568 2785 kubelet_node_status.go:74] Successfully registered node kubernetes-master2
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.045955 2785 cpu_manager.go:193] [cpumanager] starting with none policy
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.046266 2785 cpu_manager.go:194] [cpumanager] reconciling every 10s
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.046466 2785 state_mem.go:36] [cpumanager] initializing new in-memory state store
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.048485 2785 policy_none.go:43] [cpumanager] none policy: Start
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: Created slice libcontainer container kubepods.slice.
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: Created slice libcontainer container kubepods-burstable.slice.
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: E0106 16:51:42.073448 2785 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: Created slice libcontainer container kubepods-besteffort.slice.
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: W0106 16:51:42.096308 2785 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.097211 2785 plugin_manager.go:114] Starting Kubelet Plugin Manager
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: E0106 16:51:42.200992 2785 file.go:108] Unable to process watch event: can't process config file "/etc/kubernetes/manifests/etcd.yaml": /etc/kubernetes/manifests/etcd.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file
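The "couldn't parse as pod(Object 'Kind' is missing in 'null')" error usually means the static pod manifest was empty, or still being written by kubeadm, when the kubelet's file watcher picked it up: empty YAML decodes to null, so there is no Kind to dispatch on. A small sketch of that failure mode, assuming gopkg.in/yaml.v2 and the manifest path from the log; this is illustrative only, not the kubelet's decoder:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v2"
)

func main() {
	// Path taken from the log line above.
	data, err := os.ReadFile("/etc/kubernetes/manifests/etcd.yaml")
	if err != nil {
		fmt.Println("read error:", err)
		return
	}

	var obj map[string]interface{}
	if err := yaml.Unmarshal(data, &obj); err != nil {
		fmt.Println("yaml error:", err)
		return
	}

	// An empty or partially written manifest decodes to nil, so there is no
	// Kind to dispatch on -- the condition behind "Object 'Kind' is missing
	// in 'null'".
	if obj == nil || obj["kind"] == nil {
		fmt.Println("manifest is empty or still being written; no Kind yet")
		return
	}
	fmt.Println("manifest kind:", obj["kind"])
}

Once the file is complete it parses normally, and the etcd static pod shows up in the admission lines that follow (including the ephemeral-storage preemption warning for etcd-kubernetes-master2 below).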
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.273628 2785 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.274098 2785 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.274518 2785 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.276205 2785 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.276623 2785 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.276966 2785 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: W0106 16:51:42.277333 2785 predicate.go:113] Failed to admit pod etcd-kubernetes-master2_kube-system(2850093eb023d5a9a20d573fb918b641) - Unexpected error while attempting to recover from admission failure: preemption: error finding a set of pods to preempt: no set of running pods found to reclaim resources: [(res: ephemeral-storage, q: 104857600), ]
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: Created slice libcontainer container kubepods-burstable-pod7a2e5860052277dd81c04d93d6d3fde5.slice.
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: Created slice libcontainer container kubepods-burstable-podc61f75a63a6b7c302751a6cc76c53045.slice.
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: Created slice libcontainer container kubepods-burstable-pod9be8cb4627e7e5ad4c3f8acabd4b49b3.slice.
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: Created slice libcontainer container kubepods-besteffort-poda20032bf_7e4b_4a76_904f_fee2ccd8130b.slice.
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: Created slice libcontainer container kubepods-burstable-pod43469c7e_85dc_4d10_939e_fe93c0455c42.slice.
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.362049 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/a20032bf-7e4b-4a76-904f-fee2ccd8130b-kube-proxy") pod "kube-proxy-c89x6" (UID: "a20032bf-7e4b-4a76-904f-fee2ccd8130b")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.362893 2785 reconciler.go:157] Reconciler: start to sync state
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463231 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/43469c7e-85dc-4d10-939e-fe93c0455c42-var-lib-calico") pod "calico-node-9xz2c" (UID: "43469c7e-85dc-4d10-939e-fe93c0455c42")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463281 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/43469c7e-85dc-4d10-939e-fe93c0455c42-cni-net-dir") pod "calico-node-9xz2c" (UID: "43469c7e-85dc-4d10-939e-fe93c0455c42")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463305 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/7a2e5860052277dd81c04d93d6d3fde5-k8s-certs") pod "kube-apiserver-kubernetes-master2" (UID: "7a2e5860052277dd81c04d93d6d3fde5")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463347 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/a20032bf-7e4b-4a76-904f-fee2ccd8130b-lib-modules") pod "kube-proxy-c89x6" (UID: "a20032bf-7e4b-4a76-904f-fee2ccd8130b")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463365 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "host-local-net-dir" (UniqueName: "kubernetes.io/host-path/43469c7e-85dc-4d10-939e-fe93c0455c42-host-local-net-dir") pod "calico-node-9xz2c" (UID: "43469c7e-85dc-4d10-939e-fe93c0455c42")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463383 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-ca-certs") pod "kube-controller-manager-kubernetes-master2" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463402 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-etc-ca-certificates") pod "kube-controller-manager-kubernetes-master2" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463430 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-etc-pki") pod "kube-controller-manager-kubernetes-master2" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463448 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-kubeconfig") pod "kube-controller-manager-kubernetes-master2" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463469 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-bszcf" (UniqueName: "kubernetes.io/secret/a20032bf-7e4b-4a76-904f-fee2ccd8130b-kube-proxy-token-bszcf") pod "kube-proxy-c89x6" (UID: "a20032bf-7e4b-4a76-904f-fee2ccd8130b")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463488 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-bsmvn" (UniqueName: "kubernetes.io/secret/43469c7e-85dc-4d10-939e-fe93c0455c42-calico-node-token-bsmvn") pod "calico-node-9xz2c" (UID: "43469c7e-85dc-4d10-939e-fe93c0455c42")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463507 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/7a2e5860052277dd81c04d93d6d3fde5-usr-local-share-ca-certificates") pod "kube-apiserver-kubernetes-master2" (UID: "7a2e5860052277dd81c04d93d6d3fde5")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463524 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/9be8cb4627e7e5ad4c3f8acabd4b49b3-kubeconfig") pod "kube-scheduler-kubernetes-master2" (UID: "9be8cb4627e7e5ad4c3f8acabd4b49b3")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463543 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/43469c7e-85dc-4d10-939e-fe93c0455c42-var-run-calico") pod "calico-node-9xz2c" (UID: "43469c7e-85dc-4d10-939e-fe93c0455c42")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463582 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/7a2e5860052277dd81c04d93d6d3fde5-etc-ca-certificates") pod "kube-apiserver-kubernetes-master2" (UID: "7a2e5860052277dd81c04d93d6d3fde5")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463602 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/a20032bf-7e4b-4a76-904f-fee2ccd8130b-xtables-lock") pod "kube-proxy-c89x6" (UID: "a20032bf-7e4b-4a76-904f-fee2ccd8130b")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463623 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-usr-share-ca-certificates") pod "kube-controller-manager-kubernetes-master2" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463640 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/43469c7e-85dc-4d10-939e-fe93c0455c42-xtables-lock") pod "calico-node-9xz2c" (UID: "43469c7e-85dc-4d10-939e-fe93c0455c42")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463659 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "policysync" (UniqueName: "kubernetes.io/host-path/43469c7e-85dc-4d10-939e-fe93c0455c42-policysync") pod "calico-node-9xz2c" (UID: "43469c7e-85dc-4d10-939e-fe93c0455c42")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463677 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvol-driver-host" (UniqueName: "kubernetes.io/host-path/43469c7e-85dc-4d10-939e-fe93c0455c42-flexvol-driver-host") pod "calico-node-9xz2c" (UID: "43469c7e-85dc-4d10-939e-fe93c0455c42")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463695 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/7a2e5860052277dd81c04d93d6d3fde5-etc-pki") pod "kube-apiserver-kubernetes-master2" (UID: "7a2e5860052277dd81c04d93d6d3fde5")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463760 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/43469c7e-85dc-4d10-939e-fe93c0455c42-lib-modules") pod "calico-node-9xz2c" (UID: "43469c7e-85dc-4d10-939e-fe93c0455c42")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463791 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/43469c7e-85dc-4d10-939e-fe93c0455c42-cni-bin-dir") pod "calico-node-9xz2c" (UID: "43469c7e-85dc-4d10-939e-fe93c0455c42")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463821 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-log-dir" (UniqueName: "kubernetes.io/host-path/43469c7e-85dc-4d10-939e-fe93c0455c42-cni-log-dir") pod "calico-node-9xz2c" (UID: "43469c7e-85dc-4d10-939e-fe93c0455c42")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463844 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/7a2e5860052277dd81c04d93d6d3fde5-ca-certs") pod "kube-apiserver-kubernetes-master2" (UID: "7a2e5860052277dd81c04d93d6d3fde5")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463888 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/7a2e5860052277dd81c04d93d6d3fde5-usr-share-ca-certificates") pod "kube-apiserver-kubernetes-master2" (UID: "7a2e5860052277dd81c04d93d6d3fde5")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463910 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-flexvolume-dir") pod "kube-controller-manager-kubernetes-master2" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463931 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-k8s-certs") pod "kube-controller-manager-kubernetes-master2" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463950 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-usr-local-share-ca-certificates") pod "kube-controller-manager-kubernetes-master2" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:51:42 kubernetes-master2 kubelet[2785]: I0106 16:51:42.463968 2785 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sysfs" (UniqueName: "kubernetes.io/host-path/43469c7e-85dc-4d10-939e-fe93c0455c42-sysfs") pod "calico-node-9xz2c" (UID: "43469c7e-85dc-4d10-939e-fe93c0455c42")
- Jan 6 16:51:42 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-66415c5ad19d3ff0a173f64150c1ff186dc7d17cad1d64c190e7d9818fdcd962\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-66415c5ad19d3ff0a173f64150c1ff186dc7d17cad1d64c190e7d9818fdcd962\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:42 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-66415c5ad19d3ff0a173f64150c1ff186dc7d17cad1d64c190e7d9818fdcd962-merged.mount: Succeeded.
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-66415c5ad19d3ff0a173f64150c1ff186dc7d17cad1d64c190e7d9818fdcd962-merged.mount: Succeeded.
- Jan 6 16:51:42 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:42.780327932+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1ba3b376c91d33610365b0ed5c40dc08a5d31058977c6ac40c648f253db105c4/shim.sock" debug=false pid=3028
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: Started libcontainer container 1ba3b376c91d33610365b0ed5c40dc08a5d31058977c6ac40c648f253db105c4.
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-1ba3b376c91d33610365b0ed5c40dc08a5d31058977c6ac40c648f253db105c4-runc.JfUYoo.mount: Succeeded.
- Jan 6 16:51:42 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-1ba3b376c91d33610365b0ed5c40dc08a5d31058977c6ac40c648f253db105c4-runc.JfUYoo.mount: Succeeded.
- Jan 6 16:51:42 kubernetes-master2 kernel: [ 593.987319] cgroup: cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation
- Jan 6 16:51:42 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-c5a1e96b28b6f9da5a20a550c5c4318635fdd3e814147d60d42a343a3345bf27\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:42 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-c5a1e96b28b6f9da5a20a550c5c4318635fdd3e814147d60d42a343a3345bf27\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-c5a1e96b28b6f9da5a20a550c5c4318635fdd3e814147d60d42a343a3345bf27-merged.mount: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-c5a1e96b28b6f9da5a20a550c5c4318635fdd3e814147d60d42a343a3345bf27-merged.mount: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:43.036845095+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/93d55c166937abd39e8219db7b4b2c902c817c338d334b51c73adac08dfecb66/shim.sock" debug=false pid=3079
- Jan 6 16:51:43 kubernetes-master2 systemd[1]: Started libcontainer container 93d55c166937abd39e8219db7b4b2c902c817c338d334b51c73adac08dfecb66.
- Jan 6 16:51:43 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-e00343a63667f8555a209d0b693fd4ac19be49f6d01b338f324fd2e717e29d58\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-e00343a63667f8555a209d0b693fd4ac19be49f6d01b338f324fd2e717e29d58\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-e00343a63667f8555a209d0b693fd4ac19be49f6d01b338f324fd2e717e29d58-merged.mount: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-e00343a63667f8555a209d0b693fd4ac19be49f6d01b338f324fd2e717e29d58-merged.mount: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:43.316698800+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/de03810dbb732ebb79c26a7e6e82229a245471247dee0470b7c7704fe6380514/shim.sock" debug=false pid=3121
- Jan 6 16:51:43 kubernetes-master2 systemd[1]: Started libcontainer container de03810dbb732ebb79c26a7e6e82229a245471247dee0470b7c7704fe6380514.
- Jan 6 16:51:43 kubernetes-master2 kubelet[2785]: I0106 16:51:43.403132 2785 request.go:655] Throttling request took 1.089953449s, request: POST:https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods
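The "Throttling request took 1.09s" line above reflects client-side rate limiting in the kubelet's API client: when more requests are queued than the allowed QPS/burst, each extra request waits its turn before being sent. A minimal token-bucket sketch of that kind of throttling follows; the QPS and burst values are illustrative assumptions, not values read from this node's configuration.

    # Illustrative token-bucket sketch of client-side API throttling, as hinted at by the
    # "Throttling request took ..." message above. QPS/burst here are example values only.
    import time

    class TokenBucket:
        def __init__(self, qps=5.0, burst=10):
            self.rate, self.capacity = qps, burst
            self.tokens = float(burst)
            self.last = time.monotonic()

        def wait(self):
            """Block until a token is available; return how long this call waited."""
            start = time.monotonic()
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return time.monotonic() - start
                time.sleep((1 - self.tokens) / self.rate)

    if __name__ == "__main__":
        bucket = TokenBucket()
        waits = [bucket.wait() for _ in range(15)]   # burst of 15 sequential calls
        print(f"longest wait: {max(waits):.2f}s")    # calls beyond the burst queue behind the 5/s rate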
- Jan 6 16:51:43 kubernetes-master2 kernel: [ 594.591742] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
- Jan 6 16:51:43 kubernetes-master2 kernel: [ 594.591761] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
- Jan 6 16:51:43 kubernetes-master2 kernel: [ 594.591961] IPVS: ipvs loaded.
- Jan 6 16:51:43 kubernetes-master2 kernel: [ 594.598293] IPVS: [rr] scheduler registered.
- Jan 6 16:51:43 kubernetes-master2 kernel: [ 594.602966] IPVS: [wrr] scheduler registered.
- Jan 6 16:51:43 kubernetes-master2 kernel: [ 594.612923] IPVS: [sh] scheduler registered.
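The kernel messages above show the IPVS modules (ip_vs plus the rr/wrr/sh schedulers) being loaded as kube-proxy comes up. As a quick sanity check on a node like this, one could list the loaded ip_vs modules straight from /proc/modules; a minimal Python sketch, assuming a standard Linux procfs layout:

    # Minimal sketch: list loaded ip_vs-related kernel modules by reading /proc/modules.
    # Assumes a Linux host; the first whitespace-separated field on each line is the module name.
    def loaded_ipvs_modules(path="/proc/modules"):
        mods = []
        with open(path) as f:
            for line in f:
                name = line.split()[0]
                if name.startswith("ip_vs"):
                    mods.append(name)
        return mods

    if __name__ == "__main__":
        print(loaded_ipvs_modules() or "no ip_vs modules loaded")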
- Jan 6 16:51:43 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-364492883f6c0042bbd38ed06237f266baa0a818428de98d2065b2022a11872f\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-364492883f6c0042bbd38ed06237f266baa0a818428de98d2065b2022a11872f\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-364492883f6c0042bbd38ed06237f266baa0a818428de98d2065b2022a11872f-merged.mount: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-364492883f6c0042bbd38ed06237f266baa0a818428de98d2065b2022a11872f-merged.mount: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:43.558562206+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e28f4d8de19b16bd823d17c59bb6d3d8121f5fb4b78319ec77e057eef378fa4d/shim.sock" debug=false pid=3164
- Jan 6 16:51:43 kubernetes-master2 systemd[1]: Started libcontainer container e28f4d8de19b16bd823d17c59bb6d3d8121f5fb4b78319ec77e057eef378fa4d.
- Jan 6 16:51:43 kubernetes-master2 systemd[1]: docker-e28f4d8de19b16bd823d17c59bb6d3d8121f5fb4b78319ec77e057eef378fa4d.scope: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:43.936665719+01:00" level=info msg="shim reaped" id=e28f4d8de19b16bd823d17c59bb6d3d8121f5fb4b78319ec77e057eef378fa4d
- Jan 6 16:51:43 kubernetes-master2 dockerd[827]: time="2021-01-06T16:51:43.947521304+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:51:43 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-364492883f6c0042bbd38ed06237f266baa0a818428de98d2065b2022a11872f-merged.mount: Succeeded.
- Jan 6 16:51:43 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-364492883f6c0042bbd38ed06237f266baa0a818428de98d2065b2022a11872f-merged.mount: Succeeded.
- Jan 6 16:51:44 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-8135f6da2b1ac5cef469618383193f2e1fd7109bc92f357f5fa03c6a8217a375\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:44 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-8135f6da2b1ac5cef469618383193f2e1fd7109bc92f357f5fa03c6a8217a375\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:44 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-8135f6da2b1ac5cef469618383193f2e1fd7109bc92f357f5fa03c6a8217a375-merged.mount: Succeeded.
- Jan 6 16:51:44 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-8135f6da2b1ac5cef469618383193f2e1fd7109bc92f357f5fa03c6a8217a375-merged.mount: Succeeded.
- Jan 6 16:51:44 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:44.088180582+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/aab703a31b92c855618ab76101e5ac4ca51b93c46876a162d75c7927d748199e/shim.sock" debug=false pid=3218
- Jan 6 16:51:44 kubernetes-master2 systemd[1]: Started libcontainer container aab703a31b92c855618ab76101e5ac4ca51b93c46876a162d75c7927d748199e.
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: docker-aab703a31b92c855618ab76101e5ac4ca51b93c46876a162d75c7927d748199e.scope: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:45.368814893+01:00" level=info msg="shim reaped" id=aab703a31b92c855618ab76101e5ac4ca51b93c46876a162d75c7927d748199e
- Jan 6 16:51:45 kubernetes-master2 kubelet[2785]: I0106 16:51:45.378277 2785 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
- Jan 6 16:51:45 kubernetes-master2 kubelet[2785]: E0106 16:51:45.378893 2785 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-kubernetes-master2.1657af7377fed64b", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-kubernetes-master2", UID:"2850093eb023d5a9a20d573fb918b641", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"UnexpectedAdmissionError", Message:"Unexpected error while attempting to recover from admission failure: preemption: error finding a set of pods to preempt: no set of running pods found to reclaim resources: [(res: ephemeral-storage, q: 104857600), ]", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503908aca4b, ext:8162292066, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503908aca4b, ext:8162292066, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events": read tcp 192.168.255.202:59672->192.168.255.200:8443: use of closed network connection'(may retry after sleeping)
- Jan 6 16:51:45 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-8135f6da2b1ac5cef469618383193f2e1fd7109bc92f357f5fa03c6a8217a375-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 dockerd[827]: time="2021-01-06T16:51:45.379074287+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:51:45 kubernetes-master2 kubelet[2785]: E0106 16:51:45.379528 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-scheduler-kubernetes-master2_kube-system(9be8cb4627e7e5ad4c3f8acabd4b49b3)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": read tcp 192.168.255.202:59672->192.168.255.200:8443: use of closed network connection
- Jan 6 16:51:45 kubernetes-master2 kubelet[2785]: E0106 16:51:45.384558 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-controller-manager-kubernetes-master2_kube-system(c61f75a63a6b7c302751a6cc76c53045)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": read tcp 192.168.255.202:59672->192.168.255.200:8443: use of closed network connection
- Jan 6 16:51:45 kubernetes-master2 kubelet[2785]: E0106 16:51:45.384960 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": read tcp 192.168.255.202:59672->192.168.255.200:8443: use of closed network connection
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-8135f6da2b1ac5cef469618383193f2e1fd7109bc92f357f5fa03c6a8217a375-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-f766ee4bf42c0939c99a787afc41bb4064e421fa4014ccfb59706adc8afad6bb\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-f766ee4bf42c0939c99a787afc41bb4064e421fa4014ccfb59706adc8afad6bb\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-8fb2d14f8b1853733b15293f26f2cd449161eda5f883c5b57ed7eda657792dda\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-8fb2d14f8b1853733b15293f26f2cd449161eda5f883c5b57ed7eda657792dda\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-f766ee4bf42c0939c99a787afc41bb4064e421fa4014ccfb59706adc8afad6bb-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-f766ee4bf42c0939c99a787afc41bb4064e421fa4014ccfb59706adc8afad6bb-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-8fb2d14f8b1853733b15293f26f2cd449161eda5f883c5b57ed7eda657792dda-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-8fb2d14f8b1853733b15293f26f2cd449161eda5f883c5b57ed7eda657792dda-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-7c79942aa39d3d704090e72d1cbc9f57e632da4421883a74c07c8dc7cfdb3a3b\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-7c79942aa39d3d704090e72d1cbc9f57e632da4421883a74c07c8dc7cfdb3a3b\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-7c79942aa39d3d704090e72d1cbc9f57e632da4421883a74c07c8dc7cfdb3a3b-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-7c79942aa39d3d704090e72d1cbc9f57e632da4421883a74c07c8dc7cfdb3a3b-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:45.557322795+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8c957e0fb8960f6f23c5faaff4cf193dc0899b13526eac63ef2613318741298d/shim.sock" debug=false pid=3285
- Jan 6 16:51:45 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:45.580630556+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c241ddf880eab463cf035c6a864d3282f960662a0bdeb60eb8dc9cc09b619432/shim.sock" debug=false pid=3297
- Jan 6 16:51:45 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:45.595683217+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d670fa8865a8e85daa56a1f87038089e173535316af72526c43159a43bc5ff59/shim.sock" debug=false pid=3317
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: Started libcontainer container 8c957e0fb8960f6f23c5faaff4cf193dc0899b13526eac63ef2613318741298d.
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: Started libcontainer container d670fa8865a8e85daa56a1f87038089e173535316af72526c43159a43bc5ff59.
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: Started libcontainer container c241ddf880eab463cf035c6a864d3282f960662a0bdeb60eb8dc9cc09b619432.
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-41cdd767eb94f8f11f7b55a2803300648f4c6c36b56027ffc7742f1ac1eb8501\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-41cdd767eb94f8f11f7b55a2803300648f4c6c36b56027ffc7742f1ac1eb8501\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-2b325328b711a9455a06b51cfee3ddb21ae72094d54d8973b15d4ea432ae0c25\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-2b325328b711a9455a06b51cfee3ddb21ae72094d54d8973b15d4ea432ae0c25\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-41cdd767eb94f8f11f7b55a2803300648f4c6c36b56027ffc7742f1ac1eb8501-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-41cdd767eb94f8f11f7b55a2803300648f4c6c36b56027ffc7742f1ac1eb8501-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:45.946546830+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8c48297799b99fd39601f82fc757456e7b139ff6084fd5c5b12c991365a3ea84/shim.sock" debug=false pid=3408
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-01b5ead2a8ace926efc25928389ee7894a6eaf9e42fbfd880a0d4bd8d4f5c661\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-01b5ead2a8ace926efc25928389ee7894a6eaf9e42fbfd880a0d4bd8d4f5c661\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:45 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:45.954589741+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/043a4e01d3f3e50f64c0c40628339715ab7a26ac1ad86ac2bbddb06fd4ba19d4/shim.sock" debug=false pid=3417
- Jan 6 16:51:45 kubernetes-master2 systemd[1]: Started libcontainer container 8c48297799b99fd39601f82fc757456e7b139ff6084fd5c5b12c991365a3ea84.
- Jan 6 16:51:46 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:46.016860643+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2125a32a578eaf108c65af807018460e51a2909c6a131a261d5ef8e374d08f4b/shim.sock" debug=false pid=3437
- Jan 6 16:51:46 kubernetes-master2 systemd[1]: Started libcontainer container 043a4e01d3f3e50f64c0c40628339715ab7a26ac1ad86ac2bbddb06fd4ba19d4.
- Jan 6 16:51:46 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-043a4e01d3f3e50f64c0c40628339715ab7a26ac1ad86ac2bbddb06fd4ba19d4-runc.59PfuP.mount: Succeeded.
- Jan 6 16:51:46 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-043a4e01d3f3e50f64c0c40628339715ab7a26ac1ad86ac2bbddb06fd4ba19d4-runc.59PfuP.mount: Succeeded.
- Jan 6 16:51:46 kubernetes-master2 systemd[1]: Started libcontainer container 2125a32a578eaf108c65af807018460e51a2909c6a131a261d5ef8e374d08f4b.
- Jan 6 16:51:46 kubernetes-master2 kubelet[2785]: W0106 16:51:46.249642 2785 docker_container.go:245] Deleted previously existing symlink file: "/var/log/pods/kube-system_kube-scheduler-kubernetes-master2_9be8cb4627e7e5ad4c3f8acabd4b49b3/kube-scheduler/0.log"
- Jan 6 16:51:46 kubernetes-master2 kubelet[2785]: W0106 16:51:46.275621 2785 docker_container.go:245] Deleted previously existing symlink file: "/var/log/pods/kube-system_kube-controller-manager-kubernetes-master2_c61f75a63a6b7c302751a6cc76c53045/kube-controller-manager/0.log"
- Jan 6 16:51:46 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-ddd8f224ccb012218b056c69bbb20069474e305cadac43c42125a3fbe5ac1d68\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:46 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-ddd8f224ccb012218b056c69bbb20069474e305cadac43c42125a3fbe5ac1d68\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:46 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-ddd8f224ccb012218b056c69bbb20069474e305cadac43c42125a3fbe5ac1d68-merged.mount: Succeeded.
- Jan 6 16:51:46 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-ddd8f224ccb012218b056c69bbb20069474e305cadac43c42125a3fbe5ac1d68-merged.mount: Succeeded.
- Jan 6 16:51:46 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:46.449105885+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e9215efaa76655735f86c405f32adc0b188abf4ee500a9af32b3102550144073/shim.sock" debug=false pid=3521
- Jan 6 16:51:46 kubernetes-master2 systemd[1]: Started libcontainer container e9215efaa76655735f86c405f32adc0b188abf4ee500a9af32b3102550144073.
- Jan 6 16:51:46 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-e9215efaa76655735f86c405f32adc0b188abf4ee500a9af32b3102550144073-runc.mCrlKE.mount: Succeeded.
- Jan 6 16:51:46 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-e9215efaa76655735f86c405f32adc0b188abf4ee500a9af32b3102550144073-runc.mCrlKE.mount: Succeeded.
- Jan 6 16:51:46 kubernetes-master2 systemd[1]: docker-e9215efaa76655735f86c405f32adc0b188abf4ee500a9af32b3102550144073.scope: Succeeded.
- Jan 6 16:51:46 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:46.907876516+01:00" level=info msg="shim reaped" id=e9215efaa76655735f86c405f32adc0b188abf4ee500a9af32b3102550144073
- Jan 6 16:51:46 kubernetes-master2 dockerd[827]: time="2021-01-06T16:51:46.917850939+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:51:46 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-ddd8f224ccb012218b056c69bbb20069474e305cadac43c42125a3fbe5ac1d68-merged.mount: Succeeded.
- Jan 6 16:51:46 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-ddd8f224ccb012218b056c69bbb20069474e305cadac43c42125a3fbe5ac1d68-merged.mount: Succeeded.
- Jan 6 16:51:47 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-5038f08ae13513c5f6b1b97507a26434bfe7f0a90b1d9d5d25329b8021ef0c2b\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:47 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-5038f08ae13513c5f6b1b97507a26434bfe7f0a90b1d9d5d25329b8021ef0c2b\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:47 kubernetes-master2 containerd[826]: time="2021-01-06T16:51:47.483983564+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde/shim.sock" debug=false pid=3598
- Jan 6 16:51:47 kubernetes-master2 systemd[1]: Started libcontainer container ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde.
- Jan 6 16:51:47 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.VmDtvt.mount: Succeeded.
- Jan 6 16:51:47 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.VmDtvt.mount: Succeeded.
- Jan 6 16:51:50 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.lxOPD2.mount: Succeeded.
- Jan 6 16:51:50 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.lxOPD2.mount: Succeeded.
- Jan 6 16:51:53 kubernetes-master2 kubelet[2785]: E0106 16:51:53.304772 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-scheduler-kubernetes-master2_kube-system(9be8cb4627e7e5ad4c3f8acabd4b49b3)": rpc error: code = Unknown desc = context deadline exceeded
- Jan 6 16:51:54 kubernetes-master2 kubelet[2785]: E0106 16:51:54.390619 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": rpc error: code = Unknown desc = context deadline exceeded
- Jan 6 16:51:54 kubernetes-master2 kubelet[2785]: E0106 16:51:54.405092 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-controller-manager-kubernetes-master2_kube-system(c61f75a63a6b7c302751a6cc76c53045)": rpc error: code = Unknown desc = context deadline exceeded
- Jan 6 16:51:56 kubernetes-master2 kubelet[2785]: W0106 16:51:56.248604 2785 status_manager.go:550] Failed to get status for pod "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": etcdserver: request timed out
- Jan 6 16:51:56 kubernetes-master2 kubelet[2785]: E0106 16:51:56.248834 2785 csi_plugin.go:293] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: etcdserver: request timed out
- Jan 6 16:51:56 kubernetes-master2 kubelet[2785]: E0106 16:51:56.646620 2785 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-kubernetes-master2.1657af7377fed64b", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-kubernetes-master2", UID:"2850093eb023d5a9a20d573fb918b641", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"UnexpectedAdmissionError", Message:"Unexpected error while attempting to recover from admission failure: preemption: error finding a set of pods to preempt: no set of running pods found to reclaim resources: [(res: ephemeral-storage, q: 104857600), ]", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503908aca4b, ext:8162292066, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503908aca4b, ext:8162292066, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = context deadline exceeded' (will not retry!)
- Jan 6 16:51:58 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.eJvpyk.mount: Succeeded.
- Jan 6 16:51:58 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.eJvpyk.mount: Succeeded.
- Jan 6 16:52:00 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.xNt0Vo.mount: Succeeded.
- Jan 6 16:52:00 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.xNt0Vo.mount: Succeeded.
- Jan 6 16:52:00 kubernetes-master2 kubelet[2785]: E0106 16:52:00.460136 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-scheduler-kubernetes-master2_kube-system(9be8cb4627e7e5ad4c3f8acabd4b49b3)": rpc error: code = Unknown desc = context deadline exceeded
- Jan 6 16:52:01 kubernetes-master2 kubelet[2785]: E0106 16:52:01.471918 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-controller-manager-kubernetes-master2_kube-system(c61f75a63a6b7c302751a6cc76c53045)": rpc error: code = Unknown desc = context deadline exceeded
- Jan 6 16:52:01 kubernetes-master2 kubelet[2785]: E0106 16:52:01.472503 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": rpc error: code = Unknown desc = context deadline exceeded
- Jan 6 16:52:01 kubernetes-master2 kubelet[2785]: E0106 16:52:01.846907 2785 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master2?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
- Jan 6 16:52:02 kubernetes-master2 kubelet[2785]: E0106 16:52:02.067036 2785 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master2": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master2?resourceVersion=0&timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
- Jan 6 16:52:03 kubernetes-master2 kubelet[2785]: E0106 16:52:03.658288 2785 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-c89x6.1657af73a0d48d2a", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-c89x6", UID:"a20032bf-7e4b-4a76-904f-fee2ccd8130b", APIVersion:"v1", ResourceVersion:"921", FieldPath:"spec.containers{kube-proxy}"}, Reason:"Pulled", Message:"Container image \"k8s.gcr.io/kube-proxy:v1.20.1\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503b960812a, ext:8847386676, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503b960812a, ext:8847386676, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = context deadline exceeded' (will not retry!)
- Jan 6 16:52:07 kubernetes-master2 systemd[1]: docker-8c48297799b99fd39601f82fc757456e7b139ff6084fd5c5b12c991365a3ea84.scope: Succeeded.
- Jan 6 16:52:07 kubernetes-master2 containerd[826]: time="2021-01-06T16:52:07.759950066+01:00" level=info msg="shim reaped" id=8c48297799b99fd39601f82fc757456e7b139ff6084fd5c5b12c991365a3ea84
- Jan 6 16:52:07 kubernetes-master2 dockerd[827]: time="2021-01-06T16:52:07.770201175+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:52:07 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-2b325328b711a9455a06b51cfee3ddb21ae72094d54d8973b15d4ea432ae0c25-merged.mount: Succeeded.
- Jan 6 16:52:07 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-2b325328b711a9455a06b51cfee3ddb21ae72094d54d8973b15d4ea432ae0c25-merged.mount: Succeeded.
- Jan 6 16:52:08 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.DjSlW3.mount: Succeeded.
- Jan 6 16:52:08 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.DjSlW3.mount: Succeeded.
- Jan 6 16:52:10 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.db87qC.mount: Succeeded.
- Jan 6 16:52:10 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.db87qC.mount: Succeeded.
- Jan 6 16:52:10 kubernetes-master2 kubelet[2785]: E0106 16:52:10.245645 2785 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: etcdserver: request timed out
- Jan 6 16:52:10 kubernetes-master2 kubelet[2785]: E0106 16:52:10.246023 2785 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master2": etcdserver: request timed out
- Jan 6 16:52:10 kubernetes-master2 kubelet[2785]: E0106 16:52:10.246556 2785 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-c89x6.1657af73a40b75de", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-c89x6", UID:"a20032bf-7e4b-4a76-904f-fee2ccd8130b", APIVersion:"v1", ResourceVersion:"921", FieldPath:"spec.containers{kube-proxy}"}, Reason:"Created", Message:"Created container kube-proxy", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503c0fc9fde, ext:8901316842, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503c0fc9fde, ext:8901316842, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events": read tcp 192.168.255.202:59766->192.168.255.200:8443: use of closed network connection'(may retry after sleeping)
- Jan 6 16:52:10 kubernetes-master2 kubelet[2785]: E0106 16:52:10.247317 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": read tcp 192.168.255.202:59766->192.168.255.200:8443: use of closed network connection
- Jan 6 16:52:10 kubernetes-master2 kubelet[2785]: I0106 16:52:10.247453 2785 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8c48297799b99fd39601f82fc757456e7b139ff6084fd5c5b12c991365a3ea84
- Jan 6 16:52:10 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-033d8fdf61ee82fc030e67b0ca4ac65090483c70b2d6a45d6d49a0a68b1ab960\x2dinit-merged.mount: Succeeded.
- Jan 6 16:52:10 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-033d8fdf61ee82fc030e67b0ca4ac65090483c70b2d6a45d6d49a0a68b1ab960\x2dinit-merged.mount: Succeeded.
- Jan 6 16:52:10 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-033d8fdf61ee82fc030e67b0ca4ac65090483c70b2d6a45d6d49a0a68b1ab960-merged.mount: Succeeded.
- Jan 6 16:52:10 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-033d8fdf61ee82fc030e67b0ca4ac65090483c70b2d6a45d6d49a0a68b1ab960-merged.mount: Succeeded.
- Jan 6 16:52:10 kubernetes-master2 containerd[826]: time="2021-01-06T16:52:10.355012403+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/abed8faa123e031d19c85424e8f656edea88145fb6dfc109422a2c899a2368cb/shim.sock" debug=false pid=3928
- Jan 6 16:52:10 kubernetes-master2 systemd[1]: Started libcontainer container abed8faa123e031d19c85424e8f656edea88145fb6dfc109422a2c899a2368cb.
- Jan 6 16:52:17 kubernetes-master2 kubelet[2785]: E0106 16:52:17.591678 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": rpc error: code = Unknown desc = context deadline exceeded
- Jan 6 16:52:18 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.pkEFLQ.mount: Succeeded.
- Jan 6 16:52:18 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.pkEFLQ.mount: Succeeded.
- Jan 6 16:52:20 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.k9rKAe.mount: Succeeded.
- Jan 6 16:52:20 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.k9rKAe.mount: Succeeded.
- Jan 6 16:52:20 kubernetes-master2 kubelet[2785]: E0106 16:52:20.247137 2785 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master2": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master2?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
- Jan 6 16:52:20 kubernetes-master2 kubelet[2785]: E0106 16:52:20.247855 2785 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-c89x6.1657af73a40b75de", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-c89x6", UID:"a20032bf-7e4b-4a76-904f-fee2ccd8130b", APIVersion:"v1", ResourceVersion:"921", FieldPath:"spec.containers{kube-proxy}"}, Reason:"Created", Message:"Created container kube-proxy", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503c0fc9fde, ext:8901316842, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503c0fc9fde, ext:8901316842, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events": read tcp 192.168.255.202:60054->192.168.255.200:8443: use of closed network connection'(may retry after sleeping)
- Jan 6 16:52:20 kubernetes-master2 kubelet[2785]: E0106 16:52:20.248442 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": read tcp 192.168.255.202:60054->192.168.255.200:8443: use of closed network connection
- Jan 6 16:52:21 kubernetes-master2 kubelet[2785]: E0106 16:52:21.248200 2785 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master2?timeout=10s": context deadline exceeded
- Jan 6 16:52:28 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.VdehBc.mount: Succeeded.
- Jan 6 16:52:28 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.VdehBc.mount: Succeeded.
- Jan 6 16:52:30 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.praxLg.mount: Succeeded.
- Jan 6 16:52:30 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.praxLg.mount: Succeeded.
- Jan 6 16:52:30 kubernetes-master2 kubelet[2785]: E0106 16:52:30.247566 2785 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master2": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master2?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
- Jan 6 16:52:31 kubernetes-master2 systemd[1]: docker-abed8faa123e031d19c85424e8f656edea88145fb6dfc109422a2c899a2368cb.scope: Succeeded.
- Jan 6 16:52:31 kubernetes-master2 systemd[1]: docker-abed8faa123e031d19c85424e8f656edea88145fb6dfc109422a2c899a2368cb.scope: Consumed 1.091s CPU time.
- Jan 6 16:52:31 kubernetes-master2 containerd[826]: time="2021-01-06T16:52:31.520050204+01:00" level=info msg="shim reaped" id=abed8faa123e031d19c85424e8f656edea88145fb6dfc109422a2c899a2368cb
- Jan 6 16:52:31 kubernetes-master2 dockerd[827]: time="2021-01-06T16:52:31.530651327+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:52:31 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-033d8fdf61ee82fc030e67b0ca4ac65090483c70b2d6a45d6d49a0a68b1ab960-merged.mount: Succeeded.
- Jan 6 16:52:31 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-033d8fdf61ee82fc030e67b0ca4ac65090483c70b2d6a45d6d49a0a68b1ab960-merged.mount: Succeeded.
- Jan 6 16:52:31 kubernetes-master2 kubelet[2785]: I0106 16:52:31.727079 2785 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8c48297799b99fd39601f82fc757456e7b139ff6084fd5c5b12c991365a3ea84
- Jan 6 16:52:32 kubernetes-master2 kubelet[2785]: E0106 16:52:32.048776 2785 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master2?timeout=10s": context deadline exceeded
- Jan 6 16:52:37 kubernetes-master2 kubelet[2785]: E0106 16:52:37.316769 2785 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-c89x6.1657af73a40b75de", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-c89x6", UID:"a20032bf-7e4b-4a76-904f-fee2ccd8130b", APIVersion:"v1", ResourceVersion:"921", FieldPath:"spec.containers{kube-proxy}"}, Reason:"Created", Message:"Created container kube-proxy", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503c0fc9fde, ext:8901316842, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503c0fc9fde, ext:8901316842, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = context deadline exceeded' (will not retry!)
- Jan 6 16:52:38 kubernetes-master2 kubelet[2785]: E0106 16:52:38.234055 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": read tcp 192.168.255.202:60268->192.168.255.200:8443: use of closed network connection
- Jan 6 16:52:38 kubernetes-master2 kubelet[2785]: I0106 16:52:38.234655 2785 scope.go:95] [topologymanager] RemoveContainer - Container ID: abed8faa123e031d19c85424e8f656edea88145fb6dfc109422a2c899a2368cb
- Jan 6 16:52:38 kubernetes-master2 kubelet[2785]: E0106 16:52:38.235067 2785 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master2": etcdserver: request timed out
- Jan 6 16:52:38 kubernetes-master2 kubelet[2785]: E0106 16:52:38.239291 2785 kubelet_node_status.go:434] Unable to update node status: update node status exceeds retry count
- Jan 6 16:52:38 kubernetes-master2 kubelet[2785]: E0106 16:52:38.236958 2785 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-c89x6.1657af73ac7ff243", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-c89x6", UID:"a20032bf-7e4b-4a76-904f-fee2ccd8130b", APIVersion:"v1", ResourceVersion:"921", FieldPath:"spec.containers{kube-proxy}"}, Reason:"Started", Message:"Started container kube-proxy", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503c9711c43, ext:9043168590, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503c9711c43, ext:9043168590, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events": read tcp 192.168.255.202:60268->192.168.255.200:8443: use of closed network connection'(may retry after sleeping)
- Jan 6 16:52:38 kubernetes-master2 kubelet[2785]: E0106 16:52:38.237474 2785 pod_workers.go:191] Error syncing pod 7a2e5860052277dd81c04d93d6d3fde5 ("kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)"
- Jan 6 16:52:38 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.bDRrgq.mount: Succeeded.
- Jan 6 16:52:38 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.bDRrgq.mount: Succeeded.
- Jan 6 16:52:40 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.A0Y21U.mount: Succeeded.
- Jan 6 16:52:40 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.A0Y21U.mount: Succeeded.
- Jan 6 16:52:43 kubernetes-master2 kubelet[2785]: E0106 16:52:43.649397 2785 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master2?timeout=10s": context deadline exceeded
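The repeated "failed to ensure lease exists" messages show the kubelet retrying its node-lease update with an exponentially growing delay while the API endpoint stays unreachable: 200ms, 400ms, 800ms, 1.6s, 3.2s in the lines above. A minimal sketch of that doubling schedule; the cap is an assumed value chosen for illustration, and any jitter used by the real client is not modelled:

    # Illustrative sketch of the exponential backoff visible in the lease-renewal retries above
    # (200ms, 400ms, 800ms, 1.6s, 3.2s, ...). The cap is an assumption, not taken from the log.
    def backoff_schedule(initial=0.2, factor=2.0, cap=7.0):
        delay = initial
        while delay <= cap:
            yield delay
            delay *= factor

    if __name__ == "__main__":
        print([f"{d:g}s" for d in backoff_schedule()])  # ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']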
- Jan 6 16:52:48 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.Yc5457.mount: Succeeded.
- Jan 6 16:52:48 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.Yc5457.mount: Succeeded.
- Jan 6 16:52:50 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.lVu2S6.mount: Succeeded.
- Jan 6 16:52:50 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde-runc.lVu2S6.mount: Succeeded.
- Jan 6 16:52:50 kubernetes-master2 kubelet[2785]: E0106 16:52:50.401791 2785 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy-c89x6.1657af73ac7ff243", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-c89x6", UID:"a20032bf-7e4b-4a76-904f-fee2ccd8130b", APIVersion:"v1", ResourceVersion:"921", FieldPath:"spec.containers{kube-proxy}"}, Reason:"Started", Message:"Started container kube-proxy", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503c9711c43, ext:9043168590, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503c9711c43, ext:9043168590, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = context deadline exceeded' (will not retry!)
- Jan 6 16:52:50 kubernetes-master2 dockerd[827]: time="2021-01-06T16:52:50.613925729+01:00" level=info msg="Container ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde failed to exit within 2 seconds of signal 15 - using the force"
- Jan 6 16:52:50 kubernetes-master2 systemd[1]: docker-ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde.scope: Succeeded.
- Jan 6 16:52:50 kubernetes-master2 containerd[826]: time="2021-01-06T16:52:50.713312861+01:00" level=info msg="shim reaped" id=ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde
- Jan 6 16:52:50 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-5038f08ae13513c5f6b1b97507a26434bfe7f0a90b1d9d5d25329b8021ef0c2b-merged.mount: Succeeded.
- Jan 6 16:52:50 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-5038f08ae13513c5f6b1b97507a26434bfe7f0a90b1d9d5d25329b8021ef0c2b-merged.mount: Succeeded.
- Jan 6 16:52:50 kubernetes-master2 dockerd[827]: time="2021-01-06T16:52:50.727016577+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:52:50 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-e8d352f63c9020d3b5d1c5d606c0b0e96aff63197ae3b37b94fbc7aedfb6bb04\x2dinit-merged.mount: Succeeded.
- Jan 6 16:52:50 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-e8d352f63c9020d3b5d1c5d606c0b0e96aff63197ae3b37b94fbc7aedfb6bb04\x2dinit-merged.mount: Succeeded.
- Jan 6 16:52:50 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-e8d352f63c9020d3b5d1c5d606c0b0e96aff63197ae3b37b94fbc7aedfb6bb04-merged.mount: Succeeded.
- Jan 6 16:52:50 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-e8d352f63c9020d3b5d1c5d606c0b0e96aff63197ae3b37b94fbc7aedfb6bb04-merged.mount: Succeeded.
- Jan 6 16:52:50 kubernetes-master2 containerd[826]: time="2021-01-06T16:52:50.818434937+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3/shim.sock" debug=false pid=4491
- Jan 6 16:52:50 kubernetes-master2 systemd[1]: Started libcontainer container a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3.
- Jan 6 16:52:52 kubernetes-master2 kubelet[2785]: W0106 16:52:52.240259 2785 status_manager.go:550] Failed to get status for pod "kube-proxy-c89x6_kube-system(a20032bf-7e4b-4a76-904f-fee2ccd8130b)": etcdserver: request timed out
- Jan 6 16:52:52 kubernetes-master2 kubelet[2785]: E0106 16:52:52.243505 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": pods "kube-apiserver-kubernetes-master2" is forbidden: etcdserver: request timed out
- Jan 6 16:52:52 kubernetes-master2 kubelet[2785]: I0106 16:52:52.243804 2785 scope.go:95] [topologymanager] RemoveContainer - Container ID: abed8faa123e031d19c85424e8f656edea88145fb6dfc109422a2c899a2368cb
- Jan 6 16:52:52 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-f8c3bd64c550dd6c9ebf61fe27c7f06c727dd2280764e976ed1f8f43a1f3e162\x2dinit-merged.mount: Succeeded.
- Jan 6 16:52:52 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-f8c3bd64c550dd6c9ebf61fe27c7f06c727dd2280764e976ed1f8f43a1f3e162\x2dinit-merged.mount: Succeeded.
- Jan 6 16:52:52 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-f8c3bd64c550dd6c9ebf61fe27c7f06c727dd2280764e976ed1f8f43a1f3e162-merged.mount: Succeeded.
- Jan 6 16:52:52 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-f8c3bd64c550dd6c9ebf61fe27c7f06c727dd2280764e976ed1f8f43a1f3e162-merged.mount: Succeeded.
- Jan 6 16:52:52 kubernetes-master2 containerd[826]: time="2021-01-06T16:52:52.368276147+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6fa73a1f06d65e7163b9d94ceee56684106ac4fe0381265698e46714ffb2785d/shim.sock" debug=false pid=4559
- Jan 6 16:52:52 kubernetes-master2 systemd[1]: Started libcontainer container 6fa73a1f06d65e7163b9d94ceee56684106ac4fe0381265698e46714ffb2785d.
- Jan 6 16:52:52 kubernetes-master2 kubelet[2785]: E0106 16:52:52.888829 2785 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
- Jan 6 16:52:52 kubernetes-master2 kubelet[2785]: E0106 16:52:52.894270 2785 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node-9xz2c.1657af73c07011af", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"calico-node-9xz2c", UID:"43469c7e-85dc-4d10-939e-fe93c0455c42", APIVersion:"v1", ResourceVersion:"923", FieldPath:"spec.initContainers{upgrade-ipam}"}, Reason:"Pulled", Message:"Container image \"docker.io/calico/cni:v3.17.1\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503dd613baf, ext:9377672377, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503dd613baf, ext:9377672377, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field' (will not retry!)
- Jan 6 16:52:52 kubernetes-master2 kubelet[2785]: E0106 16:52:52.909098 2785 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master2": rpc error: code = Unavailable desc = transport is closing
- Jan 6 16:52:52 kubernetes-master2 kubelet[2785]: W0106 16:52:52.909456 2785 status_manager.go:550] Failed to get status for pod "calico-node-9xz2c_kube-system(43469c7e-85dc-4d10-939e-fe93c0455c42)": rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
- Jan 6 16:52:59 kubernetes-master2 kubelet[2785]: E0106 16:52:59.803295 2785 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node-9xz2c.1657af73c342fd06", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"calico-node-9xz2c", UID:"43469c7e-85dc-4d10-939e-fe93c0455c42", APIVersion:"v1", ResourceVersion:"923", FieldPath:"spec.initContainers{upgrade-ipam}"}, Reason:"Created", Message:"Created container upgrade-ipam", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events": http2: server sent GOAWAY and closed the connection; LastStreamID=51, ErrCode=NO_ERROR, debug=""'(may retry after sleeping)
- Jan 6 16:52:59 kubernetes-master2 kubelet[2785]: E0106 16:52:59.805199 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": http2: server sent GOAWAY and closed the connection; LastStreamID=51, ErrCode=NO_ERROR, debug=""
- Jan 6 16:53:00 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.VYnDMo.mount: Succeeded.
- Jan 6 16:53:00 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.VYnDMo.mount: Succeeded.
- Jan 6 16:53:01 kubernetes-master2 kubelet[2785]: E0106 16:53:01.074528 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": EOF
- Jan 6 16:53:02 kubernetes-master2 kubelet[2785]: W0106 16:53:02.909645 2785 reflector.go:436] object-"kube-system"/"kube-proxy-token-bszcf": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"kube-proxy-token-bszcf": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master2 kubelet[2785]: W0106 16:53:02.909879 2785 reflector.go:436] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"kube-proxy": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master2 kubelet[2785]: W0106 16:53:02.910160 2785 reflector.go:436] object-"kube-system"/"kubernetes-services-endpoint": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"kubernetes-services-endpoint": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master2 kubelet[2785]: W0106 16:53:02.910259 2785 reflector.go:436] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: watch of *v1.Node ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master2 kubelet[2785]: W0106 16:53:02.910360 2785 reflector.go:436] object-"kube-system"/"calico-config": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"calico-config": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master2 kubelet[2785]: W0106 16:53:02.910450 2785 reflector.go:436] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: watch of *v1.Pod ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master2 kubelet[2785]: W0106 16:53:02.910536 2785 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master2 kubelet[2785]: W0106 16:53:02.910617 2785 reflector.go:436] object-"kube-system"/"calico-node-token-bsmvn": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"calico-node-token-bsmvn": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master2 kubelet[2785]: W0106 16:53:02.910754 2785 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master2 kubelet[2785]: W0106 16:53:02.910848 2785 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master2 kubelet[2785]: E0106 16:53:02.910867 2785 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master2": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master2?timeout=10s": context deadline exceeded
- Jan 6 16:53:07 kubernetes-master2 kubelet[2785]: E0106 16:53:07.855854 2785 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node-9xz2c.1657af73c342fd06", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"calico-node-9xz2c", UID:"43469c7e-85dc-4d10-939e-fe93c0455c42", APIVersion:"v1", ResourceVersion:"923", FieldPath:"spec.initContainers{upgrade-ipam}"}, Reason:"Created", Message:"Created container upgrade-ipam", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events": EOF'(may retry after sleeping)
- Jan 6 16:53:08 kubernetes-master2 kubelet[2785]: I0106 16:53:08.146862 2785 request.go:655] Throttling request took 1.174541323s, request: POST:https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods
- Jan 6 16:53:08 kubernetes-master2 kubelet[2785]: E0106 16:53:08.148211 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-scheduler-kubernetes-master2_kube-system(9be8cb4627e7e5ad4c3f8acabd4b49b3)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": EOF
- Jan 6 16:53:08 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.Yrk1Bf.mount: Succeeded.
- Jan 6 16:53:08 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.Yrk1Bf.mount: Succeeded.
- Jan 6 16:53:09 kubernetes-master2 kubelet[2785]: E0106 16:53:09.919682 2785 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master2?timeout=10s": context deadline exceeded
- Jan 6 16:53:10 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.OPgoHG.mount: Succeeded.
- Jan 6 16:53:10 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.OPgoHG.mount: Succeeded.
- Jan 6 16:53:12 kubernetes-master2 kubelet[2785]: E0106 16:53:12.923299 2785 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master2": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master2?timeout=10s": context deadline exceeded
- Jan 6 16:53:13 kubernetes-master2 systemd[1]: docker-6fa73a1f06d65e7163b9d94ceee56684106ac4fe0381265698e46714ffb2785d.scope: Succeeded.
- Jan 6 16:53:13 kubernetes-master2 systemd[1]: docker-6fa73a1f06d65e7163b9d94ceee56684106ac4fe0381265698e46714ffb2785d.scope: Consumed 1.033s CPU time.
- Jan 6 16:53:13 kubernetes-master2 containerd[826]: time="2021-01-06T16:53:13.470988707+01:00" level=info msg="shim reaped" id=6fa73a1f06d65e7163b9d94ceee56684106ac4fe0381265698e46714ffb2785d
- Jan 6 16:53:13 kubernetes-master2 dockerd[827]: time="2021-01-06T16:53:13.481152486+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:53:13 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-f8c3bd64c550dd6c9ebf61fe27c7f06c727dd2280764e976ed1f8f43a1f3e162-merged.mount: Succeeded.
- Jan 6 16:53:13 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-f8c3bd64c550dd6c9ebf61fe27c7f06c727dd2280764e976ed1f8f43a1f3e162-merged.mount: Succeeded.
- Jan 6 16:53:14 kubernetes-master2 kubelet[2785]: I0106 16:53:14.177128 2785 scope.go:95] [topologymanager] RemoveContainer - Container ID: abed8faa123e031d19c85424e8f656edea88145fb6dfc109422a2c899a2368cb
- Jan 6 16:53:15 kubernetes-master2 kubelet[2785]: E0106 16:53:15.747996 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": EOF
- Jan 6 16:53:15 kubernetes-master2 kubelet[2785]: I0106 16:53:15.748181 2785 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6fa73a1f06d65e7163b9d94ceee56684106ac4fe0381265698e46714ffb2785d
- Jan 6 16:53:15 kubernetes-master2 kubelet[2785]: E0106 16:53:15.749249 2785 pod_workers.go:191] Error syncing pod 7a2e5860052277dd81c04d93d6d3fde5 ("kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)"
- Jan 6 16:53:16 kubernetes-master2 kubelet[2785]: E0106 16:53:16.748326 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-controller-manager-kubernetes-master2_kube-system(c61f75a63a6b7c302751a6cc76c53045)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": EOF
- Jan 6 16:53:17 kubernetes-master2 kubelet[2785]: E0106 16:53:17.857793 2785 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node-9xz2c.1657af73c342fd06", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"calico-node-9xz2c", UID:"43469c7e-85dc-4d10-939e-fe93c0455c42", APIVersion:"v1", ResourceVersion:"923", FieldPath:"spec.initContainers{upgrade-ipam}"}, Reason:"Created", Message:"Created container upgrade-ipam", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events": EOF'(may retry after sleeping)
- Jan 6 16:53:18 kubernetes-master2 kubelet[2785]: I0106 16:53:18.346887 2785 request.go:655] Throttling request took 1.798535871s, request: GET:https://kubernetes-cluster.homelab01.local:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1
- Jan 6 16:53:18 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.WhtcQC.mount: Succeeded.
- Jan 6 16:53:18 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.WhtcQC.mount: Succeeded.
- Jan 6 16:53:20 kubernetes-master2 kubelet[2785]: W0106 16:53:20.147869 2785 status_manager.go:550] Failed to get status for pod "kube-proxy-c89x6_kube-system(a20032bf-7e4b-4a76-904f-fee2ccd8130b)": an error on the server ("") has prevented the request from succeeding (get pods kube-proxy-c89x6)
- Jan 6 16:53:20 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.xnzYE3.mount: Succeeded.
- Jan 6 16:53:20 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.xnzYE3.mount: Succeeded.
- Jan 6 16:53:22 kubernetes-master2 kubelet[2785]: E0106 16:53:22.148167 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": EOF
- Jan 6 16:53:22 kubernetes-master2 kubelet[2785]: I0106 16:53:22.148300 2785 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6fa73a1f06d65e7163b9d94ceee56684106ac4fe0381265698e46714ffb2785d
- Jan 6 16:53:22 kubernetes-master2 kubelet[2785]: E0106 16:53:22.149298 2785 pod_workers.go:191] Error syncing pod 7a2e5860052277dd81c04d93d6d3fde5 ("kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)"
- Jan 6 16:53:22 kubernetes-master2 kubelet[2785]: E0106 16:53:22.935924 2785 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master2": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master2?timeout=10s": context deadline exceeded
- Jan 6 16:53:26 kubernetes-master2 kubelet[2785]: E0106 16:53:26.934243 2785 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master2?timeout=10s": context deadline exceeded
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: I0106 16:53:27.148676 2785 trace.go:205] Trace[901208852]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/kubelet.go:438 (06-Jan-2021 16:53:03.746) (total time: 23402ms):
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: Trace[901208852]: [23.402200891s] [23.402200891s] END
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: E0106 16:53:27.148720 2785 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: an error on the server ("") has prevented the request from succeeding (get nodes)
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: I0106 16:53:27.348165 2785 trace.go:205] Trace[1351369692]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (06-Jan-2021 16:53:03.821) (total time: 23526ms):
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: Trace[1351369692]: [23.526315419s] [23.526315419s] END
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: E0106 16:53:27.348205 2785 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: an error on the server ("") has prevented the request from succeeding (get runtimeclasses.node.k8s.io)
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: I0106 16:53:27.547952 2785 trace.go:205] Trace[1888077152]: "Reflector ListAndWatch" name:object-"kube-system"/"kube-proxy" (06-Jan-2021 16:53:03.830) (total time: 23717ms):
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: Trace[1888077152]: [23.717183208s] [23.717183208s] END
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: E0106 16:53:27.547996 2785 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: an error on the server ("") has prevented the request from succeeding (get configmaps)
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: I0106 16:53:27.747836 2785 trace.go:205] Trace[1499881102]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (06-Jan-2021 16:53:03.946) (total time: 23800ms):
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: Trace[1499881102]: [23.800857196s] [23.800857196s] END
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: E0106 16:53:27.747908 2785 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: E0106 16:53:27.860179 2785 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node-9xz2c.1657af73c342fd06", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"calico-node-9xz2c", UID:"43469c7e-85dc-4d10-939e-fe93c0455c42", APIVersion:"v1", ResourceVersion:"923", FieldPath:"spec.initContainers{upgrade-ipam}"}, Reason:"Created", Message:"Created container upgrade-ipam", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events": EOF'(may retry after sleeping)
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: I0106 16:53:27.947895 2785 trace.go:205] Trace[366739544]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46 (06-Jan-2021 16:53:03.952) (total time: 23995ms):
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: Trace[366739544]: [23.995244157s] [23.995244157s] END
- Jan 6 16:53:27 kubernetes-master2 kubelet[2785]: E0106 16:53:27.948557 2785 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: an error on the server ("") has prevented the request from succeeding (get pods)
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: I0106 16:53:28.147761 2785 trace.go:205] Trace[1807054112]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (06-Jan-2021 16:53:04.004) (total time: 24143ms):
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: Trace[1807054112]: [24.143052364s] [24.143052364s] END
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: E0106 16:53:28.147788 2785 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: an error on the server ("") has prevented the request from succeeding (get csidrivers.storage.k8s.io)
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: I0106 16:53:28.347028 2785 request.go:655] Throttling request took 1.399117043s, request: GET:https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=603
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: I0106 16:53:28.349870 2785 trace.go:205] Trace[730619198]: "Reflector ListAndWatch" name:object-"kube-system"/"kubernetes-services-endpoint" (06-Jan-2021 16:53:04.275) (total time: 24074ms):
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: Trace[730619198]: [24.074175846s] [24.074175846s] END
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: E0106 16:53:28.350598 2785 reflector.go:138] object-"kube-system"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: an error on the server ("") has prevented the request from succeeding (get configmaps)
- Jan 6 16:53:28 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.vpGmrX.mount: Succeeded.
- Jan 6 16:53:28 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.vpGmrX.mount: Succeeded.
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: I0106 16:53:28.547595 2785 trace.go:205] Trace[1541939284]: "Reflector ListAndWatch" name:object-"kube-system"/"calico-config" (06-Jan-2021 16:53:04.275) (total time: 24271ms):
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: Trace[1541939284]: [24.271763208s] [24.271763208s] END
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: E0106 16:53:28.547633 2785 reflector.go:138] object-"kube-system"/"calico-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: an error on the server ("") has prevented the request from succeeding (get configmaps)
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: I0106 16:53:28.748800 2785 trace.go:205] Trace[1397939885]: "Reflector ListAndWatch" name:object-"kube-system"/"kube-proxy-token-bszcf" (06-Jan-2021 16:53:04.275) (total time: 24473ms):
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: Trace[1397939885]: [24.473295698s] [24.473295698s] END
- Jan 6 16:53:28 kubernetes-master2 kubelet[2785]: E0106 16:53:28.749252 2785 reflector.go:138] object-"kube-system"/"kube-proxy-token-bszcf": Failed to watch *v1.Secret: failed to list *v1.Secret: an error on the server ("") has prevented the request from succeeding (get secrets)
- Jan 6 16:53:29 kubernetes-master2 kubelet[2785]: I0106 16:53:29.147785 2785 trace.go:205] Trace[1239880497]: "Reflector ListAndWatch" name:object-"kube-system"/"calico-node-token-bsmvn" (06-Jan-2021 16:53:04.441) (total time: 24706ms):
- Jan 6 16:53:29 kubernetes-master2 kubelet[2785]: Trace[1239880497]: [24.706478309s] [24.706478309s] END
- Jan 6 16:53:29 kubernetes-master2 kubelet[2785]: E0106 16:53:29.147959 2785 reflector.go:138] object-"kube-system"/"calico-node-token-bsmvn": Failed to watch *v1.Secret: failed to list *v1.Secret: an error on the server ("") has prevented the request from succeeding (get secrets)
- Jan 6 16:53:30 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.WxsZG7.mount: Succeeded.
- Jan 6 16:53:30 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.WxsZG7.mount: Succeeded.
- Jan 6 16:53:32 kubernetes-master2 kubelet[2785]: E0106 16:53:32.949753 2785 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master2": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master2?timeout=10s": context deadline exceeded
- Jan 6 16:53:32 kubernetes-master2 kubelet[2785]: E0106 16:53:32.950762 2785 kubelet_node_status.go:434] Unable to update node status: update node status exceeds retry count
- Jan 6 16:53:36 kubernetes-master2 systemd[1]: session-1.scope: Succeeded.
- Jan 6 16:53:37 kubernetes-master2 kubelet[2785]: E0106 16:53:37.861644 2785 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node-9xz2c.1657af73c342fd06", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"calico-node-9xz2c", UID:"43469c7e-85dc-4d10-939e-fe93c0455c42", APIVersion:"v1", ResourceVersion:"923", FieldPath:"spec.initContainers{upgrade-ipam}"}, Reason:"Created", Message:"Created container upgrade-ipam", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events": EOF'(may retry after sleeping)
- Jan 6 16:53:38 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.Bp0leN.mount: Succeeded.
- Jan 6 16:53:38 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.Bp0leN.mount: Succeeded.
- Jan 6 16:53:38 kubernetes-master2 kubelet[2785]: I0106 16:53:38.546773 2785 request.go:655] Throttling request took 1.574297836s, request: POST:https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods
- Jan 6 16:53:38 kubernetes-master2 kubelet[2785]: E0106 16:53:38.547936 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": EOF
- Jan 6 16:53:38 kubernetes-master2 kubelet[2785]: I0106 16:53:38.548050 2785 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6fa73a1f06d65e7163b9d94ceee56684106ac4fe0381265698e46714ffb2785d
- Jan 6 16:53:38 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-bb2e13b728d3654779969ef3eda2b7052fc56d64937d9e3aaa3866a89537cc09\x2dinit-merged.mount: Succeeded.
- Jan 6 16:53:38 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-bb2e13b728d3654779969ef3eda2b7052fc56d64937d9e3aaa3866a89537cc09\x2dinit-merged.mount: Succeeded.
- Jan 6 16:53:38 kubernetes-master2 systemd[1109]: var-lib-docker-overlay2-bb2e13b728d3654779969ef3eda2b7052fc56d64937d9e3aaa3866a89537cc09-merged.mount: Succeeded.
- Jan 6 16:53:38 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-bb2e13b728d3654779969ef3eda2b7052fc56d64937d9e3aaa3866a89537cc09-merged.mount: Succeeded.
- Jan 6 16:53:38 kubernetes-master2 containerd[826]: time="2021-01-06T16:53:38.648532673+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cdf645af5905cf8a4615f9e56b567169ab71c4d16559b1dbf46469e2ec2e183b/shim.sock" debug=false pid=5121
- Jan 6 16:53:38 kubernetes-master2 systemd[1]: Started libcontainer container cdf645af5905cf8a4615f9e56b567169ab71c4d16559b1dbf46469e2ec2e183b.
- Jan 6 16:53:40 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.j5esU8.mount: Succeeded.
- Jan 6 16:53:40 kubernetes-master2 systemd[1109]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.j5esU8.mount: Succeeded.
- Jan 6 16:53:40 kubernetes-master2 kubelet[2785]: E0106 16:53:40.947533 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": EOF
- Jan 6 16:53:43 kubernetes-master2 kubelet[2785]: E0106 16:53:43.949948 2785 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master2?timeout=10s": context deadline exceeded
- Jan 6 16:53:44 kubernetes-master2 kubelet[2785]: W0106 16:53:44.747969 2785 status_manager.go:550] Failed to get status for pod "calico-node-9xz2c_kube-system(43469c7e-85dc-4d10-939e-fe93c0455c42)": an error on the server ("") has prevented the request from succeeding (get pods calico-node-9xz2c)
- Jan 6 16:53:45 kubernetes-master2 kubelet[2785]: E0106 16:53:45.148330 2785 csi_plugin.go:293] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: an error on the server ("") has prevented the request from succeeding (get csinodes.storage.k8s.io kubernetes-master2)
- Jan 6 16:53:46 kubernetes-master2 systemd[1]: Stopping User Manager for UID 1000...
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Stopped target Main User Target.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Stopped target Basic System.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Stopped target Paths.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Stopped target Sockets.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Stopped target Timers.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: dbus.socket: Succeeded.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Closed D-Bus User Message Bus Socket.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: dirmngr.socket: Succeeded.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Closed GnuPG network certificate management daemon.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: gpg-agent-browser.socket: Succeeded.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Closed GnuPG cryptographic agent and passphrase cache (access for web browsers).
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: gpg-agent-extra.socket: Succeeded.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: gpg-agent-ssh.socket: Succeeded.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Closed GnuPG cryptographic agent (ssh-agent emulation).
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: gpg-agent.socket: Succeeded.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Closed GnuPG cryptographic agent and passphrase cache.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: pk-debconf-helper.socket: Succeeded.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Closed debconf communication socket.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: snapd.session-agent.socket: Succeeded.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Closed REST API socket for snapd user session agent.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Reached target Shutdown.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: systemd-exit.service: Succeeded.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Finished Exit the Session.
- Jan 6 16:53:46 kubernetes-master2 systemd[1109]: Reached target Exit the Session.
- Jan 6 16:53:46 kubernetes-master2 systemd[1]: [email protected]: Succeeded.
- Jan 6 16:53:46 kubernetes-master2 systemd[1]: Stopped User Manager for UID 1000.
- Jan 6 16:53:46 kubernetes-master2 systemd[1]: Stopping User Runtime Directory /run/user/1000...
- Jan 6 16:53:46 kubernetes-master2 systemd[1]: run-user-1000.mount: Succeeded.
- Jan 6 16:53:46 kubernetes-master2 systemd[1]: [email protected]: Succeeded.
- Jan 6 16:53:46 kubernetes-master2 systemd[1]: Stopped User Runtime Directory /run/user/1000.
- Jan 6 16:53:46 kubernetes-master2 systemd[1]: Removed slice User Slice of UID 1000.
- Jan 6 16:53:47 kubernetes-master2 kubelet[2785]: E0106 16:53:47.863942 2785 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node-9xz2c.1657af73c342fd06", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"calico-node-9xz2c", UID:"43469c7e-85dc-4d10-939e-fe93c0455c42", APIVersion:"v1", ResourceVersion:"923", FieldPath:"spec.initContainers{upgrade-ipam}"}, Reason:"Created", Message:"Created container upgrade-ipam", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events": EOF'(may retry after sleeping)
- Jan 6 16:53:48 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.RrkRcV.mount: Succeeded.
- Jan 6 16:53:48 kubernetes-master2 kubelet[2785]: I0106 16:53:48.746764 2785 request.go:655] Throttling request took 1.398479501s, request: GET:https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-master2
- Jan 6 16:53:50 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.YdjnWF.mount: Succeeded.
- Jan 6 16:53:52 kubernetes-master2 kubelet[2785]: E0106 16:53:52.963279 2785 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master2": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master2?resourceVersion=0&timeout=10s": context deadline exceeded
- Jan 6 16:53:53 kubernetes-master2 kubelet[2785]: I0106 16:53:53.348680 2785 trace.go:205] Trace[105227402]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (06-Jan-2021 16:53:29.486) (total time: 23862ms):
- Jan 6 16:53:53 kubernetes-master2 kubelet[2785]: Trace[105227402]: [23.862422814s] [23.862422814s] END
- Jan 6 16:53:53 kubernetes-master2 kubelet[2785]: E0106 16:53:53.348747 2785 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: an error on the server ("") has prevented the request from succeeding (get runtimeclasses.node.k8s.io)
- Jan 6 16:53:53 kubernetes-master2 kubelet[2785]: I0106 16:53:53.947872 2785 trace.go:205] Trace[1626251587]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/kubelet.go:438 (06-Jan-2021 16:53:29.553) (total time: 24393ms):
- Jan 6 16:53:53 kubernetes-master2 kubelet[2785]: Trace[1626251587]: [24.393940323s] [24.393940323s] END
- Jan 6 16:53:53 kubernetes-master2 kubelet[2785]: E0106 16:53:53.947918 2785 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: an error on the server ("") has prevented the request from succeeding (get nodes)
- Jan 6 16:53:54 kubernetes-master2 kubelet[2785]: I0106 16:53:54.547836 2785 trace.go:205] Trace[2144800110]: "Reflector ListAndWatch" name:object-"kube-system"/"kube-proxy" (06-Jan-2021 16:53:29.780) (total time: 24767ms):
- Jan 6 16:53:54 kubernetes-master2 kubelet[2785]: Trace[2144800110]: [24.767739623s] [24.767739623s] END
- Jan 6 16:53:54 kubernetes-master2 kubelet[2785]: E0106 16:53:54.548571 2785 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: an error on the server ("") has prevented the request from succeeding (get configmaps)
- Jan 6 16:53:54 kubernetes-master2 kubelet[2785]: I0106 16:53:54.747894 2785 trace.go:205] Trace[1072962792]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (06-Jan-2021 16:53:29.984) (total time: 24763ms):
- Jan 6 16:53:54 kubernetes-master2 kubelet[2785]: Trace[1072962792]: [24.763244787s] [24.763244787s] END
- Jan 6 16:53:54 kubernetes-master2 kubelet[2785]: E0106 16:53:54.747923 2785 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
- Jan 6 16:53:55 kubernetes-master2 kubelet[2785]: I0106 16:53:55.148160 2785 trace.go:205] Trace[198819239]: "Reflector ListAndWatch" name:object-"kube-system"/"kubernetes-services-endpoint" (06-Jan-2021 16:53:30.428) (total time: 24719ms):
- Jan 6 16:53:55 kubernetes-master2 kubelet[2785]: Trace[198819239]: [24.719950935s] [24.719950935s] END
- Jan 6 16:53:55 kubernetes-master2 kubelet[2785]: E0106 16:53:55.148207 2785 reflector.go:138] object-"kube-system"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: an error on the server ("") has prevented the request from succeeding (get configmaps)
- Jan 6 16:53:55 kubernetes-master2 kubelet[2785]: I0106 16:53:55.347928 2785 trace.go:205] Trace[1508882268]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (06-Jan-2021 16:53:30.460) (total time: 24887ms):
- Jan 6 16:53:55 kubernetes-master2 kubelet[2785]: Trace[1508882268]: [24.887228795s] [24.887228795s] END
- Jan 6 16:53:55 kubernetes-master2 kubelet[2785]: E0106 16:53:55.347962 2785 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: an error on the server ("") has prevented the request from succeeding (get csidrivers.storage.k8s.io)
- Jan 6 16:53:55 kubernetes-master2 kubelet[2785]: I0106 16:53:55.547759 2785 trace.go:205] Trace[2131862513]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46 (06-Jan-2021 16:53:30.514) (total time: 25032ms):
- Jan 6 16:53:55 kubernetes-master2 kubelet[2785]: Trace[2131862513]: [25.032985901s] [25.032985901s] END
- Jan 6 16:53:55 kubernetes-master2 kubelet[2785]: E0106 16:53:55.547788 2785 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: an error on the server ("") has prevented the request from succeeding (get pods)
- Jan 6 16:53:55 kubernetes-master2 kubelet[2785]: I0106 16:53:55.948043 2785 trace.go:205] Trace[1395348704]: "Reflector ListAndWatch" name:object-"kube-system"/"kube-proxy-token-bszcf" (06-Jan-2021 16:53:30.603) (total time: 25344ms):
- Jan 6 16:53:55 kubernetes-master2 kubelet[2785]: Trace[1395348704]: [25.344327934s] [25.344327934s] END
- Jan 6 16:53:55 kubernetes-master2 kubelet[2785]: E0106 16:53:55.948069 2785 reflector.go:138] object-"kube-system"/"kube-proxy-token-bszcf": Failed to watch *v1.Secret: failed to list *v1.Secret: an error on the server ("") has prevented the request from succeeding (get secrets)
- Jan 6 16:53:56 kubernetes-master2 kubelet[2785]: I0106 16:53:56.348435 2785 trace.go:205] Trace[1736948630]: "Reflector ListAndWatch" name:object-"kube-system"/"calico-node-token-bsmvn" (06-Jan-2021 16:53:30.830) (total time: 25517ms):
- Jan 6 16:53:56 kubernetes-master2 kubelet[2785]: Trace[1736948630]: [25.517879017s] [25.517879017s] END
- Jan 6 16:53:56 kubernetes-master2 kubelet[2785]: E0106 16:53:56.348480 2785 reflector.go:138] object-"kube-system"/"calico-node-token-bsmvn": Failed to watch *v1.Secret: failed to list *v1.Secret: an error on the server ("") has prevented the request from succeeding (get secrets)
- Jan 6 16:53:56 kubernetes-master2 kubelet[2785]: I0106 16:53:56.548001 2785 trace.go:205] Trace[1382522678]: "Reflector ListAndWatch" name:object-"kube-system"/"calico-config" (06-Jan-2021 16:53:31.670) (total time: 24877ms):
- Jan 6 16:53:56 kubernetes-master2 kubelet[2785]: Trace[1382522678]: [24.87723515s] [24.87723515s] END
- Jan 6 16:53:56 kubernetes-master2 kubelet[2785]: E0106 16:53:56.548591 2785 reflector.go:138] object-"kube-system"/"calico-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: an error on the server ("") has prevented the request from succeeding (get configmaps)
- Jan 6 16:53:57 kubernetes-master2 kubelet[2785]: E0106 16:53:57.865445 2785 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node-9xz2c.1657af73c342fd06", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"calico-node-9xz2c", UID:"43469c7e-85dc-4d10-939e-fe93c0455c42", APIVersion:"v1", ResourceVersion:"923", FieldPath:"spec.initContainers{upgrade-ipam}"}, Reason:"Created", Message:"Created container upgrade-ipam", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503e0342706, ext:9425049617, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events": EOF'(may retry after sleeping)
- Jan 6 16:53:58 kubernetes-master2 systemd[1]: run-docker-runtime\x2drunc-moby-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3-runc.okZCeE.mount: Succeeded.
- Jan 6 16:53:59 kubernetes-master2 systemd[1]: docker-cdf645af5905cf8a4615f9e56b567169ab71c4d16559b1dbf46469e2ec2e183b.scope: Succeeded.
- Jan 6 16:53:59 kubernetes-master2 containerd[826]: time="2021-01-06T16:53:59.547277841+01:00" level=info msg="shim reaped" id=cdf645af5905cf8a4615f9e56b567169ab71c4d16559b1dbf46469e2ec2e183b
- Jan 6 16:53:59 kubernetes-master2 dockerd[827]: time="2021-01-06T16:53:59.557533694+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:53:59 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-bb2e13b728d3654779969ef3eda2b7052fc56d64937d9e3aaa3866a89537cc09-merged.mount: Succeeded.
- Jan 6 16:54:00 kubernetes-master2 kubelet[2785]: I0106 16:54:00.472850 2785 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6fa73a1f06d65e7163b9d94ceee56684106ac4fe0381265698e46714ffb2785d
- Jan 6 16:54:00 kubernetes-master2 dockerd[827]: time="2021-01-06T16:54:00.608535463+01:00" level=info msg="Container a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3 failed to exit within 2 seconds of signal 15 - using the force"
- Jan 6 16:54:00 kubernetes-master2 systemd[1]: docker-a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3.scope: Succeeded.
- Jan 6 16:54:00 kubernetes-master2 containerd[826]: time="2021-01-06T16:54:00.713053024+01:00" level=info msg="shim reaped" id=a201a454553480660ef13ad4173fbcba1c022c66d0a62b99faca18fc1d7893c3
- Jan 6 16:54:00 kubernetes-master2 dockerd[827]: time="2021-01-06T16:54:00.723291813+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:54:00 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-e8d352f63c9020d3b5d1c5d606c0b0e96aff63197ae3b37b94fbc7aedfb6bb04-merged.mount: Succeeded.
- Jan 6 16:54:00 kubernetes-master2 kubelet[2785]: E0106 16:54:00.750745 2785 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/pods": EOF
- Jan 6 16:54:00 kubernetes-master2 kubelet[2785]: I0106 16:54:00.751239 2785 scope.go:95] [topologymanager] RemoveContainer - Container ID: cdf645af5905cf8a4615f9e56b567169ab71c4d16559b1dbf46469e2ec2e183b
- Jan 6 16:54:00 kubernetes-master2 kubelet[2785]: E0106 16:54:00.752025 2785 pod_workers.go:191] Error syncing pod 7a2e5860052277dd81c04d93d6d3fde5 ("kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)"
- Jan 6 16:54:00 kubernetes-master2 systemd[1]: var-lib-docker-overlay2-5713d2d9be4e824cd86353c5f486ae14784193cbcf34e95aea9d14d331389863\x2dinit-merged.mount: Succeeded.
- Jan 6 16:54:00 kubernetes-master2 containerd[826]: time="2021-01-06T16:54:00.797720747+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e2b808477ebcb0b2a1ee91f0c48e9b04e6ffdb3d1266b0436f202483efbeac6a/shim.sock" debug=false pid=5484
- Jan 6 16:54:00 kubernetes-master2 systemd[1]: Started libcontainer container e2b808477ebcb0b2a1ee91f0c48e9b04e6ffdb3d1266b0436f202483efbeac6a.
- Jan 6 16:54:00 kubernetes-master2 kubelet[2785]: E0106 16:54:00.963488 2785 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master2?timeout=10s": context deadline exceeded
- Jan 6 16:54:01 kubernetes-master2 kubelet[2785]: I0106 16:54:01.495039 2785 scope.go:95] [topologymanager] RemoveContainer - Container ID: ff89c13a3010a3d8b9aeb8a8b9d2afc9548bf9fdfbf5a9375e5514863a490bde
- Jan 6 16:54:02 kubernetes-master2 kubelet[2785]: I0106 16:54:02.146817 2785 request.go:655] Throttling request took 1.051583598s, request: GET:https://kubernetes-cluster.homelab01.local:8443/api/v1/services?resourceVersion=348
- Jan 6 16:54:02 kubernetes-master2 kubelet[2785]: E0106 16:54:02.975785 2785 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master2": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master2?timeout=10s": context deadline exceeded
- Jan 6 16:54:03 kubernetes-master2 systemd[1]: Created slice User Slice of UID 1000.
- Jan 6 16:54:03 kubernetes-master2 systemd[1]: Starting User Runtime Directory /run/user/1000...
- Jan 6 16:54:03 kubernetes-master2 systemd[1]: Finished User Runtime Directory /run/user/1000.
- Jan 6 16:54:03 kubernetes-master2 systemd[1]: Starting User Manager for UID 1000...
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Reached target Paths.
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Reached target Timers.
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Starting D-Bus User Message Bus Socket.
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Listening on GnuPG network certificate management daemon.
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Listening on GnuPG cryptographic agent and passphrase cache.
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Listening on debconf communication socket.
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Listening on REST API socket for snapd user session agent.
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Listening on D-Bus User Message Bus Socket.
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Reached target Sockets.
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Reached target Basic System.
- Jan 6 16:54:03 kubernetes-master2 systemd[1]: Started User Manager for UID 1000.
- Jan 6 16:54:03 kubernetes-master2 systemd[1]: Started Session 3 of user wojcieh.
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Reached target Main User Target.
- Jan 6 16:54:03 kubernetes-master2 systemd[5552]: Startup finished in 118ms.
- Jan 6 16:54:03 kubernetes-master2 kubelet[2785]: W0106 16:54:03.547805 2785 status_manager.go:550] Failed to get status for pod "kube-apiserver-kubernetes-master2_kube-system(7a2e5860052277dd81c04d93d6d3fde5)": an error on the server ("") has prevented the request from succeeding (get pods kube-apiserver-kubernetes-master2)