- Jan 6 16:42:11 kubernetes-master1 systemd[1]: Stopping System Logging Service...
- Jan 6 16:42:11 kubernetes-master1 rsyslogd: [origin software="rsyslogd" swVersion="8.2001.0" x-pid="811" x-info="https://www.rsyslog.com"] exiting on signal 15.
- Jan 6 16:42:11 kubernetes-master1 systemd[1]: rsyslog.service: Succeeded.
- Jan 6 16:42:11 kubernetes-master1 systemd[1]: Stopped System Logging Service.
- Jan 6 16:42:11 kubernetes-master1 systemd[1]: Starting System Logging Service...
- Jan 6 16:42:11 kubernetes-master1 systemd[1]: Started System Logging Service.
- Jan 6 16:42:11 kubernetes-master1 rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd. [v8.2001.0]
- Jan 6 16:42:11 kubernetes-master1 rsyslogd: rsyslogd's groupid changed to 110
- Jan 6 16:42:11 kubernetes-master1 rsyslogd: rsyslogd's userid changed to 104
- Jan 6 16:42:11 kubernetes-master1 rsyslogd: [origin software="rsyslogd" swVersion="8.2001.0" x-pid="1410" x-info="https://www.rsyslog.com"] start
- Jan 6 16:42:20 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
- Jan 6 16:42:20 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:42:20 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: F0106 16:42:20.771345 1416 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: goroutine 1 [running]:
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000c4001, 0xc0000d6840, 0xfb, 0x14d)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0007f3810, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000657e80, 0x1, 0x1)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000262b00, 0xc0000c6010, 0x3, 0x3)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000262b00, 0xc0000c6010, 0x3, 0x3, 0xc000262b00, 0xc0000c6010)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000262b00, 0x1657aef0bb905cd5, 0x70c9020, 0x409b25)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: main.main()
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: goroutine 19 [chan receive]:
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: goroutine 90 [select]:
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0008c5630)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: goroutine 102 [select]:
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc000476120, 0x1, 0xc0000a40c0)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000aa0201, 0xc0000a40c0)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:42:20 kubernetes-master1 kubelet[1416]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:42:20 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:42:20 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
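The fatal line above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the actual cause; the systemd restarts that follow every 10 seconds cannot succeed until that file exists. A minimal diagnostic sketch follows, assuming a kubeadm-provisioned control-plane node; the commands are standard systemd/kubeadm tooling and are not taken from this log:

# Confirm the kubelet config really is missing and inspect the unit state
ls -l /var/lib/kubelet/config.yaml
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 40

# On kubeadm-managed nodes this file is normally written by "kubeadm init"
# (or "kubeadm join" on workers). One common recovery path is to re-run the
# kubelet-start phase, which regenerates /var/lib/kubelet/config.yaml:
kubeadm init phase kubelet-start

# Once the file exists, restart the unit and the crash loop should stop
systemctl restart kubelet

The same crash and stack trace repeat verbatim in the blocks below (restart counters 5 through 12); only timestamps, PIDs, and goroutine IDs change.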
- Jan 6 16:42:30 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
- Jan 6 16:42:30 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:42:30 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: F0106 16:42:31.022540 1444 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: goroutine 1 [running]:
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136840, 0xfb, 0x14d)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0004d6ee0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0002f46b0, 0x1, 0x1)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0009badc0, 0xc000124010, 0x3, 0x3)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0009badc0, 0xc000124010, 0x3, 0x3, 0xc0009badc0, 0xc000124010)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0009badc0, 0x1657aef31e94ddd0, 0x70c9020, 0x409b25)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: main.main()
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: goroutine 19 [chan receive]:
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: goroutine 87 [select]:
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0004afe00)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: goroutine 48 [select]:
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc0008b8480, 0x1, 0xc0001000c0)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0x1, 0xc0001000c0)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:42:31 kubernetes-master1 kubelet[1444]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:42:31 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:42:31 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:42:41 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
- Jan 6 16:42:41 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:42:41 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: F0106 16:42:41.276141 1471 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: goroutine 1 [running]:
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000cc001, 0xc0000de840, 0xfb, 0x14d)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0008e2d90, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc00057de80, 0x1, 0x1)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc00027f340, 0xc0000ce010, 0x3, 0x3)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00027f340, 0xc0000ce010, 0x3, 0x3, 0xc00027f340, 0xc0000ce010)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00027f340, 0x1657aef581bf719c, 0x70c9020, 0x409b25)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: main.main()
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: goroutine 19 [chan receive]:
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: goroutine 90 [select]:
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc00037c990, 0x1, 0xc0000b00c0)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000325901, 0xc0000b00c0)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: goroutine 79 [select]:
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000813590)
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:42:41 kubernetes-master1 kubelet[1471]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:42:41 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:42:41 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:42:51 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
- Jan 6 16:42:51 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:42:51 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: F0106 16:42:51.523133 1491 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: goroutine 1 [running]:
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136840, 0xfb, 0x14d)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0008c9340, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0003d1de0, 0x1, 0x1)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000416dc0, 0xc000124010, 0x3, 0x3)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000416dc0, 0xc000124010, 0x3, 0x3, 0xc000416dc0, 0xc000124010)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000416dc0, 0x1657aef7e4839f8c, 0x70c9020, 0x409b25)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: main.main()
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: goroutine 19 [chan receive]:
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: goroutine 102 [select]:
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc000a91650, 0x1, 0xc0001000c0)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000934201, 0xc0001000c0)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: goroutine 92 [select]:
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00007f450)
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:42:51 kubernetes-master1 kubelet[1491]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:42:51 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:42:51 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:43:01 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
- Jan 6 16:43:01 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:01 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: F0106 16:43:01.758678 1517 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: goroutine 1 [running]:
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000c2001, 0xc0000d6840, 0xfb, 0x14d)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000af71f0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0006413e0, 0x1, 0x1)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000677b80, 0xc0000c4010, 0x3, 0x3)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000677b80, 0xc0000c4010, 0x3, 0x3, 0xc000677b80, 0xc0000c4010)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000677b80, 0x1657aefa469757c5, 0x70c9020, 0x409b25)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: main.main()
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: goroutine 19 [chan receive]:
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: goroutine 94 [select]:
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0000d5590)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: goroutine 106 [select]:
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc000700390, 0x1, 0xc0000a00c0)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc0006a9c01, 0xc0000a00c0)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:43:01 kubernetes-master1 kubelet[1517]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:43:01 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:43:01 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:43:11 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
- Jan 6 16:43:11 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:11 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: F0106 16:43:12.004520 1554 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: goroutine 1 [running]:
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc0000be840, 0xfb, 0x14d)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0008b33b0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000490820, 0x1, 0x1)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0008cf8c0, 0xc00004e090, 0x3, 0x3)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0008cf8c0, 0xc00004e090, 0x3, 0x3, 0xc0008cf8c0, 0xc00004e090)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0008cf8c0, 0x1657aefca94c33c9, 0x70c9020, 0x409b25)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: main.main()
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: goroutine 6 [chan receive]:
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: goroutine 60 [select]:
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00076f8b0)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: goroutine 103 [select]:
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc0008bc240, 0x1, 0xc0000a00c0)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000159501, 0xc0000a00c0)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:43:12 kubernetes-master1 kubelet[1554]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:43:12 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:43:12 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:43:22 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
- Jan 6 16:43:22 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:22 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: F0106 16:43:22.272416 1580 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: goroutine 1 [running]:
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc0000c6840, 0xfb, 0x14d)
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000b8b5e0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000b93d80, 0x1, 0x1)
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc00043b600, 0xc00004e090, 0x3, 0x3)
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00043b600, 0xc00004e090, 0x3, 0x3, 0xc00043b600, 0xc00004e090)
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00043b600, 0x1657aeff0d458312, 0x70c9020, 0x409b25)
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: main.main()
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: goroutine 6 [chan receive]:
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: goroutine 91 [select]:
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00046a8c0)
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: goroutine 106 [runnable]:
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:80
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:43:22 kubernetes-master1 kubelet[1580]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:43:22 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:43:22 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:43:32 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
- Jan 6 16:43:32 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:32 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: F0106 16:43:32.523409 1606 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: goroutine 1 [running]:
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc0000c6840, 0xfb, 0x14d)
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc00028aee0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0004779c0, 0x1, 0x1)
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0006bcdc0, 0xc00004e090, 0x3, 0x3)
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0006bcdc0, 0xc00004e090, 0x3, 0x3, 0xc0006bcdc0, 0xc00004e090)
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0006bcdc0, 0x1657af0170529ce2, 0x70c9020, 0x409b25)
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: main.main()
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: goroutine 6 [chan receive]:
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: goroutine 93 [select]:
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0001e2410)
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: goroutine 73 [runnable]:
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:80
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:43:32 kubernetes-master1 kubelet[1606]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:43:32 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:43:32 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:43:42 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
- Jan 6 16:43:42 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:42 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: F0106 16:43:42.761699 1631 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: goroutine 1 [running]:
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000c4001, 0xc0000d6b00, 0xfb, 0x14d)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000ad38f0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000347c80, 0x1, 0x1)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0000f9340, 0xc0000c6010, 0x3, 0x3)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0000f9340, 0xc0000c6010, 0x3, 0x3, 0xc0000f9340, 0xc0000c6010)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000f9340, 0x1657af03d2918f1e, 0x70c9020, 0x409b25)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: main.main()
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: goroutine 19 [chan receive]:
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: goroutine 89 [select]:
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0000a1e00)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: goroutine 104 [select]:
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc0006bc030, 0x1, 0xc0000a20c0)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc0008dbb01, 0xc0000a20c0)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:43:42 kubernetes-master1 kubelet[1631]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:43:42 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:43:42 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:43:52 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
- Jan 6 16:43:52 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:52 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: F0106 16:43:53.017584 1652 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: goroutine 1 [running]:
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136840, 0xfb, 0x14d)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc00064bea0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000649d70, 0x1, 0x1)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc00040b600, 0xc000124010, 0x3, 0x3)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00040b600, 0xc000124010, 0x3, 0x3, 0xc00040b600, 0xc000124010)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00040b600, 0x1657af0635ddab58, 0x70c9020, 0x409b25)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: main.main()
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: goroutine 19 [chan receive]:
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: goroutine 84 [select]:
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000893720)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: goroutine 99 [select]:
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc00081b710, 0x1, 0xc0001000c0)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc0000ed801, 0xc0001000c0)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:43:53 kubernetes-master1 kubelet[1652]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:43:53 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:43:53 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:44:03 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
- Jan 6 16:44:03 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:03 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: F0106 16:44:03.260062 1672 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: goroutine 1 [running]:
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000c4001, 0xc0000d6840, 0xfb, 0x14d)
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000bc7c00, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000bd3d10, 0x1, 0x1)
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000516840, 0xc0000c6010, 0x3, 0x3)
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000516840, 0xc0000c6010, 0x3, 0x3, 0xc000516840, 0xc0000c6010)
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000516840, 0x1657af089853845b, 0x70c9020, 0x409b25)
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: main.main()
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: goroutine 19 [chan receive]:
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: goroutine 73 [select]:
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0002569b0)
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: goroutine 104 [runnable]:
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:80
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:44:03 kubernetes-master1 kubelet[1672]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:44:03 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:44:03 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:44:13 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15.
- Jan 6 16:44:13 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:13 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: F0106 16:44:13.528157 1698 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: goroutine 1 [running]:
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136840, 0xfb, 0x14d)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0009b8850, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000528360, 0x1, 0x1)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0001598c0, 0xc000124010, 0x3, 0x3)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0001598c0, 0xc000124010, 0x3, 0x3, 0xc0001598c0, 0xc000124010)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0001598c0, 0x1657af0afc6434df, 0x70c9020, 0x409b25)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: main.main()
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: goroutine 19 [chan receive]:
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: goroutine 88 [select]:
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000388eb0)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: goroutine 102 [select]:
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc0008f7e60, 0x1, 0xc0001000c0)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000890f01, 0xc0001000c0)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:44:13 kubernetes-master1 kubelet[1698]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:44:13 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:44:13 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:44:23 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16.
- Jan 6 16:44:23 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:23 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: F0106 16:44:23.770983 1724 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: goroutine 1 [running]:
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc0000be840, 0xfb, 0x14d)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000aca620, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0005f7390, 0x1, 0x1)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000267080, 0xc00004e090, 0x3, 0x3)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000267080, 0xc00004e090, 0x3, 0x3, 0xc000267080, 0xc00004e090)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000267080, 0x1657af0d5ee55889, 0x70c9020, 0x409b25)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: main.main()
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: goroutine 6 [chan receive]:
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: goroutine 100 [select]:
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc0002ee840, 0x1, 0xc0000a00c0)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000374501, 0xc0000a00c0)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: goroutine 86 [select]:
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0002054f0)
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:44:23 kubernetes-master1 kubelet[1724]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:44:23 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:44:23 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:44:33 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17.
- Jan 6 16:44:33 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:33 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: F0106 16:44:34.024090 1749 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: goroutine 1 [running]:
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc0000be840, 0xfb, 0x14d)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0008ccc40, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0004dbed0, 0x1, 0x1)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0000df8c0, 0xc00004e090, 0x3, 0x3)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0000df8c0, 0xc00004e090, 0x3, 0x3, 0xc0000df8c0, 0xc00004e090)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000df8c0, 0x1657af0fc1fea38d, 0x70c9020, 0x409b25)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: main.main()
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: goroutine 6 [chan receive]:
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: goroutine 74 [select]:
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000205590)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: goroutine 81 [select]:
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc000a88840, 0x1, 0xc0000a00c0)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000a8c001, 0xc0000a00c0)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:44:34 kubernetes-master1 kubelet[1749]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:44:34 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:44:34 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:44:44 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18.
- Jan 6 16:44:44 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:44 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: F0106 16:44:44.268464 1774 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: goroutine 1 [running]:
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136840, 0xfb, 0x14d)
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000acb5e0, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0002be0b0, 0x1, 0x1)
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0003c0dc0, 0xc000124010, 0x3, 0x3)
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0003c0dc0, 0xc000124010, 0x3, 0x3, 0xc0003c0dc0, 0xc000124010)
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0003c0dc0, 0x1657af1224902c9f, 0x70c9020, 0x409b25)
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: main.main()
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: goroutine 19 [chan receive]:
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: goroutine 77 [select]:
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00039ed70)
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: goroutine 89 [runnable]:
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:80
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:44:44 kubernetes-master1 kubelet[1774]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:44:44 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:44:44 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:44:54 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19.
- Jan 6 16:44:54 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:54 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: F0106 16:44:54.543879 1815 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: goroutine 1 [running]:
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136840, 0xfb, 0x14d)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc0005c9f10, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc00005e930, 0x1, 0x1)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc00043f600, 0xc000124010, 0x3, 0x3)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00043f600, 0xc000124010, 0x3, 0x3, 0xc00043f600, 0xc000124010)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00043f600, 0x1657af14891f3fe2, 0x70c9020, 0x409b25)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: main.main()
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: goroutine 19 [chan receive]:
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: goroutine 91 [select]:
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0000c81e0)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: goroutine 106 [select]:
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc0007fb860, 0x1, 0xc0001000c0)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000b12601, 0xc0001000c0)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:44:54 kubernetes-master1 kubelet[1815]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:44:54 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:44:54 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:45:04 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
- Jan 6 16:45:04 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:04 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: F0106 16:45:04.769414 1840 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: goroutine 1 [running]:
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000122001, 0xc000136840, 0xfb, 0x14d)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70c9460, 0xc000000003, 0x0, 0x0, 0xc000bc7e30, 0x6f34162, 0x9, 0xc6, 0x411b00)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70c9460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000432830, 0x1, 0x1)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000287340, 0xc000124010, 0x3, 0x3)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xe5a
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000287340, 0xc000124010, 0x3, 0x3, 0xc000287340, 0xc000124010)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000287340, 0x1657af16ea9b8e70, 0x70c9020, 0x409b25)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: main.main()
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: goroutine 19 [chan receive]:
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70c9460)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: goroutine 75 [select]:
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00007ec80)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: goroutine 104 [select]:
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a721f0, 0x4f0bde0, 0xc000a48de0, 0x1, 0xc0001000c0)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a721f0, 0x12a05f200, 0x0, 0xc000a78601, 0xc0001000c0)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a721f0, 0x12a05f200)
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
- Jan 6 16:45:04 kubernetes-master1 kubelet[1840]: #011/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
- Jan 6 16:45:04 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:45:04 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:45:14 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21.
- Jan 6 16:45:14 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:14 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:15 kubernetes-master1 kubelet[1866]: F0106 16:45:15.020490 1866 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- [kubelet[1866] goroutine stack trace omitted: identical to the kubelet[1840] trace above apart from goroutine numbers and memory addresses]
- Jan 6 16:45:15 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:45:15 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:45:25 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22.
- Jan 6 16:45:25 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:25 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:25 kubernetes-master1 kubelet[1892]: F0106 16:45:25.251025 1892 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- [kubelet[1892] goroutine stack trace omitted: identical to the kubelet[1840] trace above apart from goroutine numbers and memory addresses]
- Jan 6 16:45:25 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:45:25 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:45:35 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 23.
- Jan 6 16:45:35 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:35 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:35 kubernetes-master1 kubelet[1919]: F0106 16:45:35.514766 1919 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- [kubelet[1919] goroutine stack trace omitted: identical to the kubelet[1840] trace above apart from goroutine numbers and memory addresses]
- Jan 6 16:45:35 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:45:35 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:45:45 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24.
- Jan 6 16:45:45 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:45 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:45 kubernetes-master1 kubelet[1986]: F0106 16:45:45.764224 1986 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- [kubelet[1986] goroutine stack trace omitted: identical to the kubelet[1840] trace above apart from goroutine numbers and memory addresses]
- Jan 6 16:45:45 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:45:45 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:45:55 kubernetes-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 25.
- Jan 6 16:45:55 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:55 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:45:56 kubernetes-master1 kubelet[2022]: F0106 16:45:56.026894 2022 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
- [kubelet[2022] goroutine stack trace omitted: identical to the kubelet[1840] trace above apart from goroutine numbers and memory addresses]
- Jan 6 16:45:56 kubernetes-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
- Jan 6 16:45:56 kubernetes-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Jan 6 16:45:56 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
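
The block above is a textbook kubelet crash loop: every start fails immediately with "failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory", systemd restarts the unit every 10 seconds, and the restart counter keeps climbing. That config file is normally written by kubeadm (kubeadm init on a control-plane node, kubeadm join on a worker), so this pattern is expected on a node where the kubelet package is installed and enabled but kubeadm has not run yet, or right after a kubeadm reset. A minimal check-and-recover sketch for a control-plane node; the exact kubeadm flags are illustrative and not taken from this log:

    # confirm the file the kubelet is complaining about really is missing
    ls -l /var/lib/kubelet/config.yaml

    # on a control-plane node, kubeadm init generates /var/lib/kubelet/config.yaml
    # (plus the /etc/kubernetes/* files), after which the next automatic restart
    # of kubelet.service succeeds on its own
    sudo kubeadm init \
        --control-plane-endpoint "kubernetes-cluster.homelab01.local:8443" \
        --upload-certs

Everything from 16:46:51 onward below is consistent with exactly that having happened: the unit files are re-read, the kubelet finally comes up with a configuration, and it starts bootstrapping its client certificate against the cluster endpoint.
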
- Jan 6 16:46:51 kubernetes-master1 systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:46:51 kubernetes-master1 systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:46:51 kubernetes-master1 systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:46:51 kubernetes-master1 systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:46:56 kubernetes-master1 systemd[1]: Reloading.
- Jan 6 16:46:56 kubernetes-master1 systemd[1]: /lib/systemd/system/dbus.socket:5: ListenStream= references a path below legacy directory /var/run/, updating /var/run/dbus/system_bus_socket → /run/dbus/system_bus_socket; please update the unit file accordingly.
- Jan 6 16:46:56 kubernetes-master1 systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:46:56 kubernetes-master1 systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:46:56 kubernetes-master1 systemd[1]: /lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
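
The "marked world-inaccessible" messages are informational only: systemd is noting that /lib/systemd/system/kubelet.service and the 10-kubeadm.conf drop-in are not world-readable, and it explicitly proceeds anyway. If the noise bothers you, making the unit files mode 644 silences it (a trivial sketch, not something this log shows being done):

    sudo chmod 644 /lib/systemd/system/kubelet.service \
        /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl daemon-reload

The ListenStream notices for dbus.socket and docker.socket are likewise harmless; systemd transparently rewrites the legacy /var/run paths to /run.
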
- Jan 6 16:46:56 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:46:57 kubernetes-master1 systemd[1]: Started Kubernetes systemd probe.
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.355751 2333 server.go:416] Version: v1.20.1
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.356791 2333 server.go:837] Client rotation is on, will bootstrap in background
- Jan 6 16:46:57 kubernetes-master1 systemd[1]: run-rdb5daffc9c38417f94eb04503e527bce.scope: Succeeded.
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.369327 2333 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: E0106 16:46:57.382251 2333 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://kubernetes-cluster.homelab01.local:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": EOF
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.462078 2333 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.462580 2333 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.462645 2333 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.463039 2333 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.463058 2333 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.463065 2333 container_manager_linux.go:315] Creating device plugin manager: true
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: W0106 16:46:57.463236 2333 kubelet.go:297] Using dockershim is deprecated, please consider using a full-fledged CRI implementation
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.464032 2333 client.go:77] Connecting to docker on unix:///var/run/docker.sock
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.464050 2333 client.go:94] Start docker client with request timeout=2m0s
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: W0106 16:46:57.475221 2333 docker_service.go:559] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.475256 2333 docker_service.go:240] Hairpin mode set to "hairpin-veth"
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: W0106 16:46:57.475382 2333 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: W0106 16:46:57.477976 2333 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.478064 2333 docker_service.go:255] Docker cri networking managed by cni
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: W0106 16:46:57.478111 2333 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.488248 2333 docker_service.go:260] Docker Info: &{ID:ZJPZ:5PX5:H3KS:7XMT:AT4K:IYXO:V2DE:K6Q5:RRDJ:46NY:7ZZV:U2TQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2021-01-06T16:46:57.479333116+01:00 LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.4.0-59-generic OperatingSystem:Ubuntu 20.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00065bf80 NCPU:2 MemTotal:4127334400 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes-master1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.11 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support]}
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.488348 2333 docker_service.go:273] Setting cgroupDriver to systemd
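
Worth noting here: the kubelet's cgroup driver (CgroupDriver:systemd in the Node Config dump above) matches what Docker reports (CgroupDriver:systemd in the Docker Info line), which is required; a systemd/cgroupfs mismatch is a classic reason for pods failing to start on a kubeadm node. A quick way to check the daemon side with the standard docker CLI:

    docker info --format '{{.CgroupDriver}}'    # should print: systemd

If it printed cgroupfs instead, the usual fix is "exec-opts": ["native.cgroupdriver=systemd"] in /etc/docker/daemon.json followed by a Docker restart.
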
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.506930 2333 remote_runtime.go:62] parsed scheme: ""
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.507222 2333 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.507497 2333 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.507707 2333 clientconn.go:948] ClientConn switching balancer to "pick_first"
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.507923 2333 remote_image.go:50] parsed scheme: ""
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.508094 2333 remote_image.go:50] scheme "" not registered, fallback to default scheme
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.508630 2333 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.508793 2333 clientconn.go:948] ClientConn switching balancer to "pick_first"
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.508980 2333 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.510634 2333 kubelet.go:273] Watching apiserver
- Jan 6 16:46:57 kubernetes-master1 kubelet[2333]: I0106 16:46:57.542206 2333 kuberuntime_manager.go:216] Container runtime docker initialized, version: 19.03.11, apiVersion: 1.40.0
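
Two warnings in this startup burst deserve a note. "Using dockershim is deprecated, please consider using a full-fledged CRI implementation" is accurate: dockershim was removed entirely in Kubernetes 1.24, so a docker-backed node like this one eventually has to migrate to a CRI runtime such as containerd or CRI-O. The repeated "Unable to update cni config: no networks found in /etc/cni/net.d" lines simply mean no CNI plugin has been deployed yet; the directory really is empty at this point, which is easy to confirm:

    ls -la /etc/cni/net.d    # empty until a CNI add-on (Calico, Flannel, ...) is installed

Both warnings are expected right after kubeadm init; the CNI one is picked up again further down where the kubelet reports the runtime network as not ready.
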
- Jan 6 16:46:59 kubernetes-master1 kubelet[2333]: E0106 16:46:59.540824 2333 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://kubernetes-cluster.homelab01.local:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": EOF
- Jan 6 16:47:02 kubernetes-master1 kubelet[2333]: W0106 16:47:02.478575 2333 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.670423 2333 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://kubernetes-cluster.homelab01.local:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": EOF
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.879354 2333 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: #011For verbose messaging see aws.Config.CredentialsChainVerboseErrors
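
The repeated "Failed while requesting a signed certificate from the master ... EOF" errors are the kubelet's client-certificate bootstrap failing because nothing is answering properly at https://kubernetes-cluster.homelab01.local:8443 yet. An EOF (rather than "connection refused") usually means the TCP connection was accepted, for example by a load balancer fronting the control plane, but the kube-apiserver behind it is not serving yet. During a kubeadm bootstrap this is normal: the kubelet has to come up first so it can launch the kube-apiserver static pod, and the CSR retries succeed once the apiserver becomes reachable. A simple probe while waiting (generic tooling, not taken from this log):

    curl -k https://kubernetes-cluster.homelab01.local:8443/healthz

The "while getting AWS credentials NoCredentialProviders" warning is just the in-tree AWS/ECR credential provider probing for credentials and finding none, which is expected on a homelab node and safe to ignore.
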
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.885044 2333 server.go:1176] Started kubelet
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.885141 2333 kubelet.go:1271] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.887787 2333 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.887862 2333 server.go:148] Starting to listen on 0.0.0.0:10250
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.895660 2333 server.go:409] Adding debug handlers to kubelet server.
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.897012 2333 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes-master1.1657af32a674ab09", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-master1", UID:"kubernetes-master1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff594bdf4b2c509, ext:6973746040, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff594bdf4b2c509, ext:6973746040, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/default/events": EOF'(may retry after sleeping)
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.898572 2333 volume_manager.go:271] Starting Kubelet Volume Manager
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.899628 2333 desired_state_of_world_populator.go:142] Desired state populator starts to run
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.915808 2333 remote_runtime.go:332] ContainerStatus "e1c469a598290ddba8ea2f1f0d8ab75bac90a08a5c0d4f488ec3d1ae931c5860" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: e1c469a598290ddba8ea2f1f0d8ab75bac90a08a5c0d4f488ec3d1ae931c5860
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.915871 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "e1c469a598290ddba8ea2f1f0d8ab75bac90a08a5c0d4f488ec3d1ae931c5860": rpc error: code = Unknown desc = Error: No such container: e1c469a598290ddba8ea2f1f0d8ab75bac90a08a5c0d4f488ec3d1ae931c5860
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.919412 2333 remote_runtime.go:332] ContainerStatus "60ecf1450a989bf5745d61cdcaed3d080f2780592ecef1321628c0629000eef5" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 60ecf1450a989bf5745d61cdcaed3d080f2780592ecef1321628c0629000eef5
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.919441 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "60ecf1450a989bf5745d61cdcaed3d080f2780592ecef1321628c0629000eef5": rpc error: code = Unknown desc = Error: No such container: 60ecf1450a989bf5745d61cdcaed3d080f2780592ecef1321628c0629000eef5
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.921639 2333 remote_runtime.go:332] ContainerStatus "91b91babcfdb7b5f6249f7b0e1f36e66021e500de95e18c3922bff372519d20d" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 91b91babcfdb7b5f6249f7b0e1f36e66021e500de95e18c3922bff372519d20d
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.922270 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "91b91babcfdb7b5f6249f7b0e1f36e66021e500de95e18c3922bff372519d20d": rpc error: code = Unknown desc = Error: No such container: 91b91babcfdb7b5f6249f7b0e1f36e66021e500de95e18c3922bff372519d20d
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.924713 2333 remote_runtime.go:332] ContainerStatus "6fd23be4054bdd4ff27fa3f3507cf4369579e1d39b693a6f82945eb2ea55da52" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 6fd23be4054bdd4ff27fa3f3507cf4369579e1d39b693a6f82945eb2ea55da52
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.925058 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "6fd23be4054bdd4ff27fa3f3507cf4369579e1d39b693a6f82945eb2ea55da52": rpc error: code = Unknown desc = Error: No such container: 6fd23be4054bdd4ff27fa3f3507cf4369579e1d39b693a6f82945eb2ea55da52
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.929940 2333 remote_runtime.go:332] ContainerStatus "cc70527d7f77d9e95b216b15c0cbb85425bd542181d905dc2aeaefe02b8e1620" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: cc70527d7f77d9e95b216b15c0cbb85425bd542181d905dc2aeaefe02b8e1620
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.930292 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "cc70527d7f77d9e95b216b15c0cbb85425bd542181d905dc2aeaefe02b8e1620": rpc error: code = Unknown desc = Error: No such container: cc70527d7f77d9e95b216b15c0cbb85425bd542181d905dc2aeaefe02b8e1620
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.931545 2333 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.935195 2333 remote_runtime.go:332] ContainerStatus "b376fc22ade4d39f018ca397dc8f374bf64ab551d392f15d23b59974e5e56c52" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: b376fc22ade4d39f018ca397dc8f374bf64ab551d392f15d23b59974e5e56c52
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.935762 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "b376fc22ade4d39f018ca397dc8f374bf64ab551d392f15d23b59974e5e56c52": rpc error: code = Unknown desc = Error: No such container: b376fc22ade4d39f018ca397dc8f374bf64ab551d392f15d23b59974e5e56c52
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.938947 2333 remote_runtime.go:332] ContainerStatus "de422cfb09fe5a05f468b01e1d26af2741dfee01bda5890b0bed7bda0fb122f4" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: de422cfb09fe5a05f468b01e1d26af2741dfee01bda5890b0bed7bda0fb122f4
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.939329 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "de422cfb09fe5a05f468b01e1d26af2741dfee01bda5890b0bed7bda0fb122f4": rpc error: code = Unknown desc = Error: No such container: de422cfb09fe5a05f468b01e1d26af2741dfee01bda5890b0bed7bda0fb122f4
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.941651 2333 remote_runtime.go:332] ContainerStatus "0a466aa5e8eb052b14cfaede0c1e40fe9caef3c97834f7f89948f3dd8937f852" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 0a466aa5e8eb052b14cfaede0c1e40fe9caef3c97834f7f89948f3dd8937f852
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.942006 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "0a466aa5e8eb052b14cfaede0c1e40fe9caef3c97834f7f89948f3dd8937f852": rpc error: code = Unknown desc = Error: No such container: 0a466aa5e8eb052b14cfaede0c1e40fe9caef3c97834f7f89948f3dd8937f852
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.944130 2333 remote_runtime.go:332] ContainerStatus "8afe884211a97d335e8c57e4dfbefddaa952344f9655ae2832fc94d6d20314df" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 8afe884211a97d335e8c57e4dfbefddaa952344f9655ae2832fc94d6d20314df
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.945270 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "8afe884211a97d335e8c57e4dfbefddaa952344f9655ae2832fc94d6d20314df": rpc error: code = Unknown desc = Error: No such container: 8afe884211a97d335e8c57e4dfbefddaa952344f9655ae2832fc94d6d20314df
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.948337 2333 remote_runtime.go:332] ContainerStatus "4c908e97cd470bc2b792fd0f510ebbfd4c8a835687232f3fa15f1ec174b8e2d4" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 4c908e97cd470bc2b792fd0f510ebbfd4c8a835687232f3fa15f1ec174b8e2d4
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.948685 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "4c908e97cd470bc2b792fd0f510ebbfd4c8a835687232f3fa15f1ec174b8e2d4": rpc error: code = Unknown desc = Error: No such container: 4c908e97cd470bc2b792fd0f510ebbfd4c8a835687232f3fa15f1ec174b8e2d4
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.950446 2333 remote_runtime.go:332] ContainerStatus "fab6eea8a814d588aa7e31e7563ec4607f459bda8e62bd3c88cd40db3492620c" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: fab6eea8a814d588aa7e31e7563ec4607f459bda8e62bd3c88cd40db3492620c
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.950839 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "fab6eea8a814d588aa7e31e7563ec4607f459bda8e62bd3c88cd40db3492620c": rpc error: code = Unknown desc = Error: No such container: fab6eea8a814d588aa7e31e7563ec4607f459bda8e62bd3c88cd40db3492620c
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.952747 2333 remote_runtime.go:332] ContainerStatus "62fc9fe3b6422c9c043d6c43a0cf86571cb66dfca8176740500929ee12219f3d" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 62fc9fe3b6422c9c043d6c43a0cf86571cb66dfca8176740500929ee12219f3d
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.953073 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "62fc9fe3b6422c9c043d6c43a0cf86571cb66dfca8176740500929ee12219f3d": rpc error: code = Unknown desc = Error: No such container: 62fc9fe3b6422c9c043d6c43a0cf86571cb66dfca8176740500929ee12219f3d
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.955747 2333 remote_runtime.go:332] ContainerStatus "80010fca8b93bfd812c146768269adf5bc9c973ac2c37996f56cd6e3bfc16038" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 80010fca8b93bfd812c146768269adf5bc9c973ac2c37996f56cd6e3bfc16038
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.956369 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "80010fca8b93bfd812c146768269adf5bc9c973ac2c37996f56cd6e3bfc16038": rpc error: code = Unknown desc = Error: No such container: 80010fca8b93bfd812c146768269adf5bc9c973ac2c37996f56cd6e3bfc16038
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.957764 2333 client.go:86] parsed scheme: "unix"
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.958427 2333 client.go:86] scheme "unix" not registered, fallback to default scheme
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.958468 2333 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.958481 2333 clientconn.go:948] ClientConn switching balancer to "pick_first"
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.960275 2333 remote_runtime.go:332] ContainerStatus "1e5bf5b9dde349c4bfcb156ae1378fc88831e30864b29b85147eed5206de285b" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 1e5bf5b9dde349c4bfcb156ae1378fc88831e30864b29b85147eed5206de285b
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.960305 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "1e5bf5b9dde349c4bfcb156ae1378fc88831e30864b29b85147eed5206de285b": rpc error: code = Unknown desc = Error: No such container: 1e5bf5b9dde349c4bfcb156ae1378fc88831e30864b29b85147eed5206de285b
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.961791 2333 remote_runtime.go:332] ContainerStatus "77288f6dfdbd3b8619cbdc75adc2fc2cb9ff5795e9a2878ca536d6f405423898" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 77288f6dfdbd3b8619cbdc75adc2fc2cb9ff5795e9a2878ca536d6f405423898
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.962078 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "77288f6dfdbd3b8619cbdc75adc2fc2cb9ff5795e9a2878ca536d6f405423898": rpc error: code = Unknown desc = Error: No such container: 77288f6dfdbd3b8619cbdc75adc2fc2cb9ff5795e9a2878ca536d6f405423898
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: E0106 16:47:03.963542 2333 remote_runtime.go:332] ContainerStatus "ff0f7ec16eb0830882004486b37657465da156f84f496f492940f03acd77963e" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: ff0f7ec16eb0830882004486b37657465da156f84f496f492940f03acd77963e
- Jan 6 16:47:03 kubernetes-master1 kubelet[2333]: I0106 16:47:03.963774 2333 kuberuntime_gc.go:347] Error getting ContainerStatus for containerID "ff0f7ec16eb0830882004486b37657465da156f84f496f492940f03acd77963e": rpc error: code = Unknown desc = Error: No such container: ff0f7ec16eb0830882004486b37657465da156f84f496f492940f03acd77963e
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.028400 2333 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.028710 2333 status_manager.go:158] Starting to sync pod status with apiserver
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.028913 2333 kubelet.go:1799] Starting kubelet main sync loop.
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.029159 2333 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.033956 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.085929 2333 kubelet_node_status.go:71] Attempting to register node kubernetes-master1
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.087048 2333 kubelet_node_status.go:93] Unable to register node "kubernetes-master1" with API server: Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes": EOF
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.129484 2333 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.135937 2333 cpu_manager.go:193] [cpumanager] starting with none policy
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.136345 2333 cpu_manager.go:194] [cpumanager] reconciling every 10s
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.136380 2333 state_mem.go:36] [cpumanager] initializing new in-memory state store
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.137003 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.138437 2333 policy_none.go:43] [cpumanager] none policy: Start
- Jan 6 16:47:04 kubernetes-master1 systemd[1]: Created slice libcontainer container kubepods.slice.
- Jan 6 16:47:04 kubernetes-master1 systemd[1]: Created slice libcontainer container kubepods-burstable.slice.
- Jan 6 16:47:04 kubernetes-master1 systemd[1]: Created slice libcontainer container kubepods-besteffort.slice.
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: W0106 16:47:04.189235 2333 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.189785 2333 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "kubernetes-master1" not found
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.190152 2333 plugin_manager.go:114] Starting Kubelet Plugin Manager
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.237639 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.321622 2333 kubelet_node_status.go:71] Attempting to register node kubernetes-master1
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.322599 2333 kubelet_node_status.go:93] Unable to register node "kubernetes-master1" with API server: Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes": EOF
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.330214 2333 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.338092 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.360041 2333 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.396228 2333 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:47:04 kubernetes-master1 systemd[1]: Created slice libcontainer container kubepods-burstable-pod66b0dec8ac0c51772012999d6a94770d.slice.
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.436792 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/2d932d5f886b8cfbaeec38eaf545a088-ca-certs") pod "kube-apiserver-kubernetes-master1" (UID: "2d932d5f886b8cfbaeec38eaf545a088")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.438211 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:04 kubernetes-master1 systemd[1]: Created slice libcontainer container kubepods-burstable-pod2d932d5f886b8cfbaeec38eaf545a088.slice.
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.463120 2333 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:47:04 kubernetes-master1 systemd[1]: Created slice libcontainer container kubepods-burstable-podc61f75a63a6b7c302751a6cc76c53045.slice.
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537025 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/2d932d5f886b8cfbaeec38eaf545a088-usr-local-share-ca-certificates") pod "kube-apiserver-kubernetes-master1" (UID: "2d932d5f886b8cfbaeec38eaf545a088")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537097 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-flexvolume-dir") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537155 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-k8s-certs") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537231 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-usr-local-share-ca-certificates") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537268 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/66b0dec8ac0c51772012999d6a94770d-etcd-data") pod "etcd-kubernetes-master1" (UID: "66b0dec8ac0c51772012999d6a94770d")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537307 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/2d932d5f886b8cfbaeec38eaf545a088-etc-ca-certificates") pod "kube-apiserver-kubernetes-master1" (UID: "2d932d5f886b8cfbaeec38eaf545a088")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537345 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-etc-ca-certificates") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537374 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-etc-pki") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537403 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-kubeconfig") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537437 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/9be8cb4627e7e5ad4c3f8acabd4b49b3-kubeconfig") pod "kube-scheduler-kubernetes-master1" (UID: "9be8cb4627e7e5ad4c3f8acabd4b49b3")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537465 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/2d932d5f886b8cfbaeec38eaf545a088-etc-pki") pod "kube-apiserver-kubernetes-master1" (UID: "2d932d5f886b8cfbaeec38eaf545a088")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537494 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/2d932d5f886b8cfbaeec38eaf545a088-usr-share-ca-certificates") pod "kube-apiserver-kubernetes-master1" (UID: "2d932d5f886b8cfbaeec38eaf545a088")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537523 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-usr-share-ca-certificates") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537550 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/66b0dec8ac0c51772012999d6a94770d-etcd-certs") pod "etcd-kubernetes-master1" (UID: "66b0dec8ac0c51772012999d6a94770d")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537675 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/2d932d5f886b8cfbaeec38eaf545a088-k8s-certs") pod "kube-apiserver-kubernetes-master1" (UID: "2d932d5f886b8cfbaeec38eaf545a088")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.537719 2333 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-ca-certs") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.538457 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:04 kubernetes-master1 systemd[1]: Created slice libcontainer container kubepods-burstable-pod9be8cb4627e7e5ad4c3f8acabd4b49b3.slice.
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.638878 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.743216 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:04 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-29911eb4df411f5d11f758b74b2386fcb064d9805d9c36b583d93db77b89db0c\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: I0106 16:47:04.770605 2333 kubelet_node_status.go:71] Attempting to register node kubernetes-master1
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.772002 2333 kubelet_node_status.go:93] Unable to register node "kubernetes-master1" with API server: Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes": EOF
- Jan 6 16:47:04 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-1f82b3fc998d0570daf618a89322c36450db6e7bbc0345e26f1b4af569f267ab\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:04 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-98672bbd1e30584c22a58e6e9b86725d736fd3aeb39183677b12564c3534bcaa\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.844010 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:04 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-98672bbd1e30584c22a58e6e9b86725d736fd3aeb39183677b12564c3534bcaa-merged.mount: Succeeded.
- Jan 6 16:47:04 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-98672bbd1e30584c22a58e6e9b86725d736fd3aeb39183677b12564c3534bcaa-merged.mount: Succeeded.
- Jan 6 16:47:04 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-6287c579482164b80c5e0d440e3f0f0009c00726f24748956d21b79d2e74e4f7\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:04 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-6287c579482164b80c5e0d440e3f0f0009c00726f24748956d21b79d2e74e4f7\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:04 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-6287c579482164b80c5e0d440e3f0f0009c00726f24748956d21b79d2e74e4f7-merged.mount: Succeeded.
- Jan 6 16:47:04 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-6287c579482164b80c5e0d440e3f0f0009c00726f24748956d21b79d2e74e4f7-merged.mount: Succeeded.
- Jan 6 16:47:04 kubernetes-master1 kubelet[2333]: E0106 16:47:04.944207 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:05 kubernetes-master1 containerd[821]: time="2021-01-06T16:47:05.028006213+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6ca4c22fb210c18be97696da4623ab2e261c738dd9edc21d2e25315eaab45df3/shim.sock" debug=false pid=2793
- Jan 6 16:47:05 kubernetes-master1 containerd[821]: time="2021-01-06T16:47:05.028986492+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b34185f8eb34749ad8b9557d27062b7d5bcbf077748c95a0a95d3f456816b6c9/shim.sock" debug=false pid=2795
- Jan 6 16:47:05 kubernetes-master1 containerd[821]: time="2021-01-06T16:47:05.029255398+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2bc181928e270dd8ce70c8cfa401253b4bf2adedbffc04a54096e9d86869452d/shim.sock" debug=false pid=2797
- Jan 6 16:47:05 kubernetes-master1 containerd[821]: time="2021-01-06T16:47:05.030608263+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ac9c79cd6e20050c75fb0ee5563bd19e7b55d20f0580a58b243cf39899c9bef4/shim.sock" debug=false pid=2799
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: E0106 16:47:05.044291 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: Started libcontainer container 2bc181928e270dd8ce70c8cfa401253b4bf2adedbffc04a54096e9d86869452d.
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: Started libcontainer container b34185f8eb34749ad8b9557d27062b7d5bcbf077748c95a0a95d3f456816b6c9.
- Jan 6 16:47:05 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-b34185f8eb34749ad8b9557d27062b7d5bcbf077748c95a0a95d3f456816b6c9-runc.WmTIvA.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-b34185f8eb34749ad8b9557d27062b7d5bcbf077748c95a0a95d3f456816b6c9-runc.WmTIvA.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: Started libcontainer container 6ca4c22fb210c18be97696da4623ab2e261c738dd9edc21d2e25315eaab45df3.
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: E0106 16:47:05.144627 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: Started libcontainer container ac9c79cd6e20050c75fb0ee5563bd19e7b55d20f0580a58b243cf39899c9bef4.
- Jan 6 16:47:05 kubernetes-master1 kernel: [ 335.772497] cgroup: cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: E0106 16:47:05.244750 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: E0106 16:47:05.345665 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:05 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-7c6ab0dd3f084b9d6fe55981ef94d7b313d76cae6517123948d1f45401696f11\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-7c6ab0dd3f084b9d6fe55981ef94d7b313d76cae6517123948d1f45401696f11\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-7c6ab0dd3f084b9d6fe55981ef94d7b313d76cae6517123948d1f45401696f11-merged.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 containerd[821]: time="2021-01-06T16:47:05.389587515+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e189bc7d347503e5b8414d6d3badf39a1476239b0e9245fddcb53a473a1271e7/shim.sock" debug=false pid=2938
- Jan 6 16:47:05 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-491b3c637393258ea4b3e6fcad33da24b2c0456c4dd801a91ad222251bff9419\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-491b3c637393258ea4b3e6fcad33da24b2c0456c4dd801a91ad222251bff9419-merged.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-491b3c637393258ea4b3e6fcad33da24b2c0456c4dd801a91ad222251bff9419-merged.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: E0106 16:47:05.445859 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: Started libcontainer container e189bc7d347503e5b8414d6d3badf39a1476239b0e9245fddcb53a473a1271e7.
- Jan 6 16:47:05 kubernetes-master1 containerd[821]: time="2021-01-06T16:47:05.506299338+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2578f99d3679bb2987561a9b89833a5c9fafb19423bfcc5aeab133e31ea98bba/shim.sock" debug=false pid=2978
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: W0106 16:47:05.521716 2333 pod_container_deletor.go:79] Container "6ca4c22fb210c18be97696da4623ab2e261c738dd9edc21d2e25315eaab45df3" not found in pod's containers
- Jan 6 16:47:05 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-3389617e12740c7c323a25120c054cd71b3a935685c1d5da779bd7cfbe294af4\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-3389617e12740c7c323a25120c054cd71b3a935685c1d5da779bd7cfbe294af4\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-2578f99d3679bb2987561a9b89833a5c9fafb19423bfcc5aeab133e31ea98bba-runc.McQnj4.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: E0106 16:47:05.555393 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-2578f99d3679bb2987561a9b89833a5c9fafb19423bfcc5aeab133e31ea98bba-runc.McQnj4.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: Started libcontainer container 2578f99d3679bb2987561a9b89833a5c9fafb19423bfcc5aeab133e31ea98bba.
- Jan 6 16:47:05 kubernetes-master1 containerd[821]: time="2021-01-06T16:47:05.655199234+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0b749d081fa1d485845a10d8a2c42785aa93f35e5cd9aabfa95cc0d67182ea4e/shim.sock" debug=false pid=3042
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: E0106 16:47:05.659643 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: I0106 16:47:05.664700 2333 kubelet_node_status.go:71] Attempting to register node kubernetes-master1
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-15d1c9e70a95cabd87bf748aaa87224437c4800d98ebd98249e6548b4b21c3ef\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-15d1c9e70a95cabd87bf748aaa87224437c4800d98ebd98249e6548b4b21c3ef\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: W0106 16:47:05.709595 2333 pod_container_deletor.go:79] Container "b34185f8eb34749ad8b9557d27062b7d5bcbf077748c95a0a95d3f456816b6c9" not found in pod's containers
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: Started libcontainer container 0b749d081fa1d485845a10d8a2c42785aa93f35e5cd9aabfa95cc0d67182ea4e.
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: E0106 16:47:05.742569 2333 kubelet_node_status.go:93] Unable to register node "kubernetes-master1" with API server: Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes": EOF
- Jan 6 16:47:05 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-0b749d081fa1d485845a10d8a2c42785aa93f35e5cd9aabfa95cc0d67182ea4e-runc.iVYZmV.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-0b749d081fa1d485845a10d8a2c42785aa93f35e5cd9aabfa95cc0d67182ea4e-runc.iVYZmV.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-15d1c9e70a95cabd87bf748aaa87224437c4800d98ebd98249e6548b4b21c3ef-merged.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-15d1c9e70a95cabd87bf748aaa87224437c4800d98ebd98249e6548b4b21c3ef-merged.mount: Succeeded.
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: E0106 16:47:05.793049 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:05 kubernetes-master1 containerd[821]: time="2021-01-06T16:47:05.815707665+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/241235f74f34afe2077581315ae09713cd0d4fce7e37bbf92cc06d943425a1bc/shim.sock" debug=false pid=3100
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: W0106 16:47:05.820939 2333 docker_container.go:245] Deleted previously existing symlink file: "/var/log/pods/kube-system_kube-controller-manager-kubernetes-master1_c61f75a63a6b7c302751a6cc76c53045/kube-controller-manager/0.log"
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: W0106 16:47:05.843482 2333 docker_container.go:245] Deleted previously existing symlink file: "/var/log/pods/kube-system_kube-scheduler-kubernetes-master1_9be8cb4627e7e5ad4c3f8acabd4b49b3/kube-scheduler/0.log"
- Jan 6 16:47:05 kubernetes-master1 systemd[1]: Started libcontainer container 241235f74f34afe2077581315ae09713cd0d4fce7e37bbf92cc06d943425a1bc.
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: E0106 16:47:05.894225 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: W0106 16:47:05.960974 2333 pod_container_deletor.go:79] Container "2bc181928e270dd8ce70c8cfa401253b4bf2adedbffc04a54096e9d86869452d" not found in pod's containers
- Jan 6 16:47:05 kubernetes-master1 kubelet[2333]: E0106 16:47:05.994639 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:06 kubernetes-master1 kubelet[2333]: W0106 16:47:06.044381 2333 pod_container_deletor.go:79] Container "ac9c79cd6e20050c75fb0ee5563bd19e7b55d20f0580a58b243cf39899c9bef4" not found in pod's containers
- Jan 6 16:47:06 kubernetes-master1 kubelet[2333]: E0106 16:47:06.094780 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:06 kubernetes-master1 kubelet[2333]: E0106 16:47:06.194951 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:06 kubernetes-master1 kubelet[2333]: E0106 16:47:06.295071 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:06 kubernetes-master1 kubelet[2333]: E0106 16:47:06.395232 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:06 kubernetes-master1 kubelet[2333]: E0106 16:47:06.495358 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:06 kubernetes-master1 kubelet[2333]: E0106 16:47:06.595485 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:06 kubernetes-master1 kubelet[2333]: E0106 16:47:06.695598 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:06 kubernetes-master1 kubelet[2333]: E0106 16:47:06.795742 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:06 kubernetes-master1 kubelet[2333]: E0106 16:47:06.895857 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:06 kubernetes-master1 kubelet[2333]: E0106 16:47:06.997253 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: E0106 16:47:07.097463 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: E0106 16:47:07.197668 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: E0106 16:47:07.297833 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: E0106 16:47:07.397944 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: I0106 16:47:07.428015 2333 kubelet_node_status.go:71] Attempting to register node kubernetes-master1
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: W0106 16:47:07.479895 2333 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: E0106 16:47:07.498054 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: E0106 16:47:07.598160 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: E0106 16:47:07.698225 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: E0106 16:47:07.798311 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: E0106 16:47:07.898447 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: E0106 16:47:07.939092 2333 kubelet_node_status.go:93] Unable to register node "kubernetes-master1" with API server: Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes": EOF
- Jan 6 16:47:07 kubernetes-master1 kubelet[2333]: E0106 16:47:07.998581 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: E0106 16:47:08.101253 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: E0106 16:47:08.201367 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: E0106 16:47:08.301446 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: I0106 16:47:08.339216 2333 trace.go:205] Trace[668473437]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/kubelet.go:438 (06-Jan-2021 16:46:57.527) (total time: 10811ms):
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: Trace[668473437]: [10.811386466s] [10.811386466s] END
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: E0106 16:47:08.339253 2333 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: an error on the server ("") has prevented the request from succeeding (get nodes)
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: E0106 16:47:08.401596 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: E0106 16:47:08.501718 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: I0106 16:47:08.539201 2333 trace.go:205] Trace[1087997032]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46 (06-Jan-2021 16:46:57.533) (total time: 11006ms):
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: Trace[1087997032]: [11.006088904s] [11.006088904s] END
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: E0106 16:47:08.539224 2333 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: an error on the server ("") has prevented the request from succeeding (get pods)
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: E0106 16:47:08.601859 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: E0106 16:47:08.701983 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: I0106 16:47:08.739270 2333 trace.go:205] Trace[2046484285]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (06-Jan-2021 16:46:57.528) (total time: 11210ms):
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: Trace[2046484285]: [11.210413092s] [11.210413092s] END
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: E0106 16:47:08.739305 2333 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: E0106 16:47:08.802075 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:08 kubernetes-master1 kubelet[2333]: E0106 16:47:08.902318 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:09 kubernetes-master1 kubelet[2333]: E0106 16:47:09.002428 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:09 kubernetes-master1 kubelet[2333]: E0106 16:47:09.102573 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:09 kubernetes-master1 kubelet[2333]: E0106 16:47:09.199876 2333 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:47:09 kubernetes-master1 kubelet[2333]: E0106 16:47:09.202858 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:09 kubernetes-master1 kubelet[2333]: E0106 16:47:09.303051 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:09 kubernetes-master1 kubelet[2333]: E0106 16:47:09.403266 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:09 kubernetes-master1 kubelet[2333]: E0106 16:47:09.503465 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:09 kubernetes-master1 kubelet[2333]: E0106 16:47:09.603619 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:09 kubernetes-master1 kubelet[2333]: E0106 16:47:09.703749 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:09 kubernetes-master1 kubelet[2333]: E0106 16:47:09.803908 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:09 kubernetes-master1 kubelet[2333]: E0106 16:47:09.904105 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:10 kubernetes-master1 kubelet[2333]: E0106 16:47:10.004348 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:10 kubernetes-master1 kubelet[2333]: E0106 16:47:10.104459 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:10 kubernetes-master1 kubelet[2333]: E0106 16:47:10.204536 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:10 kubernetes-master1 kubelet[2333]: E0106 16:47:10.304625 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:10 kubernetes-master1 kubelet[2333]: E0106 16:47:10.404766 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:10 kubernetes-master1 kubelet[2333]: E0106 16:47:10.504947 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:10 kubernetes-master1 kubelet[2333]: E0106 16:47:10.605087 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:10 kubernetes-master1 kubelet[2333]: E0106 16:47:10.705283 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:10 kubernetes-master1 kubelet[2333]: E0106 16:47:10.805453 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:10 kubernetes-master1 kubelet[2333]: E0106 16:47:10.906127 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:11 kubernetes-master1 kubelet[2333]: E0106 16:47:11.006266 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:11 kubernetes-master1 kubelet[2333]: E0106 16:47:11.106400 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:11 kubernetes-master1 kubelet[2333]: I0106 16:47:11.171382 2333 kubelet_node_status.go:71] Attempting to register node kubernetes-master1
- Jan 6 16:47:11 kubernetes-master1 kubelet[2333]: E0106 16:47:11.206563 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:11 kubernetes-master1 kubelet[2333]: E0106 16:47:11.306725 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:11 kubernetes-master1 kubelet[2333]: E0106 16:47:11.406900 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:11 kubernetes-master1 kubelet[2333]: E0106 16:47:11.507039 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:11 kubernetes-master1 kubelet[2333]: E0106 16:47:11.607212 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:11 kubernetes-master1 kubelet[2333]: E0106 16:47:11.707380 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:11 kubernetes-master1 kubelet[2333]: E0106 16:47:11.739041 2333 kubelet_node_status.go:93] Unable to register node "kubernetes-master1" with API server: Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes": EOF
- Jan 6 16:47:11 kubernetes-master1 kubelet[2333]: E0106 16:47:11.812394 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:11 kubernetes-master1 kubelet[2333]: E0106 16:47:11.912623 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: E0106 16:47:12.012794 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: E0106 16:47:12.112944 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: E0106 16:47:12.213099 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: E0106 16:47:12.271518 2333 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://kubernetes-cluster.homelab01.local:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": EOF
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: E0106 16:47:12.313232 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: E0106 16:47:12.331938 2333 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes-master1.1657af32a674ab09", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-master1", UID:"kubernetes-master1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff594bdf4b2c509, ext:6973746040, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff594bdf4b2c509, ext:6973746040, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/default/events": EOF'(may retry after sleeping)
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: E0106 16:47:12.413600 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: W0106 16:47:12.480112 2333 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: E0106 16:47:12.513810 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: E0106 16:47:12.613937 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: E0106 16:47:12.714079 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: E0106 16:47:12.814268 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:12 kubernetes-master1 kubelet[2333]: E0106 16:47:12.914394 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:13 kubernetes-master1 kubelet[2333]: E0106 16:47:13.014607 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:13 kubernetes-master1 kubelet[2333]: E0106 16:47:13.114829 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:13 kubernetes-master1 kubelet[2333]: E0106 16:47:13.215759 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:13 kubernetes-master1 kubelet[2333]: E0106 16:47:13.316444 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:13 kubernetes-master1 kubelet[2333]: E0106 16:47:13.416924 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:13 kubernetes-master1 kubelet[2333]: E0106 16:47:13.517727 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:13 kubernetes-master1 kubelet[2333]: E0106 16:47:13.618321 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:13 kubernetes-master1 kubelet[2333]: E0106 16:47:13.718750 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:13 kubernetes-master1 kubelet[2333]: E0106 16:47:13.819138 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:13 kubernetes-master1 kubelet[2333]: E0106 16:47:13.917146 2333 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:47:13 kubernetes-master1 kubelet[2333]: E0106 16:47:13.919871 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: E0106 16:47:14.020223 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: E0106 16:47:14.120624 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: E0106 16:47:14.190103 2333 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "kubernetes-master1" not found
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: E0106 16:47:14.212366 2333 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: E0106 16:47:14.221056 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: E0106 16:47:14.321271 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: E0106 16:47:14.421633 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: E0106 16:47:14.522031 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: E0106 16:47:14.622633 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: E0106 16:47:14.723136 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: I0106 16:47:14.742236 2333 trace.go:205] Trace[1014494635]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (06-Jan-2021 16:47:03.900) (total time: 10842ms):
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: Trace[1014494635]: ---"Objects listed" 10841ms (16:47:00.742)
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: Trace[1014494635]: [10.842061623s] [10.842061623s] END
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: E0106 16:47:14.823406 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:14 kubernetes-master1 kubelet[2333]: E0106 16:47:14.923554 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: E0106 16:47:15.023865 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: E0106 16:47:15.124247 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: E0106 16:47:15.130430 2333 nodelease.go:49] failed to get node "kubernetes-master1" when trying to set owner ref to the node lease: nodes "kubernetes-master1" not found
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: E0106 16:47:15.224713 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: E0106 16:47:15.325275 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: I0106 16:47:15.340549 2333 trace.go:205] Trace[1606019315]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (06-Jan-2021 16:47:04.034) (total time: 11305ms):
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: Trace[1606019315]: ---"Objects listed" 11305ms (16:47:00.340)
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: Trace[1606019315]: [11.305860966s] [11.305860966s] END
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: E0106 16:47:15.425814 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: E0106 16:47:15.526034 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: E0106 16:47:15.626359 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: E0106 16:47:15.726500 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: E0106 16:47:15.826634 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:15 kubernetes-master1 kubelet[2333]: E0106 16:47:15.926787 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:16 kubernetes-master1 kubelet[2333]: E0106 16:47:16.026935 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:16 kubernetes-master1 kubelet[2333]: E0106 16:47:16.127344 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:16 kubernetes-master1 kubelet[2333]: I0106 16:47:16.216561 2333 reconciler.go:157] Reconciler: start to sync state
- Jan 6 16:47:16 kubernetes-master1 kubelet[2333]: E0106 16:47:16.227751 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:16 kubernetes-master1 kubelet[2333]: E0106 16:47:16.328061 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:16 kubernetes-master1 kubelet[2333]: E0106 16:47:16.428348 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:16 kubernetes-master1 kubelet[2333]: E0106 16:47:16.528942 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:16 kubernetes-master1 kubelet[2333]: E0106 16:47:16.631604 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:16 kubernetes-master1 kubelet[2333]: E0106 16:47:16.732120 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:16 kubernetes-master1 kubelet[2333]: E0106 16:47:16.832718 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:16 kubernetes-master1 kubelet[2333]: E0106 16:47:16.933483 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:17 kubernetes-master1 kubelet[2333]: E0106 16:47:17.033924 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:17 kubernetes-master1 kubelet[2333]: E0106 16:47:17.134047 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:17 kubernetes-master1 kubelet[2333]: E0106 16:47:17.234212 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:17 kubernetes-master1 kubelet[2333]: E0106 16:47:17.334710 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:17 kubernetes-master1 kubelet[2333]: E0106 16:47:17.435393 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:17 kubernetes-master1 kubelet[2333]: W0106 16:47:17.480345 2333 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:17 kubernetes-master1 kubelet[2333]: E0106 16:47:17.535592 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:17 kubernetes-master1 kubelet[2333]: E0106 16:47:17.635994 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:17 kubernetes-master1 kubelet[2333]: E0106 16:47:17.736602 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:17 kubernetes-master1 kubelet[2333]: E0106 16:47:17.837317 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:17 kubernetes-master1 kubelet[2333]: E0106 16:47:17.937998 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:17 kubernetes-master1 kubelet[2333]: E0106 16:47:17.942708 2333 csi_plugin.go:293] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "kubernetes-master1" not found
- Jan 6 16:47:18 kubernetes-master1 kubelet[2333]: E0106 16:47:18.038291 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:18 kubernetes-master1 kubelet[2333]: E0106 16:47:18.138360 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:18 kubernetes-master1 kubelet[2333]: I0106 16:47:18.194243 2333 kubelet_node_status.go:71] Attempting to register node kubernetes-master1
- Jan 6 16:47:18 kubernetes-master1 kubelet[2333]: E0106 16:47:18.238547 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:18 kubernetes-master1 kubelet[2333]: E0106 16:47:18.338616 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:18 kubernetes-master1 kubelet[2333]: E0106 16:47:18.438748 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:18 kubernetes-master1 kubelet[2333]: E0106 16:47:18.538819 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:18 kubernetes-master1 kubelet[2333]: E0106 16:47:18.638985 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:18 kubernetes-master1 kubelet[2333]: E0106 16:47:18.739131 2333 kubelet.go:2240] node "kubernetes-master1" not found
- Jan 6 16:47:18 kubernetes-master1 kubelet[2333]: I0106 16:47:18.746576 2333 kubelet_node_status.go:74] Successfully registered node kubernetes-master1
- Jan 6 16:47:19 kubernetes-master1 kubelet[2333]: E0106 16:47:19.223963 2333 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:47:19 kubernetes-master1 systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:47:19 kubernetes-master1 systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:47:19 kubernetes-master1 systemd[1]: Reloading.
- Jan 6 16:47:19 kubernetes-master1 systemd[1]: /lib/systemd/system/dbus.socket:5: ListenStream= references a path below legacy directory /var/run/, updating /var/run/dbus/system_bus_socket → /run/dbus/system_bus_socket; please update the unit file accordingly.
- Jan 6 16:47:19 kubernetes-master1 systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:47:19 kubernetes-master1 systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
- Jan 6 16:47:19 kubernetes-master1 systemd[1]: /lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
- Jan 6 16:47:20 kubernetes-master1 kubelet[2333]: I0106 16:47:20.265441 2333 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/etc/kubernetes/pki/ca.crt
- Jan 6 16:47:20 kubernetes-master1 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
- Jan 6 16:47:20 kubernetes-master1 systemd[1]: kubelet.service: Succeeded.
- Jan 6 16:47:20 kubernetes-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
- Jan 6 16:47:20 kubernetes-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
- Jan 6 16:47:20 kubernetes-master1 systemd[1]: Started Kubernetes systemd probe.
- Jan 6 16:47:20 kubernetes-master1 systemd[1]: run-r604377efa7a84441ae8df9b0007127d9.scope: Succeeded.
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.646796 3514 server.go:416] Version: v1.20.1
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.647214 3514 server.go:837] Client rotation is on, will bootstrap in background
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.653080 3514 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.665028 3514 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.865960 3514 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.866785 3514 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.866810 3514 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.869417 3514 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.869439 3514 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.869446 3514 container_manager_linux.go:315] Creating device plugin manager: true
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: W0106 16:47:20.869529 3514 kubelet.go:297] Using dockershim is deprecated, please consider using a full-fledged CRI implementation
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.869546 3514 client.go:77] Connecting to docker on unix:///var/run/docker.sock
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.869560 3514 client.go:94] Start docker client with request timeout=2m0s
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: W0106 16:47:20.880997 3514 docker_service.go:559] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.881051 3514 docker_service.go:240] Hairpin mode set to "hairpin-veth"
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: W0106 16:47:20.881166 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: W0106 16:47:20.883739 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.883824 3514 docker_service.go:255] Docker cri networking managed by cni
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: W0106 16:47:20.883852 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.896121 3514 docker_service.go:260] Docker Info: &{ID:ZJPZ:5PX5:H3KS:7XMT:AT4K:IYXO:V2DE:K6Q5:RRDJ:46NY:7ZZV:U2TQ Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:68 SystemTime:2021-01-06T16:47:20.885150724+01:00 LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.4.0-59-generic OperatingSystem:Ubuntu 20.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0004da380 NCPU:2 MemTotal:4127334400 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes-master1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.11 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support]}
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.896220 3514 docker_service.go:273] Setting cgroupDriver to systemd
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.916588 3514 remote_runtime.go:62] parsed scheme: ""
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.916616 3514 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.916652 3514 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.916663 3514 clientconn.go:948] ClientConn switching balancer to "pick_first"
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.916700 3514 remote_image.go:50] parsed scheme: ""
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.916707 3514 remote_image.go:50] scheme "" not registered, fallback to default scheme
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.916717 3514 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.916723 3514 clientconn.go:948] ClientConn switching balancer to "pick_first"
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.916754 3514 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.916776 3514 kubelet.go:273] Watching apiserver
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.917121 3514 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
- Jan 6 16:47:20 kubernetes-master1 kubelet[3514]: I0106 16:47:20.960745 3514 kuberuntime_manager.go:216] Container runtime docker initialized, version: 19.03.11, apiVersion: 1.40.0
- Jan 6 16:47:25 kubernetes-master1 kubelet[3514]: W0106 16:47:25.884013 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: E0106 16:47:27.254122 3514 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: #011For verbose messaging see aws.Config.CredentialsChainVerboseErrors
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.255788 3514 server.go:1176] Started kubelet
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: E0106 16:47:27.256411 3514 kubelet.go:1271] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.257873 3514 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.273555 3514 server.go:148] Starting to listen on 0.0.0.0:10250
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.297417 3514 server.go:409] Adding debug handlers to kubelet server.
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.275326 3514 volume_manager.go:271] Starting Kubelet Volume Manager
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.275337 3514 desired_state_of_world_populator.go:142] Desired state populator starts to run
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: E0106 16:47:27.337169 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.337336 3514 client.go:86] parsed scheme: "unix"
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.337355 3514 client.go:86] scheme "unix" not registered, fallback to default scheme
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.337434 3514 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.337445 3514 clientconn.go:948] ClientConn switching balancer to "pick_first"
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.339501 3514 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.339738 3514 status_manager.go:158] Starting to sync pod status with apiserver
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.339907 3514 kubelet.go:1799] Starting kubelet main sync loop.
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: E0106 16:47:27.340110 3514 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: E0106 16:47:27.440426 3514 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.443662 3514 kubelet_node_status.go:71] Attempting to register node kubernetes-master1
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.460614 3514 kubelet_node_status.go:109] Node kubernetes-master1 was previously registered
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.460955 3514 kubelet_node_status.go:74] Successfully registered node kubernetes-master1
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.617827 3514 cpu_manager.go:193] [cpumanager] starting with none policy
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.618143 3514 cpu_manager.go:194] [cpumanager] reconciling every 10s
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.618347 3514 state_mem.go:36] [cpumanager] initializing new in-memory state store
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.618664 3514 state_mem.go:88] [cpumanager] updated default cpuset: ""
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.618861 3514 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.619094 3514 policy_none.go:43] [cpumanager] none policy: Start
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: W0106 16:47:27.620474 3514 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.620943 3514 plugin_manager.go:114] Starting Kubelet Plugin Manager
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.640686 3514 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.640795 3514 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.640840 3514 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.640876 3514 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.640918 3514 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:47:27 kubernetes-master1 systemd[1]: Created slice libcontainer container kubepods-besteffort-pod562266a8_ade1_41ba_a994_36e814486ded.slice.
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713163 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/2d932d5f886b8cfbaeec38eaf545a088-etc-pki") pod "kube-apiserver-kubernetes-master1" (UID: "2d932d5f886b8cfbaeec38eaf545a088")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713236 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-usr-share-ca-certificates") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713261 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-bszcf" (UniqueName: "kubernetes.io/secret/562266a8-ade1-41ba-a994-36e814486ded-kube-proxy-token-bszcf") pod "kube-proxy-smmmx" (UID: "562266a8-ade1-41ba-a994-36e814486ded")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713280 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/9be8cb4627e7e5ad4c3f8acabd4b49b3-kubeconfig") pod "kube-scheduler-kubernetes-master1" (UID: "9be8cb4627e7e5ad4c3f8acabd4b49b3")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713301 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/66b0dec8ac0c51772012999d6a94770d-etcd-data") pod "etcd-kubernetes-master1" (UID: "66b0dec8ac0c51772012999d6a94770d")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713320 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/2d932d5f886b8cfbaeec38eaf545a088-usr-local-share-ca-certificates") pod "kube-apiserver-kubernetes-master1" (UID: "2d932d5f886b8cfbaeec38eaf545a088")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713338 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/2d932d5f886b8cfbaeec38eaf545a088-usr-share-ca-certificates") pod "kube-apiserver-kubernetes-master1" (UID: "2d932d5f886b8cfbaeec38eaf545a088")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713357 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-etc-pki") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713376 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/562266a8-ade1-41ba-a994-36e814486ded-xtables-lock") pod "kube-proxy-smmmx" (UID: "562266a8-ade1-41ba-a994-36e814486ded")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713393 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/66b0dec8ac0c51772012999d6a94770d-etcd-certs") pod "etcd-kubernetes-master1" (UID: "66b0dec8ac0c51772012999d6a94770d")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713409 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/2d932d5f886b8cfbaeec38eaf545a088-ca-certs") pod "kube-apiserver-kubernetes-master1" (UID: "2d932d5f886b8cfbaeec38eaf545a088")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713425 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/2d932d5f886b8cfbaeec38eaf545a088-k8s-certs") pod "kube-apiserver-kubernetes-master1" (UID: "2d932d5f886b8cfbaeec38eaf545a088")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713443 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-flexvolume-dir") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713460 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-k8s-certs") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713479 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-kubeconfig") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713501 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/562266a8-ade1-41ba-a994-36e814486ded-kube-proxy") pod "kube-proxy-smmmx" (UID: "562266a8-ade1-41ba-a994-36e814486ded")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713517 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/562266a8-ade1-41ba-a994-36e814486ded-lib-modules") pod "kube-proxy-smmmx" (UID: "562266a8-ade1-41ba-a994-36e814486ded")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713538 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/2d932d5f886b8cfbaeec38eaf545a088-etc-ca-certificates") pod "kube-apiserver-kubernetes-master1" (UID: "2d932d5f886b8cfbaeec38eaf545a088")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713557 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-ca-certs") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713576 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-etc-ca-certificates") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713595 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c61f75a63a6b7c302751a6cc76c53045-usr-local-share-ca-certificates") pod "kube-controller-manager-kubernetes-master1" (UID: "c61f75a63a6b7c302751a6cc76c53045")
- Jan 6 16:47:27 kubernetes-master1 kubelet[3514]: I0106 16:47:27.713602 3514 reconciler.go:157] Reconciler: start to sync state
- Jan 6 16:47:27 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-e6d7070a0adfacf2a1431bb60c1e2908e805afd1ec72a79fa7e52e6abec99a09\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:27 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-e6d7070a0adfacf2a1431bb60c1e2908e805afd1ec72a79fa7e52e6abec99a09\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:28 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-e6d7070a0adfacf2a1431bb60c1e2908e805afd1ec72a79fa7e52e6abec99a09-merged.mount: Succeeded.
- Jan 6 16:47:28 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-e6d7070a0adfacf2a1431bb60c1e2908e805afd1ec72a79fa7e52e6abec99a09-merged.mount: Succeeded.
- Jan 6 16:47:28 kubernetes-master1 containerd[821]: time="2021-01-06T16:47:28.087055041+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8d3560afd6709634ef5694f1c307755ad419841e8c67ea1570bb3b91d08698ff/shim.sock" debug=false pid=3728
- Jan 6 16:47:28 kubernetes-master1 systemd[1]: Started libcontainer container 8d3560afd6709634ef5694f1c307755ad419841e8c67ea1570bb3b91d08698ff.
- Jan 6 16:47:28 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-277401ff2cc882ffe5f36aa263e403c25610f664674292773bd6ad5632aae588\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:28 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-277401ff2cc882ffe5f36aa263e403c25610f664674292773bd6ad5632aae588\x2dinit-merged.mount: Succeeded.
- Jan 6 16:47:28 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-277401ff2cc882ffe5f36aa263e403c25610f664674292773bd6ad5632aae588-merged.mount: Succeeded.
- Jan 6 16:47:28 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-277401ff2cc882ffe5f36aa263e403c25610f664674292773bd6ad5632aae588-merged.mount: Succeeded.
- Jan 6 16:47:28 kubernetes-master1 containerd[821]: time="2021-01-06T16:47:28.343564965+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b77e6438c63c3129bd7067fd3a49f537fba6a10e1320846532b650171023886e/shim.sock" debug=false pid=3781
- Jan 6 16:47:28 kubernetes-master1 systemd[1]: Started libcontainer container b77e6438c63c3129bd7067fd3a49f537fba6a10e1320846532b650171023886e.
- Jan 6 16:47:28 kubernetes-master1 kernel: [ 359.362156] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
- Jan 6 16:47:28 kubernetes-master1 kernel: [ 359.362172] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
- Jan 6 16:47:28 kubernetes-master1 kernel: [ 359.362319] IPVS: ipvs loaded.
- Jan 6 16:47:28 kubernetes-master1 kernel: [ 359.368198] IPVS: [rr] scheduler registered.
- Jan 6 16:47:28 kubernetes-master1 kernel: [ 359.372646] IPVS: [wrr] scheduler registered.
- Jan 6 16:47:28 kubernetes-master1 kernel: [ 359.376863] IPVS: [sh] scheduler registered.
- Jan 6 16:47:28 kubernetes-master1 kubelet[3514]: E0106 16:47:28.864570 3514 kubelet.go:1635] Failed creating a mirror pod for "etcd-kubernetes-master1_kube-system(66b0dec8ac0c51772012999d6a94770d)": pods "etcd-kubernetes-master1" already exists
- Jan 6 16:47:30 kubernetes-master1 kubelet[3514]: W0106 16:47:30.884868 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:32 kubernetes-master1 kubelet[3514]: E0106 16:47:32.632162 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:47:35 kubernetes-master1 kubelet[3514]: W0106 16:47:35.885700 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:37 kubernetes-master1 kubelet[3514]: E0106 16:47:37.655347 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:47:40 kubernetes-master1 kubelet[3514]: W0106 16:47:40.886592 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:42 kubernetes-master1 kubelet[3514]: E0106 16:47:42.668573 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:47:45 kubernetes-master1 kubelet[3514]: W0106 16:47:45.886820 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:47 kubernetes-master1 kubelet[3514]: E0106 16:47:47.689581 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:47:50 kubernetes-master1 kubelet[3514]: W0106 16:47:50.886975 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:52 kubernetes-master1 kubelet[3514]: E0106 16:47:52.702823 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:47:55 kubernetes-master1 kubelet[3514]: W0106 16:47:55.887974 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:47:57 kubernetes-master1 kubelet[3514]: E0106 16:47:57.716193 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:48:00 kubernetes-master1 kubelet[3514]: W0106 16:48:00.889786 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:48:02 kubernetes-master1 kubelet[3514]: E0106 16:48:02.727132 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:48:05 kubernetes-master1 kubelet[3514]: W0106 16:48:05.890668 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:48:07 kubernetes-master1 kubelet[3514]: E0106 16:48:07.741442 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:48:10 kubernetes-master1 kubelet[3514]: W0106 16:48:10.890903 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:48:12 kubernetes-master1 kubelet[3514]: E0106 16:48:12.755283 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:48:15 kubernetes-master1 kubelet[3514]: W0106 16:48:15.891287 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:48:17 kubernetes-master1 kubelet[3514]: E0106 16:48:17.767108 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:48:20 kubernetes-master1 kubelet[3514]: W0106 16:48:20.891537 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:48:22 kubernetes-master1 kubelet[3514]: E0106 16:48:22.779504 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:48:25 kubernetes-master1 kubelet[3514]: W0106 16:48:25.891918 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:48:27 kubernetes-master1 kubelet[3514]: E0106 16:48:27.791017 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:48:30 kubernetes-master1 kubelet[3514]: W0106 16:48:30.892148 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:48:32 kubernetes-master1 kubelet[3514]: E0106 16:48:32.803542 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:48:35 kubernetes-master1 kubelet[3514]: W0106 16:48:35.892606 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:48:37 kubernetes-master1 kubelet[3514]: E0106 16:48:37.815571 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:48:40 kubernetes-master1 kubelet[3514]: W0106 16:48:40.892984 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:48:42 kubernetes-master1 kubelet[3514]: E0106 16:48:42.826124 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:48:45 kubernetes-master1 kubelet[3514]: W0106 16:48:45.894343 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:48:47 kubernetes-master1 kubelet[3514]: E0106 16:48:47.843730 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:48:50 kubernetes-master1 kubelet[3514]: W0106 16:48:50.895356 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:48:52 kubernetes-master1 kubelet[3514]: E0106 16:48:52.855270 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:48:55 kubernetes-master1 kubelet[3514]: W0106 16:48:55.896644 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:48:57 kubernetes-master1 kubelet[3514]: E0106 16:48:57.866968 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:00 kubernetes-master1 kubelet[3514]: W0106 16:49:00.897640 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:49:02 kubernetes-master1 kubelet[3514]: E0106 16:49:02.879834 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:05 kubernetes-master1 kubelet[3514]: W0106 16:49:05.898628 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:49:07 kubernetes-master1 kubelet[3514]: E0106 16:49:07.893611 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:10 kubernetes-master1 kubelet[3514]: W0106 16:49:10.898809 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:49:12 kubernetes-master1 kubelet[3514]: E0106 16:49:12.907973 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:15 kubernetes-master1 kubelet[3514]: W0106 16:49:15.899000 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:49:17 kubernetes-master1 kubelet[3514]: E0106 16:49:17.919741 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:20 kubernetes-master1 kubelet[3514]: W0106 16:49:20.899187 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:49:22 kubernetes-master1 kubelet[3514]: E0106 16:49:22.930704 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:25 kubernetes-master1 kubelet[3514]: W0106 16:49:25.900735 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:49:27 kubernetes-master1 kubelet[3514]: E0106 16:49:27.941464 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:30 kubernetes-master1 kubelet[3514]: W0106 16:49:30.901062 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:49:32 kubernetes-master1 kubelet[3514]: E0106 16:49:32.951977 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:35 kubernetes-master1 kubelet[3514]: W0106 16:49:35.901950 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:49:37 kubernetes-master1 kubelet[3514]: E0106 16:49:37.970251 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:40 kubernetes-master1 kubelet[3514]: W0106 16:49:40.902131 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:49:42 kubernetes-master1 kubelet[3514]: E0106 16:49:42.984313 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:45 kubernetes-master1 kubelet[3514]: W0106 16:49:45.902439 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:49:47 kubernetes-master1 kubelet[3514]: E0106 16:49:47.997134 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:50 kubernetes-master1 kubelet[3514]: W0106 16:49:50.902660 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:49:53 kubernetes-master1 kubelet[3514]: E0106 16:49:53.026732 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.021217 3514 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:49:55 kubernetes-master1 systemd[1]: Created slice libcontainer container kubepods-burstable-pode4abcb35_a164_473b_84f9_caea05c484fc.slice.
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.056899 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sysfs" (UniqueName: "kubernetes.io/host-path/e4abcb35-a164-473b-84f9-caea05c484fc-sysfs") pod "calico-node-bnvmz" (UID: "e4abcb35-a164-473b-84f9-caea05c484fc")
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.057208 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/e4abcb35-a164-473b-84f9-caea05c484fc-var-lib-calico") pod "calico-node-bnvmz" (UID: "e4abcb35-a164-473b-84f9-caea05c484fc")
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.057373 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/e4abcb35-a164-473b-84f9-caea05c484fc-cni-bin-dir") pod "calico-node-bnvmz" (UID: "e4abcb35-a164-473b-84f9-caea05c484fc")
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.057518 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/e4abcb35-a164-473b-84f9-caea05c484fc-var-run-calico") pod "calico-node-bnvmz" (UID: "e4abcb35-a164-473b-84f9-caea05c484fc")
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.057706 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/e4abcb35-a164-473b-84f9-caea05c484fc-cni-net-dir") pod "calico-node-bnvmz" (UID: "e4abcb35-a164-473b-84f9-caea05c484fc")
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.057910 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/e4abcb35-a164-473b-84f9-caea05c484fc-lib-modules") pod "calico-node-bnvmz" (UID: "e4abcb35-a164-473b-84f9-caea05c484fc")
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.058067 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/e4abcb35-a164-473b-84f9-caea05c484fc-xtables-lock") pod "calico-node-bnvmz" (UID: "e4abcb35-a164-473b-84f9-caea05c484fc")
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.158493 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "policysync" (UniqueName: "kubernetes.io/host-path/e4abcb35-a164-473b-84f9-caea05c484fc-policysync") pod "calico-node-bnvmz" (UID: "e4abcb35-a164-473b-84f9-caea05c484fc")
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.158794 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-bsmvn" (UniqueName: "kubernetes.io/secret/e4abcb35-a164-473b-84f9-caea05c484fc-calico-node-token-bsmvn") pod "calico-node-bnvmz" (UID: "e4abcb35-a164-473b-84f9-caea05c484fc")
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.159028 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvol-driver-host" (UniqueName: "kubernetes.io/host-path/e4abcb35-a164-473b-84f9-caea05c484fc-flexvol-driver-host") pod "calico-node-bnvmz" (UID: "e4abcb35-a164-473b-84f9-caea05c484fc")
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.159068 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-log-dir" (UniqueName: "kubernetes.io/host-path/e4abcb35-a164-473b-84f9-caea05c484fc-cni-log-dir") pod "calico-node-bnvmz" (UID: "e4abcb35-a164-473b-84f9-caea05c484fc")
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: I0106 16:49:55.159137 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "host-local-net-dir" (UniqueName: "kubernetes.io/host-path/e4abcb35-a164-473b-84f9-caea05c484fc-host-local-net-dir") pod "calico-node-bnvmz" (UID: "e4abcb35-a164-473b-84f9-caea05c484fc")
- Jan 6 16:49:55 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-2260c90ca1bf613ab24a4ebfdd07d6a6b3d6f88bc99003193800887432050e39\x2dinit-merged.mount: Succeeded.
- Jan 6 16:49:55 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-2260c90ca1bf613ab24a4ebfdd07d6a6b3d6f88bc99003193800887432050e39-merged.mount: Succeeded.
- Jan 6 16:49:55 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-2260c90ca1bf613ab24a4ebfdd07d6a6b3d6f88bc99003193800887432050e39-merged.mount: Succeeded.
- Jan 6 16:49:55 kubernetes-master1 containerd[821]: time="2021-01-06T16:49:55.430710421+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e8439041f105ddc06eb06c57c3552deafb994536536cc41753a3d7d2846786c5/shim.sock" debug=false pid=4427
- Jan 6 16:49:55 kubernetes-master1 systemd[1]: Started libcontainer container e8439041f105ddc06eb06c57c3552deafb994536536cc41753a3d7d2846786c5.
- Jan 6 16:49:55 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-e8439041f105ddc06eb06c57c3552deafb994536536cc41753a3d7d2846786c5-runc.JybQv5.mount: Succeeded.
- Jan 6 16:49:55 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-e8439041f105ddc06eb06c57c3552deafb994536536cc41753a3d7d2846786c5-runc.JybQv5.mount: Succeeded.
- Jan 6 16:49:55 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-e0e9c583ed809eee832a4a9ba7c112f04ffc8f99aac6157fcebd9cc645d5b2e8\x2dinit-merged.mount: Succeeded.
- Jan 6 16:49:55 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-e0e9c583ed809eee832a4a9ba7c112f04ffc8f99aac6157fcebd9cc645d5b2e8\x2dinit-merged.mount: Succeeded.
- Jan 6 16:49:55 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-e0e9c583ed809eee832a4a9ba7c112f04ffc8f99aac6157fcebd9cc645d5b2e8-merged.mount: Succeeded.
- Jan 6 16:49:55 kubernetes-master1 containerd[821]: time="2021-01-06T16:49:55.804520930+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f06585b8af8e5d8ea6dc507c358d3953deaf1387b26a54aafc515db1d6a2dbbc/shim.sock" debug=false pid=4477
- Jan 6 16:49:55 kubernetes-master1 systemd[1]: Started libcontainer container f06585b8af8e5d8ea6dc507c358d3953deaf1387b26a54aafc515db1d6a2dbbc.
- Jan 6 16:49:55 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-f06585b8af8e5d8ea6dc507c358d3953deaf1387b26a54aafc515db1d6a2dbbc-runc.G9Yfsc.mount: Succeeded.
- Jan 6 16:49:55 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-f06585b8af8e5d8ea6dc507c358d3953deaf1387b26a54aafc515db1d6a2dbbc-runc.G9Yfsc.mount: Succeeded.
- Jan 6 16:49:55 kubernetes-master1 kubelet[3514]: W0106 16:49:55.903467 3514 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
- Jan 6 16:49:56 kubernetes-master1 systemd[1]: docker-f06585b8af8e5d8ea6dc507c358d3953deaf1387b26a54aafc515db1d6a2dbbc.scope: Succeeded.
- Jan 6 16:49:56 kubernetes-master1 containerd[821]: time="2021-01-06T16:49:56.270889504+01:00" level=info msg="shim reaped" id=f06585b8af8e5d8ea6dc507c358d3953deaf1387b26a54aafc515db1d6a2dbbc
- Jan 6 16:49:56 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-e0e9c583ed809eee832a4a9ba7c112f04ffc8f99aac6157fcebd9cc645d5b2e8-merged.mount: Succeeded.
- Jan 6 16:49:56 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-e0e9c583ed809eee832a4a9ba7c112f04ffc8f99aac6157fcebd9cc645d5b2e8-merged.mount: Succeeded.
- Jan 6 16:49:56 kubernetes-master1 dockerd[828]: time="2021-01-06T16:49:56.293928678+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:49:56 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-a922acb801edca693c7b8518d5752d820538591bf43ab59c9f1d18689b2026ef\x2dinit-merged.mount: Succeeded.
- Jan 6 16:49:56 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-a922acb801edca693c7b8518d5752d820538591bf43ab59c9f1d18689b2026ef\x2dinit-merged.mount: Succeeded.
- Jan 6 16:49:56 kubernetes-master1 containerd[821]: time="2021-01-06T16:49:56.391452692+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/94c0f03a796712b530570aa0cc2d18bb5e2503a7e2e8082b424cadb9118c1ebe/shim.sock" debug=false pid=4536
- Jan 6 16:49:56 kubernetes-master1 systemd[1]: Started libcontainer container 94c0f03a796712b530570aa0cc2d18bb5e2503a7e2e8082b424cadb9118c1ebe.
- Jan 6 16:49:57 kubernetes-master1 systemd[1]: docker-94c0f03a796712b530570aa0cc2d18bb5e2503a7e2e8082b424cadb9118c1ebe.scope: Succeeded.
- Jan 6 16:49:57 kubernetes-master1 containerd[821]: time="2021-01-06T16:49:57.647440066+01:00" level=info msg="shim reaped" id=94c0f03a796712b530570aa0cc2d18bb5e2503a7e2e8082b424cadb9118c1ebe
- Jan 6 16:49:57 kubernetes-master1 dockerd[828]: time="2021-01-06T16:49:57.657584840+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:49:57 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-a922acb801edca693c7b8518d5752d820538591bf43ab59c9f1d18689b2026ef-merged.mount: Succeeded.
- Jan 6 16:49:57 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-a922acb801edca693c7b8518d5752d820538591bf43ab59c9f1d18689b2026ef-merged.mount: Succeeded.
- Jan 6 16:49:58 kubernetes-master1 kubelet[3514]: E0106 16:49:58.041020 3514 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Jan 6 16:49:58 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-cfb19333f1d41d0e8906b3d9ece0a4770f76f97dafc8522858a756e0ce9b9729\x2dinit-merged.mount: Succeeded.
- Jan 6 16:49:58 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-cfb19333f1d41d0e8906b3d9ece0a4770f76f97dafc8522858a756e0ce9b9729\x2dinit-merged.mount: Succeeded.
- Jan 6 16:49:58 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-cfb19333f1d41d0e8906b3d9ece0a4770f76f97dafc8522858a756e0ce9b9729-merged.mount: Succeeded.
- Jan 6 16:49:58 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-cfb19333f1d41d0e8906b3d9ece0a4770f76f97dafc8522858a756e0ce9b9729-merged.mount: Succeeded.
- Jan 6 16:49:58 kubernetes-master1 containerd[821]: time="2021-01-06T16:49:58.433350564+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e94dc7603b1be7b5d6deb2bf419828eaf5ce4c85077ab620361af0076d7254b8/shim.sock" debug=false pid=4626
- Jan 6 16:49:58 kubernetes-master1 systemd[1]: Started libcontainer container e94dc7603b1be7b5d6deb2bf419828eaf5ce4c85077ab620361af0076d7254b8.
- Jan 6 16:49:58 kubernetes-master1 systemd[1]: docker-e94dc7603b1be7b5d6deb2bf419828eaf5ce4c85077ab620361af0076d7254b8.scope: Succeeded.
- Jan 6 16:49:58 kubernetes-master1 containerd[821]: time="2021-01-06T16:49:58.690275043+01:00" level=info msg="shim reaped" id=e94dc7603b1be7b5d6deb2bf419828eaf5ce4c85077ab620361af0076d7254b8
- Jan 6 16:49:58 kubernetes-master1 dockerd[828]: time="2021-01-06T16:49:58.702221707+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:49:58 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-cfb19333f1d41d0e8906b3d9ece0a4770f76f97dafc8522858a756e0ce9b9729-merged.mount: Succeeded.
- Jan 6 16:49:58 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-cfb19333f1d41d0e8906b3d9ece0a4770f76f97dafc8522858a756e0ce9b9729-merged.mount: Succeeded.
- Jan 6 16:49:59 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-2736550bc618ae66d920c6a7fd65939add0075fc4c6932a191a0fe6e8215cbc5\x2dinit-merged.mount: Succeeded.
- Jan 6 16:49:59 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-2736550bc618ae66d920c6a7fd65939add0075fc4c6932a191a0fe6e8215cbc5\x2dinit-merged.mount: Succeeded.
- Jan 6 16:49:59 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-2736550bc618ae66d920c6a7fd65939add0075fc4c6932a191a0fe6e8215cbc5-merged.mount: Succeeded.
- Jan 6 16:49:59 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-2736550bc618ae66d920c6a7fd65939add0075fc4c6932a191a0fe6e8215cbc5-merged.mount: Succeeded.
- Jan 6 16:49:59 kubernetes-master1 containerd[821]: time="2021-01-06T16:49:59.460576025+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7/shim.sock" debug=false pid=4689
- Jan 6 16:49:59 kubernetes-master1 systemd[1]: Started libcontainer container c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7.
- Jan 6 16:50:02 kubernetes-master1 systemd-udevd[4451]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
- Jan 6 16:50:02 kubernetes-master1 kernel: [ 512.794166] ipip: IPv4 and MPLS over IPv4 tunneling driver
- Jan 6 16:50:02 kubernetes-master1 networkd-dispatcher[809]: WARNING:Unknown index 4 seen, reloading interface list
- Jan 6 16:50:02 kubernetes-master1 networkd-dispatcher[809]: WARNING:Unknown index 5 seen, reloading interface list
- Jan 6 16:50:02 kubernetes-master1 systemd-udevd[4588]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
- Jan 6 16:50:02 kubernetes-master1 systemd-udevd[4588]: Using default interface naming scheme 'v245'.
- Jan 6 16:50:02 kubernetes-master1 systemd-udevd[4588]: calico_tmp_A: Could not generate persistent MAC: No data available
- Jan 6 16:50:02 kubernetes-master1 systemd-udevd[4451]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
- Jan 6 16:50:02 kubernetes-master1 systemd-udevd[4451]: Using default interface naming scheme 'v245'.
- Jan 6 16:50:02 kubernetes-master1 systemd-udevd[4451]: calico_tmp_B: Could not generate persistent MAC: No data available
- Jan 6 16:50:03 kubernetes-master1 systemd-networkd[773]: tunl0: Link UP
- Jan 6 16:50:03 kubernetes-master1 systemd-networkd[773]: tunl0: Gained carrier
- Jan 6 16:50:10 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.sJnJv4.mount: Succeeded.
- Jan 6 16:50:10 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.sJnJv4.mount: Succeeded.
- Jan 6 16:50:10 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.8rZMbn.mount: Succeeded.
- Jan 6 16:50:10 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.8rZMbn.mount: Succeeded.
- Jan 6 16:50:10 kubernetes-master1 kubelet[3514]: I0106 16:50:10.728272 3514 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:50:10 kubernetes-master1 kubelet[3514]: I0106 16:50:10.748222 3514 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:50:10 kubernetes-master1 systemd[1]: Created slice libcontainer container kubepods-burstable-pod6b33cf08_35a1_40ea_b5fd_fe3f1d1a4dbf.slice.
- Jan 6 16:50:10 kubernetes-master1 systemd[1]: Created slice libcontainer container kubepods-burstable-poda2c1e027_f5be_4a48_bcc5_03f9f12e2ec1.slice.
- Jan 6 16:50:10 kubernetes-master1 kubelet[3514]: I0106 16:50:10.833860 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-mmrhd" (UniqueName: "kubernetes.io/secret/6b33cf08-35a1-40ea-b5fd-fe3f1d1a4dbf-coredns-token-mmrhd") pod "coredns-74ff55c5b-tctl9" (UID: "6b33cf08-35a1-40ea-b5fd-fe3f1d1a4dbf")
- Jan 6 16:50:10 kubernetes-master1 kubelet[3514]: I0106 16:50:10.834029 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6b33cf08-35a1-40ea-b5fd-fe3f1d1a4dbf-config-volume") pod "coredns-74ff55c5b-tctl9" (UID: "6b33cf08-35a1-40ea-b5fd-fe3f1d1a4dbf")
- Jan 6 16:50:10 kubernetes-master1 kubelet[3514]: I0106 16:50:10.934454 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a2c1e027-f5be-4a48-bcc5-03f9f12e2ec1-config-volume") pod "coredns-74ff55c5b-skdzk" (UID: "a2c1e027-f5be-4a48-bcc5-03f9f12e2ec1")
- Jan 6 16:50:10 kubernetes-master1 kubelet[3514]: I0106 16:50:10.934572 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-mmrhd" (UniqueName: "kubernetes.io/secret/a2c1e027-f5be-4a48-bcc5-03f9f12e2ec1-coredns-token-mmrhd") pod "coredns-74ff55c5b-skdzk" (UID: "a2c1e027-f5be-4a48-bcc5-03f9f12e2ec1")
- Jan 6 16:50:11 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-ca54e1fd15290af14d02b6dec10ac66af3db7d2b200d8ad051eeddd162d5e679\x2dinit-merged.mount: Succeeded.
- Jan 6 16:50:11 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-ca54e1fd15290af14d02b6dec10ac66af3db7d2b200d8ad051eeddd162d5e679\x2dinit-merged.mount: Succeeded.
- Jan 6 16:50:11 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-ca54e1fd15290af14d02b6dec10ac66af3db7d2b200d8ad051eeddd162d5e679-merged.mount: Succeeded.
- Jan 6 16:50:11 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-ca54e1fd15290af14d02b6dec10ac66af3db7d2b200d8ad051eeddd162d5e679-merged.mount: Succeeded.
- Jan 6 16:50:11 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-c12b826b4d60a860287cf5a4e8ff9c997692e41fc9ce574034e1eaf342352890\x2dinit-merged.mount: Succeeded.
- Jan 6 16:50:11 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-c12b826b4d60a860287cf5a4e8ff9c997692e41fc9ce574034e1eaf342352890\x2dinit-merged.mount: Succeeded.
- Jan 6 16:50:11 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-c12b826b4d60a860287cf5a4e8ff9c997692e41fc9ce574034e1eaf342352890-merged.mount: Succeeded.
- Jan 6 16:50:11 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-c12b826b4d60a860287cf5a4e8ff9c997692e41fc9ce574034e1eaf342352890-merged.mount: Succeeded.
- Jan 6 16:50:11 kubernetes-master1 containerd[821]: time="2021-01-06T16:50:11.220794573+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc/shim.sock" debug=false pid=5128
- Jan 6 16:50:11 kubernetes-master1 containerd[821]: time="2021-01-06T16:50:11.240688251+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612/shim.sock" debug=false pid=5144
- Jan 6 16:50:11 kubernetes-master1 systemd[1]: Started libcontainer container 97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc.
- Jan 6 16:50:11 kubernetes-master1 systemd[1]: Started libcontainer container cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612.
- Jan 6 16:50:11 kubernetes-master1 kubelet[3514]: W0106 16:50:11.721635 3514 pod_container_deletor.go:79] Container "cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" not found in pod's containers
- Jan 6 16:50:11 kubernetes-master1 kubelet[3514]: W0106 16:50:11.835831 3514 pod_container_deletor.go:79] Container "97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" not found in pod's containers
- Jan 6 16:50:11 kubernetes-master1 systemd-udevd[5196]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
- Jan 6 16:50:11 kubernetes-master1 systemd-udevd[5196]: Using default interface naming scheme 'v245'.
- Jan 6 16:50:11 kubernetes-master1 networkd-dispatcher[809]: WARNING:Unknown index 7 seen, reloading interface list
- Jan 6 16:50:11 kubernetes-master1 systemd-networkd[773]: cali57ef0e67f63: Link UP
- Jan 6 16:50:11 kubernetes-master1 systemd-networkd[773]: cali57ef0e67f63: Gained carrier
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.818 [INFO][5247] plugin.go 257: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0 coredns-74ff55c5b- kube-system a2c1e027-f5be-4a48-bcc5-03f9f12e2ec1 755 0 2021-01-06 16:47:20 +0100 CET <nil> <nil> map[k8s-app:kube-dns pod-template-hash:74ff55c5b projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s kubernetes-master1 coredns-74ff55c5b-skdzk eth0 [] [] [kns.kube-system ksa.kube-system.coredns] cali57ef0e67f63 [{dns UDP 53} {dns-tcp TCP 53} {metrics TCP 9153}]}} ContainerID="cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" Namespace="kube-system" Pod="coredns-74ff55c5b-skdzk" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.820 [INFO][5247] k8s.go 69: Extracted identifiers for CmdAddK8s ContainerID="cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" Namespace="kube-system" Pod="coredns-74ff55c5b-skdzk" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.882 [INFO][5265] ipam_plugin.go 214: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" HandleID="k8s-pod-network.cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" Workload="kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.899 [INFO][5265] ipam_plugin.go 253: Auto assigning IP ContainerID="cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" HandleID="k8s-pod-network.cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" Workload="kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000469260), Attrs:map[string]string{"namespace":"kube-system", "node":"kubernetes-master1", "pod":"coredns-74ff55c5b-skdzk"}, Hostname:"kubernetes-master1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil)}
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.899 [INFO][5265] ipam.go 92: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'kubernetes-master1'
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.899 [INFO][5265] ipam.go 548: Looking up existing affinities for host handle="k8s-pod-network.cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.907 [INFO][5265] ipam.go 346: Looking up existing affinities for host host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.914 [INFO][5265] ipam.go 429: Trying affinity for 172.16.69.192/26 host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.920 [INFO][5265] ipam.go 140: Attempting to load block cidr=172.16.69.192/26 host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.925 [INFO][5265] ipam.go 217: Affinity is confirmed and block has been loaded cidr=172.16.69.192/26 host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.925 [INFO][5265] ipam.go 947: Attempting to assign 1 addresses from block block=172.16.69.192/26 handle="k8s-pod-network.cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.928 [INFO][5265] ipam.go 1424: Creating new handle: k8s-pod-network.cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.944 [INFO][5265] ipam.go 970: Writing block in order to claim IPs block=172.16.69.192/26 handle="k8s-pod-network.cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.951 [INFO][5265] ipam.go 983: Successfully claimed IPs: [172.16.69.193/26] block=172.16.69.192/26 handle="k8s-pod-network.cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.951 [INFO][5265] ipam.go 706: Auto-assigned 1 out of 1 IPv4s: [172.16.69.193/26] handle="k8s-pod-network.cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.952 [INFO][5265] ipam_plugin.go 255: Calico CNI IPAM assigned addresses IPv4=[172.16.69.193/26] IPv6=[] ContainerID="cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" HandleID="k8s-pod-network.cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" Workload="kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.957 [INFO][5247] k8s.go 372: Populated endpoint ContainerID="cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" Namespace="kube-system" Pod="coredns-74ff55c5b-skdzk" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0", GenerateName:"coredns-74ff55c5b-", Namespace:"kube-system", SelfLink:"", UID:"a2c1e027-f5be-4a48-bcc5-03f9f12e2ec1", ResourceVersion:"755", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63745544840, loc:(*time.Location)(0x25fda40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"74ff55c5b", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"kubernetes-master1", ContainerID:"", Pod:"coredns-74ff55c5b-skdzk", Endpoint:"eth0", IPNetworks:[]string{"172.16.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57ef0e67f63", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}}
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.957 [INFO][5247] k8s.go 373: Calico CNI using IPs: [172.16.69.193/32] ContainerID="cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" Namespace="kube-system" Pod="coredns-74ff55c5b-skdzk" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.957 [INFO][5247] dataplane_linux.go 66: Setting the host side veth name to cali57ef0e67f63 ContainerID="cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" Namespace="kube-system" Pod="coredns-74ff55c5b-skdzk" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.959 [INFO][5247] dataplane_linux.go 420: Disabling IPv4 forwarding ContainerID="cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" Namespace="kube-system" Pod="coredns-74ff55c5b-skdzk" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.994 [INFO][5247] k8s.go 400: Added Mac, interface name, and active container ID to endpoint ContainerID="cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" Namespace="kube-system" Pod="coredns-74ff55c5b-skdzk" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0", GenerateName:"coredns-74ff55c5b-", Namespace:"kube-system", SelfLink:"", UID:"a2c1e027-f5be-4a48-bcc5-03f9f12e2ec1", ResourceVersion:"755", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63745544840, loc:(*time.Location)(0x25fda40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"74ff55c5b", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"kubernetes-master1", ContainerID:"cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612", Pod:"coredns-74ff55c5b-skdzk", Endpoint:"eth0", IPNetworks:[]string{"172.16.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57ef0e67f63", MAC:"4a:6b:3d:0d:27:38", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}}
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.013 [INFO][5247] k8s.go 474: Wrote updated endpoint to datastore ContainerID="cc46e2419c22168b0ae160f9c04647a1e706bee2f326a5029f2297b8f1fba612" Namespace="kube-system" Pod="coredns-74ff55c5b-skdzk" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--skdzk-eth0"
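The Calico CNI ADD above walks the usual chain: plugin.go finds the existing WorkloadEndpoint, ipam_plugin.go requests one IPv4, ipam.go confirms the node's affinity for block 172.16.69.192/26 and claims 172.16.69.193, and k8s.go writes the updated endpoint back to the datastore. A minimal sketch of the block arithmetic, using only the CIDR and address printed in the log:

import ipaddress

block = ipaddress.ip_network("172.16.69.192/26")   # node-affine block from the ipam.go lines
assigned = ipaddress.ip_address("172.16.69.193")   # address claimed in ipam.go 983

print(assigned in block)      # True: the claimed IP sits inside the affine block
print(block.num_addresses)    # 64 addresses per /26 block

The same /26 also covers the later assignments in this log (172.16.69.194 for coredns-74ff55c5b-tctl9 and 172.16.69.195 for calico-kube-controllers), which is why the subsequent CNI ADDs reuse the already-confirmed block instead of claiming a new one.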
- Jan 6 16:50:12 kubernetes-master1 dockerd[828]: time="2021-01-06T16:50:12.052993303+01:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
- Jan 6 16:50:12 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-7a4956cb42cc35132123d3a12a0fa078722db89126a86a5cdb2e2d821327ebcf\x2dinit-merged.mount: Succeeded.
- Jan 6 16:50:12 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-7a4956cb42cc35132123d3a12a0fa078722db89126a86a5cdb2e2d821327ebcf\x2dinit-merged.mount: Succeeded.
- Jan 6 16:50:12 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-7a4956cb42cc35132123d3a12a0fa078722db89126a86a5cdb2e2d821327ebcf-merged.mount: Succeeded.
- Jan 6 16:50:12 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-7a4956cb42cc35132123d3a12a0fa078722db89126a86a5cdb2e2d821327ebcf-merged.mount: Succeeded.
- Jan 6 16:50:12 kubernetes-master1 containerd[821]: time="2021-01-06T16:50:12.161980594+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/066708eddcc647768ac04ccb9fbe844c640e3e1191362c88172633e40afbc0d3/shim.sock" debug=false pid=5308
- Jan 6 16:50:12 kubernetes-master1 systemd[1]: Started libcontainer container 066708eddcc647768ac04ccb9fbe844c640e3e1191362c88172633e40afbc0d3.
- Jan 6 16:50:12 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-066708eddcc647768ac04ccb9fbe844c640e3e1191362c88172633e40afbc0d3-runc.HHEdQK.mount: Succeeded.
- Jan 6 16:50:12 kubernetes-master1 networkd-dispatcher[809]: WARNING:Unknown index 8 seen, reloading interface list
- Jan 6 16:50:12 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-066708eddcc647768ac04ccb9fbe844c640e3e1191362c88172633e40afbc0d3-runc.HHEdQK.mount: Succeeded.
- Jan 6 16:50:12 kubernetes-master1 systemd-networkd[773]: cali396d1b83556: Link UP
- Jan 6 16:50:12 kubernetes-master1 systemd-networkd[773]: cali396d1b83556: Gained carrier
- Jan 6 16:50:12 kubernetes-master1 systemd-udevd[5158]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
- Jan 6 16:50:12 kubernetes-master1 systemd-udevd[5158]: Using default interface naming scheme 'v245'.
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.924 [INFO][5274] plugin.go 257: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0 coredns-74ff55c5b- kube-system 6b33cf08-35a1-40ea-b5fd-fe3f1d1a4dbf 754 0 2021-01-06 16:47:20 +0100 CET <nil> <nil> map[k8s-app:kube-dns pod-template-hash:74ff55c5b projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s kubernetes-master1 coredns-74ff55c5b-tctl9 eth0 [] [] [kns.kube-system ksa.kube-system.coredns] cali396d1b83556 [{dns UDP 53} {dns-tcp TCP 53} {metrics TCP 9153}]}} ContainerID="97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" Namespace="kube-system" Pod="coredns-74ff55c5b-tctl9" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:11.925 [INFO][5274] k8s.go 69: Extracted identifiers for CmdAddK8s ContainerID="97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" Namespace="kube-system" Pod="coredns-74ff55c5b-tctl9" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.035 [INFO][5281] ipam_plugin.go 214: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" HandleID="k8s-pod-network.97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" Workload="kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.066 [INFO][5281] ipam_plugin.go 253: Auto assigning IP ContainerID="97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" HandleID="k8s-pod-network.97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" Workload="kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a2390), Attrs:map[string]string{"namespace":"kube-system", "node":"kubernetes-master1", "pod":"coredns-74ff55c5b-tctl9"}, Hostname:"kubernetes-master1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil)}
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.066 [INFO][5281] ipam.go 92: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'kubernetes-master1'
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.066 [INFO][5281] ipam.go 548: Looking up existing affinities for host handle="k8s-pod-network.97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.098 [INFO][5281] ipam.go 346: Looking up existing affinities for host host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.107 [INFO][5281] ipam.go 429: Trying affinity for 172.16.69.192/26 host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.120 [INFO][5281] ipam.go 140: Attempting to load block cidr=172.16.69.192/26 host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.125 [INFO][5281] ipam.go 217: Affinity is confirmed and block has been loaded cidr=172.16.69.192/26 host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.125 [INFO][5281] ipam.go 947: Attempting to assign 1 addresses from block block=172.16.69.192/26 handle="k8s-pod-network.97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.132 [INFO][5281] ipam.go 1424: Creating new handle: k8s-pod-network.97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.168 [INFO][5281] ipam.go 970: Writing block in order to claim IPs block=172.16.69.192/26 handle="k8s-pod-network.97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.175 [INFO][5281] ipam.go 983: Successfully claimed IPs: [172.16.69.194/26] block=172.16.69.192/26 handle="k8s-pod-network.97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.176 [INFO][5281] ipam.go 706: Auto-assigned 1 out of 1 IPv4s: [172.16.69.194/26] handle="k8s-pod-network.97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" host="kubernetes-master1"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.176 [INFO][5281] ipam_plugin.go 255: Calico CNI IPAM assigned addresses IPv4=[172.16.69.194/26] IPv6=[] ContainerID="97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" HandleID="k8s-pod-network.97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" Workload="kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.179 [INFO][5274] k8s.go 372: Populated endpoint ContainerID="97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" Namespace="kube-system" Pod="coredns-74ff55c5b-tctl9" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0", GenerateName:"coredns-74ff55c5b-", Namespace:"kube-system", SelfLink:"", UID:"6b33cf08-35a1-40ea-b5fd-fe3f1d1a4dbf", ResourceVersion:"754", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63745544840, loc:(*time.Location)(0x25fda40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"74ff55c5b", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"kubernetes-master1", ContainerID:"", Pod:"coredns-74ff55c5b-tctl9", Endpoint:"eth0", IPNetworks:[]string{"172.16.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali396d1b83556", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}}
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.179 [INFO][5274] k8s.go 373: Calico CNI using IPs: [172.16.69.194/32] ContainerID="97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" Namespace="kube-system" Pod="coredns-74ff55c5b-tctl9" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.179 [INFO][5274] dataplane_linux.go 66: Setting the host side veth name to cali396d1b83556 ContainerID="97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" Namespace="kube-system" Pod="coredns-74ff55c5b-tctl9" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.181 [INFO][5274] dataplane_linux.go 420: Disabling IPv4 forwarding ContainerID="97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" Namespace="kube-system" Pod="coredns-74ff55c5b-tctl9" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0"
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.209 [INFO][5274] k8s.go 400: Added Mac, interface name, and active container ID to endpoint ContainerID="97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" Namespace="kube-system" Pod="coredns-74ff55c5b-tctl9" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0", GenerateName:"coredns-74ff55c5b-", Namespace:"kube-system", SelfLink:"", UID:"6b33cf08-35a1-40ea-b5fd-fe3f1d1a4dbf", ResourceVersion:"754", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63745544840, loc:(*time.Location)(0x25fda40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"74ff55c5b", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"kubernetes-master1", ContainerID:"97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc", Pod:"coredns-74ff55c5b-tctl9", Endpoint:"eth0", IPNetworks:[]string{"172.16.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali396d1b83556", MAC:"32:5e:e5:cc:71:26", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}}
- Jan 6 16:50:12 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:12.234 [INFO][5274] k8s.go 474: Wrote updated endpoint to datastore ContainerID="97ac85ce563b2961621aaf7be0fcdb77724268c7cd7ef6554702f44b708316cc" Namespace="kube-system" Pod="coredns-74ff55c5b-tctl9" WorkloadEndpoint="kubernetes--master1-k8s-coredns--74ff55c5b--tctl9-eth0"
- Jan 6 16:50:12 kubernetes-master1 dockerd[828]: time="2021-01-06T16:50:12.285708441+01:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
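The dockerd warning just above ("Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.") is benign for these pods: memory limits still apply, only the swap limit is unenforceable because the kernel was booted without swap accounting. A quick check, assuming cgroup v1 as dockerd uses here (on Ubuntu the usual remedy is adding cgroup_enable=memory swapaccount=1 to the kernel command line and rebooting):

import os

# Swap limit support under cgroup v1 shows up as a memsw control file at the
# root of the memory cgroup, plus swapaccount=1 on the kernel command line.
memsw = "/sys/fs/cgroup/memory/memory.memsw.limit_in_bytes"
with open("/proc/cmdline") as f:
    cmdline = f.read()

print("memsw control file present:", os.path.exists(memsw))
print("swapaccount=1 on cmdline:", "swapaccount=1" in cmdline)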
- Jan 6 16:50:12 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-c865de45b51cb70020de7c0f7d05949f37767f99347993063815dbabaf68880b\x2dinit-merged.mount: Succeeded.
- Jan 6 16:50:12 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-c865de45b51cb70020de7c0f7d05949f37767f99347993063815dbabaf68880b\x2dinit-merged.mount: Succeeded.
- Jan 6 16:50:12 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-c865de45b51cb70020de7c0f7d05949f37767f99347993063815dbabaf68880b-merged.mount: Succeeded.
- Jan 6 16:50:12 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-c865de45b51cb70020de7c0f7d05949f37767f99347993063815dbabaf68880b-merged.mount: Succeeded.
- Jan 6 16:50:12 kubernetes-master1 containerd[821]: time="2021-01-06T16:50:12.366572003+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d9fc708e2e19cad392552f322f9881e659d64783618778279ba8cb683dfc3804/shim.sock" debug=false pid=5349
- Jan 6 16:50:12 kubernetes-master1 systemd[1]: Started libcontainer container d9fc708e2e19cad392552f322f9881e659d64783618778279ba8cb683dfc3804.
- Jan 6 16:50:12 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-d9fc708e2e19cad392552f322f9881e659d64783618778279ba8cb683dfc3804-runc.Oy9enK.mount: Succeeded.
- Jan 6 16:50:12 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-d9fc708e2e19cad392552f322f9881e659d64783618778279ba8cb683dfc3804-runc.Oy9enK.mount: Succeeded.
- Jan 6 16:50:13 kubernetes-master1 systemd-networkd[773]: cali57ef0e67f63: Gained IPv6LL
- Jan 6 16:50:13 kubernetes-master1 systemd-networkd[773]: cali396d1b83556: Gained IPv6LL
- Jan 6 16:50:16 kubernetes-master1 kubelet[3514]: I0106 16:50:16.729550 3514 topology_manager.go:187] [topologymanager] Topology Admit Handler
- Jan 6 16:50:16 kubernetes-master1 systemd[1]: Created slice libcontainer container kubepods-besteffort-podb5c99ee0_7177_457a_bdfb_9d833e4c183a.slice.
- Jan 6 16:50:16 kubernetes-master1 kubelet[3514]: I0106 16:50:16.868078 3514 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-kube-controllers-token-xjg6v" (UniqueName: "kubernetes.io/secret/b5c99ee0-7177-457a-bdfb-9d833e4c183a-calico-kube-controllers-token-xjg6v") pod "calico-kube-controllers-744cfdf676-mks4d" (UID: "b5c99ee0-7177-457a-bdfb-9d833e4c183a")
- Jan 6 16:50:17 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-951e9d278f6e95078bb778f821e0fff81d4c316346a38a12659aa25b4f8318cb\x2dinit-merged.mount: Succeeded.
- Jan 6 16:50:17 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-951e9d278f6e95078bb778f821e0fff81d4c316346a38a12659aa25b4f8318cb\x2dinit-merged.mount: Succeeded.
- Jan 6 16:50:17 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-951e9d278f6e95078bb778f821e0fff81d4c316346a38a12659aa25b4f8318cb-merged.mount: Succeeded.
- Jan 6 16:50:17 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-951e9d278f6e95078bb778f821e0fff81d4c316346a38a12659aa25b4f8318cb-merged.mount: Succeeded.
- Jan 6 16:50:17 kubernetes-master1 containerd[821]: time="2021-01-06T16:50:17.214683895+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b/shim.sock" debug=false pid=5499
- Jan 6 16:50:17 kubernetes-master1 systemd[1]: Started libcontainer container 2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b.
- Jan 6 16:50:17 kubernetes-master1 systemd-udevd[5537]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
- Jan 6 16:50:17 kubernetes-master1 systemd-udevd[5537]: Using default interface naming scheme 'v245'.
- Jan 6 16:50:17 kubernetes-master1 networkd-dispatcher[809]: WARNING:Unknown index 9 seen, reloading interface list
- Jan 6 16:50:17 kubernetes-master1 systemd-networkd[773]: cali9f47dc26969: Link UP
- Jan 6 16:50:17 kubernetes-master1 systemd-networkd[773]: cali9f47dc26969: Gained carrier
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.611 [INFO][5571] plugin.go 257: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0 calico-kube-controllers-744cfdf676- kube-system b5c99ee0-7177-457a-bdfb-9d833e4c183a 788 0 2021-01-06 16:49:55 +0100 CET <nil> <nil> map[k8s-app:calico-kube-controllers pod-template-hash:744cfdf676 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s kubernetes-master1 calico-kube-controllers-744cfdf676-mks4d eth0 [] [] [kns.kube-system ksa.kube-system.calico-kube-controllers] cali9f47dc26969 []}} ContainerID="2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" Namespace="kube-system" Pod="calico-kube-controllers-744cfdf676-mks4d" WorkloadEndpoint="kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.611 [INFO][5571] k8s.go 69: Extracted identifiers for CmdAddK8s ContainerID="2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" Namespace="kube-system" Pod="calico-kube-controllers-744cfdf676-mks4d" WorkloadEndpoint="kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.651 [INFO][5578] ipam_plugin.go 214: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" HandleID="k8s-pod-network.2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" Workload="kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.668 [INFO][5578] ipam_plugin.go 253: Auto assigning IP ContainerID="2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" HandleID="k8s-pod-network.2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" Workload="kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003792d0), Attrs:map[string]string{"namespace":"kube-system", "node":"kubernetes-master1", "pod":"calico-kube-controllers-744cfdf676-mks4d"}, Hostname:"kubernetes-master1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil)}
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.668 [INFO][5578] ipam.go 92: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'kubernetes-master1'
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.668 [INFO][5578] ipam.go 548: Looking up existing affinities for host handle="k8s-pod-network.2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" host="kubernetes-master1"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.677 [INFO][5578] ipam.go 346: Looking up existing affinities for host host="kubernetes-master1"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.684 [INFO][5578] ipam.go 429: Trying affinity for 172.16.69.192/26 host="kubernetes-master1"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.687 [INFO][5578] ipam.go 140: Attempting to load block cidr=172.16.69.192/26 host="kubernetes-master1"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.691 [INFO][5578] ipam.go 217: Affinity is confirmed and block has been loaded cidr=172.16.69.192/26 host="kubernetes-master1"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.691 [INFO][5578] ipam.go 947: Attempting to assign 1 addresses from block block=172.16.69.192/26 handle="k8s-pod-network.2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" host="kubernetes-master1"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.693 [INFO][5578] ipam.go 1424: Creating new handle: k8s-pod-network.2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.699 [INFO][5578] ipam.go 970: Writing block in order to claim IPs block=172.16.69.192/26 handle="k8s-pod-network.2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" host="kubernetes-master1"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.707 [INFO][5578] ipam.go 983: Successfully claimed IPs: [172.16.69.195/26] block=172.16.69.192/26 handle="k8s-pod-network.2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" host="kubernetes-master1"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.707 [INFO][5578] ipam.go 706: Auto-assigned 1 out of 1 IPv4s: [172.16.69.195/26] handle="k8s-pod-network.2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" host="kubernetes-master1"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.707 [INFO][5578] ipam_plugin.go 255: Calico CNI IPAM assigned addresses IPv4=[172.16.69.195/26] IPv6=[] ContainerID="2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" HandleID="k8s-pod-network.2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" Workload="kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.709 [INFO][5571] k8s.go 372: Populated endpoint ContainerID="2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" Namespace="kube-system" Pod="calico-kube-controllers-744cfdf676-mks4d" WorkloadEndpoint="kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0", GenerateName:"calico-kube-controllers-744cfdf676-", Namespace:"kube-system", SelfLink:"", UID:"b5c99ee0-7177-457a-bdfb-9d833e4c183a", ResourceVersion:"788", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63745544995, loc:(*time.Location)(0x25fda40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-kube-controllers", "pod-template-hash":"744cfdf676", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"kubernetes-master1", ContainerID:"", Pod:"calico-kube-controllers-744cfdf676-mks4d", Endpoint:"eth0", IPNetworks:[]string{"172.16.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.calico-kube-controllers"}, InterfaceName:"cali9f47dc26969", MAC:"", Ports:[]v3.EndpointPort(nil)}}
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.709 [INFO][5571] k8s.go 373: Calico CNI using IPs: [172.16.69.195/32] ContainerID="2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" Namespace="kube-system" Pod="calico-kube-controllers-744cfdf676-mks4d" WorkloadEndpoint="kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.709 [INFO][5571] dataplane_linux.go 66: Setting the host side veth name to cali9f47dc26969 ContainerID="2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" Namespace="kube-system" Pod="calico-kube-controllers-744cfdf676-mks4d" WorkloadEndpoint="kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.711 [INFO][5571] dataplane_linux.go 420: Disabling IPv4 forwarding ContainerID="2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" Namespace="kube-system" Pod="calico-kube-controllers-744cfdf676-mks4d" WorkloadEndpoint="kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0"
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.758 [INFO][5571] k8s.go 400: Added Mac, interface name, and active container ID to endpoint ContainerID="2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" Namespace="kube-system" Pod="calico-kube-controllers-744cfdf676-mks4d" WorkloadEndpoint="kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0", GenerateName:"calico-kube-controllers-744cfdf676-", Namespace:"kube-system", SelfLink:"", UID:"b5c99ee0-7177-457a-bdfb-9d833e4c183a", ResourceVersion:"788", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63745544995, loc:(*time.Location)(0x25fda40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-kube-controllers", "pod-template-hash":"744cfdf676", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"kubernetes-master1", ContainerID:"2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b", Pod:"calico-kube-controllers-744cfdf676-mks4d", Endpoint:"eth0", IPNetworks:[]string{"172.16.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.calico-kube-controllers"}, InterfaceName:"cali9f47dc26969", MAC:"be:ba:e7:b9:c7:e9", Ports:[]v3.EndpointPort(nil)}}
- Jan 6 16:50:17 kubernetes-master1 kubelet[3514]: 2021-01-06 16:50:17.771 [INFO][5571] k8s.go 474: Wrote updated endpoint to datastore ContainerID="2244604b08bc9d4f42c6c2104c9d7a0d2cca3705879f7be519a679ff671b264b" Namespace="kube-system" Pod="calico-kube-controllers-744cfdf676-mks4d" WorkloadEndpoint="kubernetes--master1-k8s-calico--kube--controllers--744cfdf676--mks4d-eth0"
- Jan 6 16:50:17 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-150afe540a89d35ad10803beaa0a6387cdc321900c57c54fd604e412ed44ec66\x2dinit-merged.mount: Succeeded.
- Jan 6 16:50:17 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-150afe540a89d35ad10803beaa0a6387cdc321900c57c54fd604e412ed44ec66\x2dinit-merged.mount: Succeeded.
- Jan 6 16:50:17 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-150afe540a89d35ad10803beaa0a6387cdc321900c57c54fd604e412ed44ec66-merged.mount: Succeeded.
- Jan 6 16:50:17 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-150afe540a89d35ad10803beaa0a6387cdc321900c57c54fd604e412ed44ec66-merged.mount: Succeeded.
- Jan 6 16:50:17 kubernetes-master1 containerd[821]: time="2021-01-06T16:50:17.915150207+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83/shim.sock" debug=false pid=5600
- Jan 6 16:50:17 kubernetes-master1 systemd[1]: Started libcontainer container 9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83.
- Jan 6 16:50:17 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.5mhMWR.mount: Succeeded.
- Jan 6 16:50:17 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.5mhMWR.mount: Succeeded.
- Jan 6 16:50:19 kubernetes-master1 systemd-networkd[773]: cali9f47dc26969: Gained IPv6LL
- Jan 6 16:50:20 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.p5PQ06.mount: Succeeded.
- Jan 6 16:50:20 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.p5PQ06.mount: Succeeded.
- Jan 6 16:50:20 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.jM3GDq.mount: Succeeded.
- Jan 6 16:50:20 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.jM3GDq.mount: Succeeded.
- Jan 6 16:50:20 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.CyFB0O.mount: Succeeded.
- Jan 6 16:50:20 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.CyFB0O.mount: Succeeded.
- Jan 6 16:50:30 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.xFCyT4.mount: Succeeded.
- Jan 6 16:50:30 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.xFCyT4.mount: Succeeded.
- Jan 6 16:50:30 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.Lm9v2j.mount: Succeeded.
- Jan 6 16:50:30 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.Lm9v2j.mount: Succeeded.
- Jan 6 16:50:30 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.cNpxxS.mount: Succeeded.
- Jan 6 16:50:30 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.cNpxxS.mount: Succeeded.
- Jan 6 16:50:40 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.naYHj0.mount: Succeeded.
- Jan 6 16:50:40 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.naYHj0.mount: Succeeded.
- Jan 6 16:50:40 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.BbNbG5.mount: Succeeded.
- Jan 6 16:50:40 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.BbNbG5.mount: Succeeded.
- Jan 6 16:50:40 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.WMsvFx.mount: Succeeded.
- Jan 6 16:50:40 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.WMsvFx.mount: Succeeded.
- Jan 6 16:50:50 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.ecpcHP.mount: Succeeded.
- Jan 6 16:50:50 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.ecpcHP.mount: Succeeded.
- Jan 6 16:50:50 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.MI92e3.mount: Succeeded.
- Jan 6 16:50:50 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.MI92e3.mount: Succeeded.
- Jan 6 16:50:50 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.cr5jF4.mount: Succeeded.
- Jan 6 16:50:50 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.cr5jF4.mount: Succeeded.
- Jan 6 16:51:00 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.btYCEz.mount: Succeeded.
- Jan 6 16:51:00 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.btYCEz.mount: Succeeded.
- Jan 6 16:51:00 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.it027z.mount: Succeeded.
- Jan 6 16:51:00 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.it027z.mount: Succeeded.
- Jan 6 16:51:00 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.Hfq4Ww.mount: Succeeded.
- Jan 6 16:51:00 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.Hfq4Ww.mount: Succeeded.
- Jan 6 16:51:10 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.ax4FAC.mount: Succeeded.
- Jan 6 16:51:10 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.ax4FAC.mount: Succeeded.
- Jan 6 16:51:10 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.2TxAIL.mount: Succeeded.
- Jan 6 16:51:10 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.2TxAIL.mount: Succeeded.
- Jan 6 16:51:10 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.TIyT7q.mount: Succeeded.
- Jan 6 16:51:10 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.TIyT7q.mount: Succeeded.
- Jan 6 16:51:20 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.NkS4Mf.mount: Succeeded.
- Jan 6 16:51:20 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.NkS4Mf.mount: Succeeded.
- Jan 6 16:51:20 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.ZW3Xvk.mount: Succeeded.
- Jan 6 16:51:20 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.ZW3Xvk.mount: Succeeded.
- Jan 6 16:51:20 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.G2T5aN.mount: Succeeded.
- Jan 6 16:51:20 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.G2T5aN.mount: Succeeded.
- Jan 6 16:51:30 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.QO4bZI.mount: Succeeded.
- Jan 6 16:51:30 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.QO4bZI.mount: Succeeded.
- Jan 6 16:51:30 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.flfYNS.mount: Succeeded.
- Jan 6 16:51:30 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.flfYNS.mount: Succeeded.
- Jan 6 16:51:30 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.Urt92L.mount: Succeeded.
- Jan 6 16:51:30 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.Urt92L.mount: Succeeded.
- Jan 6 16:51:40 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.3QO9eS.mount: Succeeded.
- Jan 6 16:51:40 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.3QO9eS.mount: Succeeded.
- Jan 6 16:51:40 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.8daWY5.mount: Succeeded.
- Jan 6 16:51:40 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.8daWY5.mount: Succeeded.
- Jan 6 16:51:40 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.EUad3q.mount: Succeeded.
- Jan 6 16:51:40 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.EUad3q.mount: Succeeded.
- Jan 6 16:51:50 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.Y1xhxP.mount: Succeeded.
- Jan 6 16:51:50 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.Y1xhxP.mount: Succeeded.
- Jan 6 16:51:50 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.xTMBLe.mount: Succeeded.
- Jan 6 16:51:50 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.xTMBLe.mount: Succeeded.
- Jan 6 16:51:50 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.nW34eG.mount: Succeeded.
- Jan 6 16:51:50 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.nW34eG.mount: Succeeded.
- Jan 6 16:51:50 kubernetes-master1 kubelet[3514]: E0106 16:51:50.847348 3514 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-kubernetes-master1.1657af73d5023133", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-kubernetes-master1", UID:"66b0dec8ac0c51772012999d6a94770d", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{etcd}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 503", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff59503f1f35b33, ext:263495108936, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59503f1f35b33, ext:263495108936, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
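The rejected event above is a double failure: the etcd container's HTTP liveness probe returned 503, and posting the resulting Event to the API server then failed with 'etcdserver: request timed out', so etcd is reachable but not answering within its deadlines. The probe can be reproduced by hand; this sketch assumes the kubeadm default of a plain-HTTP health listener on 127.0.0.1:2381 (verify against the livenessProbe in /etc/kubernetes/manifests/etcd.yaml):

import urllib.request
import urllib.error

url = "http://127.0.0.1:2381/health"   # kubeadm's usual etcd metrics/health listener (assumption)
try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.status, resp.read().decode())
except urllib.error.HTTPError as e:
    # A 503 here is exactly what produced the "Liveness probe failed" event above.
    print(e.code, e.read().decode())
except OSError as e:
    print("request failed:", e)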
- Jan 6 16:51:52 kubernetes-master1 systemd[1]: docker-e189bc7d347503e5b8414d6d3badf39a1476239b0e9245fddcb53a473a1271e7.scope: Succeeded.
- Jan 6 16:51:52 kubernetes-master1 systemd[1]: docker-e189bc7d347503e5b8414d6d3badf39a1476239b0e9245fddcb53a473a1271e7.scope: Consumed 2.389s CPU time.
- Jan 6 16:51:52 kubernetes-master1 systemd[1]: docker-2578f99d3679bb2987561a9b89833a5c9fafb19423bfcc5aeab133e31ea98bba.scope: Succeeded.
- Jan 6 16:51:52 kubernetes-master1 systemd[1]: docker-2578f99d3679bb2987561a9b89833a5c9fafb19423bfcc5aeab133e31ea98bba.scope: Consumed 6.566s CPU time.
- Jan 6 16:51:53 kubernetes-master1 containerd[821]: time="2021-01-06T16:51:53.157391898+01:00" level=info msg="shim reaped" id=e189bc7d347503e5b8414d6d3badf39a1476239b0e9245fddcb53a473a1271e7
- Jan 6 16:51:53 kubernetes-master1 containerd[821]: time="2021-01-06T16:51:53.163946825+01:00" level=info msg="shim reaped" id=2578f99d3679bb2987561a9b89833a5c9fafb19423bfcc5aeab133e31ea98bba
- Jan 6 16:51:53 kubernetes-master1 dockerd[828]: time="2021-01-06T16:51:53.166996316+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:51:53 kubernetes-master1 dockerd[828]: time="2021-01-06T16:51:53.175819920+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:51:53 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-7c6ab0dd3f084b9d6fe55981ef94d7b313d76cae6517123948d1f45401696f11-merged.mount: Succeeded.
- Jan 6 16:51:53 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-7c6ab0dd3f084b9d6fe55981ef94d7b313d76cae6517123948d1f45401696f11-merged.mount: Succeeded.
- Jan 6 16:51:53 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-491b3c637393258ea4b3e6fcad33da24b2c0456c4dd801a91ad222251bff9419-merged.mount: Succeeded.
- Jan 6 16:51:53 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-491b3c637393258ea4b3e6fcad33da24b2c0456c4dd801a91ad222251bff9419-merged.mount: Succeeded.
- Jan 6 16:51:53 kubernetes-master1 kubelet[3514]: I0106 16:51:53.923506 3514 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2578f99d3679bb2987561a9b89833a5c9fafb19423bfcc5aeab133e31ea98bba
- Jan 6 16:51:53 kubernetes-master1 kubelet[3514]: I0106 16:51:53.931397 3514 scope.go:95] [topologymanager] RemoveContainer - Container ID: e189bc7d347503e5b8414d6d3badf39a1476239b0e9245fddcb53a473a1271e7
- Jan 6 16:51:53 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-81df3aa30649548df663e5bb15e0abe8ffefdfdc6315f7edcf4a6ebc2508f82a\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:53 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-81df3aa30649548df663e5bb15e0abe8ffefdfdc6315f7edcf4a6ebc2508f82a\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:53 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-e4ee73025df2bef61af1b3ef385f53b4869f37fcb06baf658c39b8bf679dffe6\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:53 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-e4ee73025df2bef61af1b3ef385f53b4869f37fcb06baf658c39b8bf679dffe6\x2dinit-merged.mount: Succeeded.
- Jan 6 16:51:54 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-e4ee73025df2bef61af1b3ef385f53b4869f37fcb06baf658c39b8bf679dffe6-merged.mount: Succeeded.
- Jan 6 16:51:54 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-81df3aa30649548df663e5bb15e0abe8ffefdfdc6315f7edcf4a6ebc2508f82a-merged.mount: Succeeded.
- Jan 6 16:51:54 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-81df3aa30649548df663e5bb15e0abe8ffefdfdc6315f7edcf4a6ebc2508f82a-merged.mount: Succeeded.
- Jan 6 16:51:54 kubernetes-master1 containerd[821]: time="2021-01-06T16:51:54.023758557+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d5101268ef47e5ae89b2ec3a6efd3f90f36d0bba717ad5f2b9434cfed4a68d84/shim.sock" debug=false pid=7408
- Jan 6 16:51:54 kubernetes-master1 containerd[821]: time="2021-01-06T16:51:54.041685579+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1b4057c22cea3b4346c3873a9f7c4c3ccc527e14de464f16c6a82268c40eb234/shim.sock" debug=false pid=7418
- Jan 6 16:51:54 kubernetes-master1 systemd[1]: Started libcontainer container d5101268ef47e5ae89b2ec3a6efd3f90f36d0bba717ad5f2b9434cfed4a68d84.
- Jan 6 16:51:54 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-d5101268ef47e5ae89b2ec3a6efd3f90f36d0bba717ad5f2b9434cfed4a68d84-runc.CuQozH.mount: Succeeded.
- Jan 6 16:51:54 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-d5101268ef47e5ae89b2ec3a6efd3f90f36d0bba717ad5f2b9434cfed4a68d84-runc.CuQozH.mount: Succeeded.
- Jan 6 16:51:54 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-1b4057c22cea3b4346c3873a9f7c4c3ccc527e14de464f16c6a82268c40eb234-runc.Jdk0F4.mount: Succeeded.
- Jan 6 16:51:54 kubernetes-master1 systemd[1]: Started libcontainer container 1b4057c22cea3b4346c3873a9f7c4c3ccc527e14de464f16c6a82268c40eb234.
- Jan 6 16:51:54 kubernetes-master1 kubelet[3514]: W0106 16:51:54.221978 3514 docker_container.go:245] Deleted previously existing symlink file: "/var/log/pods/kube-system_kube-controller-manager-kubernetes-master1_c61f75a63a6b7c302751a6cc76c53045/kube-controller-manager/1.log"
- Jan 6 16:51:54 kubernetes-master1 kubelet[3514]: W0106 16:51:54.283971 3514 docker_container.go:245] Deleted previously existing symlink file: "/var/log/pods/kube-system_kube-scheduler-kubernetes-master1_9be8cb4627e7e5ad4c3f8acabd4b49b3/kube-scheduler/1.log"
- Jan 6 16:51:54 kubernetes-master1 kubelet[3514]: E0106 16:51:54.739636 3514 controller.go:187] failed to update lease, error: rpc error: code = Unknown desc = context deadline exceeded
- Jan 6 16:51:56 kubernetes-master1 kubelet[3514]: E0106 16:51:56.208625 3514 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master1": etcdserver: request timed out
- Jan 6 16:51:56 kubernetes-master1 kubelet[3514]: W0106 16:51:56.217641 3514 status_manager.go:550] Failed to get status for pod "kube-apiserver-kubernetes-master1_kube-system(2d932d5f886b8cfbaeec38eaf545a088)": etcdserver: request timed out
- Jan 6 16:51:57 kubernetes-master1 kubelet[3514]: E0106 16:51:57.861094 3514 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-kubernetes-master1.1657af73f25de7c4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-kubernetes-master1", UID:"2d932d5f886b8cfbaeec38eaf545a088", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = context deadline exceeded' (will not retry!)
- Jan 6 16:52:00 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.WhZuey.mount: Succeeded.
- Jan 6 16:52:00 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.WhZuey.mount: Succeeded.
- Jan 6 16:52:00 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.q9folM.mount: Succeeded.
- Jan 6 16:52:00 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.q9folM.mount: Succeeded.
- Jan 6 16:52:00 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.d9A7F6.mount: Succeeded.
- Jan 6 16:52:00 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.d9A7F6.mount: Succeeded.
- Jan 6 16:52:01 kubernetes-master1 kubelet[3514]: E0106 16:52:01.748832 3514 controller.go:187] failed to update lease, error: rpc error: code = Unknown desc = context deadline exceeded
- Jan 6 16:52:01 kubernetes-master1 kubelet[3514]: E0106 16:52:01.749283 3514 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-kubernetes-master1.1657af73f25de7c4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-kubernetes-master1", UID:"2d932d5f886b8cfbaeec38eaf545a088", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59504948bc5be, ext:266001781243, loc:(*time.Location)(0x70c9020)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-master1.1657af73f25de7c4": read tcp 192.168.255.201:60132->192.168.255.200:8443: use of closed network connection'(may retry after sleeping)
- Jan 6 16:52:06 kubernetes-master1 kubelet[3514]: E0106 16:52:06.215100 3514 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master1": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:52:06 kubernetes-master1 kubelet[3514]: E0106 16:52:06.217633 3514 controller.go:187] failed to update lease, error: Put "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master1?timeout=10s": read tcp 192.168.255.201:35318->192.168.255.200:8443: use of closed network connection
- Jan 6 16:52:10 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.LjuoCA.mount: Succeeded.
- Jan 6 16:52:10 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.LjuoCA.mount: Succeeded.
- Jan 6 16:52:10 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.rtEPwQ.mount: Succeeded.
- Jan 6 16:52:10 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.rtEPwQ.mount: Succeeded.
- Jan 6 16:52:10 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.uRnAmw.mount: Succeeded.
- Jan 6 16:52:10 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.uRnAmw.mount: Succeeded.
- Jan 6 16:52:13 kubernetes-master1 kubelet[3514]: E0106 16:52:13.282568 3514 controller.go:187] failed to update lease, error: rpc error: code = Unknown desc = context deadline exceeded
- Jan 6 16:52:13 kubernetes-master1 kubelet[3514]: E0106 16:52:13.283349 3514 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-kubernetes-master1.1657af73f25de7c4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-kubernetes-master1", UID:"2d932d5f886b8cfbaeec38eaf545a088", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59504948bc5be, ext:266001781243, loc:(*time.Location)(0x70c9020)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-master1.1657af73f25de7c4": read tcp 192.168.255.201:35606->192.168.255.200:8443: use of closed network connection'(may retry after sleeping)
- Jan 6 16:52:16 kubernetes-master1 kubelet[3514]: E0106 16:52:16.216247 3514 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master1": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:52:16 kubernetes-master1 kubelet[3514]: E0106 16:52:16.218942 3514 controller.go:187] failed to update lease, error: Put "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master1?timeout=10s": read tcp 192.168.255.201:36062->192.168.255.200:8443: use of closed network connection
- Jan 6 16:52:16 kubernetes-master1 kubelet[3514]: I0106 16:52:16.219122 3514 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
- Jan 6 16:52:20 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.TcsuiO.mount: Succeeded.
- Jan 6 16:52:20 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.TcsuiO.mount: Succeeded.
- Jan 6 16:52:20 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.5Hnp3Q.mount: Succeeded.
- Jan 6 16:52:20 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.5Hnp3Q.mount: Succeeded.
- Jan 6 16:52:20 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.uuuhyk.mount: Succeeded.
- Jan 6 16:52:20 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.uuuhyk.mount: Succeeded.
- Jan 6 16:52:24 kubernetes-master1 kubelet[3514]: E0106 16:52:24.206588 3514 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master1": etcdserver: request timed out
- Jan 6 16:52:24 kubernetes-master1 kubelet[3514]: E0106 16:52:24.209574 3514 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-kubernetes-master1.1657af73f25de7c4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-kubernetes-master1", UID:"2d932d5f886b8cfbaeec38eaf545a088", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59504948bc5be, ext:266001781243, loc:(*time.Location)(0x70c9020)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-master1.1657af73f25de7c4": read tcp 192.168.255.201:36298->192.168.255.200:8443: use of closed network connection'(may retry after sleeping)
- Jan 6 16:52:26 kubernetes-master1 kubelet[3514]: E0106 16:52:26.220079 3514 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:52:30 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.gpbndY.mount: Succeeded.
- Jan 6 16:52:30 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.gpbndY.mount: Succeeded.
- Jan 6 16:52:30 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.dVO327.mount: Succeeded.
- Jan 6 16:52:30 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.dVO327.mount: Succeeded.
- Jan 6 16:52:30 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.NXns0u.mount: Succeeded.
- Jan 6 16:52:30 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.NXns0u.mount: Succeeded.
- Jan 6 16:52:34 kubernetes-master1 kubelet[3514]: E0106 16:52:34.208050 3514 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master1": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master1?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
- Jan 6 16:52:34 kubernetes-master1 kubelet[3514]: E0106 16:52:34.208080 3514 kubelet_node_status.go:434] Unable to update node status: update node status exceeds retry count
- Jan 6 16:52:36 kubernetes-master1 kubelet[3514]: E0106 16:52:36.423334 3514 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:52:40 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.STSEvV.mount: Succeeded.
- Jan 6 16:52:40 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.STSEvV.mount: Succeeded.
- Jan 6 16:52:41 kubernetes-master1 systemd[1]: Started Session 3 of user wojcieh.
- Jan 6 16:52:45 kubernetes-master1 kubelet[3514]: E0106 16:52:45.193991 3514 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: etcdserver: request timed out
- Jan 6 16:52:45 kubernetes-master1 kubelet[3514]: W0106 16:52:45.197949 3514 status_manager.go:550] Failed to get status for pod "kube-controller-manager-kubernetes-master1_kube-system(c61f75a63a6b7c302751a6cc76c53045)": etcdserver: request timed out
- Jan 6 16:52:45 kubernetes-master1 kubelet[3514]: E0106 16:52:45.203130 3514 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-kubernetes-master1.1657af73f25de7c4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-kubernetes-master1", UID:"2d932d5f886b8cfbaeec38eaf545a088", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59504948bc5be, ext:266001781243, loc:(*time.Location)(0x70c9020)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
- Jan 6 16:52:50 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.AQ13zE.mount: Succeeded.
- Jan 6 16:52:50 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.AQ13zE.mount: Succeeded.
- Jan 6 16:52:50 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.wKjUWQ.mount: Succeeded.
- Jan 6 16:52:50 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.wKjUWQ.mount: Succeeded.
- Jan 6 16:52:52 kubernetes-master1 kubelet[3514]: E0106 16:52:52.200846 3514 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master1": etcdserver: request timed out
- Jan 6 16:52:52 kubernetes-master1 kubelet[3514]: E0106 16:52:52.861893 3514 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
- Jan 6 16:52:52 kubernetes-master1 kubelet[3514]: E0106 16:52:52.862071 3514 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master1": rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
- Jan 6 16:52:52 kubernetes-master1 kubelet[3514]: E0106 16:52:52.863419 3514 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-kubernetes-master1.1657af73f25de7c4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-kubernetes-master1", UID:"2d932d5f886b8cfbaeec38eaf545a088", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59505155aaea3, ext:268015341240, loc:(*time.Location)(0x70c9020)}}, Count:3, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-master1.1657af73f25de7c4": read tcp 192.168.255.201:37412->192.168.255.200:8443: use of closed network connection'(may retry after sleeping)
- Jan 6 16:52:52 kubernetes-master1 systemd[1]: docker-0b749d081fa1d485845a10d8a2c42785aa93f35e5cd9aabfa95cc0d67182ea4e.scope: Succeeded.
- Jan 6 16:52:52 kubernetes-master1 systemd[1]: docker-0b749d081fa1d485845a10d8a2c42785aa93f35e5cd9aabfa95cc0d67182ea4e.scope: Consumed 8.176s CPU time.
- Jan 6 16:52:53 kubernetes-master1 containerd[821]: time="2021-01-06T16:52:53.030533562+01:00" level=info msg="shim reaped" id=0b749d081fa1d485845a10d8a2c42785aa93f35e5cd9aabfa95cc0d67182ea4e
- Jan 6 16:52:53 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-3389617e12740c7c323a25120c054cd71b3a935685c1d5da779bd7cfbe294af4-merged.mount: Succeeded.
- Jan 6 16:52:53 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-3389617e12740c7c323a25120c054cd71b3a935685c1d5da779bd7cfbe294af4-merged.mount: Succeeded.
- Jan 6 16:52:53 kubernetes-master1 dockerd[828]: time="2021-01-06T16:52:53.048803493+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:52:53 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-963c429d06607edbfe8aa90b71504e2f17905b42a5cb83d780f9960f6a3962a1\x2dinit-merged.mount: Succeeded.
- Jan 6 16:52:53 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-963c429d06607edbfe8aa90b71504e2f17905b42a5cb83d780f9960f6a3962a1\x2dinit-merged.mount: Succeeded.
- Jan 6 16:52:53 kubernetes-master1 containerd[821]: time="2021-01-06T16:52:53.162166445+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4ec9440258c601dfbf30f7c8c965edfce31c7bdecff7f960664404a56c12f17e/shim.sock" debug=false pid=8422
- Jan 6 16:52:53 kubernetes-master1 systemd[1]: Started libcontainer container 4ec9440258c601dfbf30f7c8c965edfce31c7bdecff7f960664404a56c12f17e.
- Jan 6 16:52:53 kubernetes-master1 kubelet[3514]: W0106 16:52:53.296284 3514 docker_container.go:245] Deleted previously existing symlink file: "/var/log/pods/kube-system_etcd-kubernetes-master1_66b0dec8ac0c51772012999d6a94770d/etcd/1.log"
- Jan 6 16:53:00 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.ln5SVt.mount: Succeeded.
- Jan 6 16:53:00 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.ln5SVt.mount: Succeeded.
- Jan 6 16:53:00 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.FZ8gcK.mount: Succeeded.
- Jan 6 16:53:00 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.FZ8gcK.mount: Succeeded.
- Jan 6 16:53:00 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.32nEAp.mount: Succeeded.
- Jan 6 16:53:00 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.32nEAp.mount: Succeeded.
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: E0106 16:53:02.832176 3514 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-kubernetes-master1.1657af73f25de7c4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-kubernetes-master1", UID:"2d932d5f886b8cfbaeec38eaf545a088", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59505155aaea3, ext:268015341240, loc:(*time.Location)(0x70c9020)}}, Count:3, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-master1.1657af73f25de7c4": EOF'(may retry after sleeping)
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.862806 3514 reflector.go:436] object-"kube-system"/"kubernetes-services-endpoint": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"kubernetes-services-endpoint": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.863346 3514 reflector.go:436] object-"kube-system"/"calico-config": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"calico-config": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.864260 3514 reflector.go:436] object-"kube-system"/"calico-kube-controllers-token-xjg6v": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"calico-kube-controllers-token-xjg6v": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.864773 3514 reflector.go:436] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"coredns": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.865633 3514 reflector.go:436] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: watch of *v1.Node ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.866604 3514 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.866943 3514 reflector.go:436] object-"kube-system"/"kube-proxy-token-bszcf": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"kube-proxy-token-bszcf": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.868176 3514 reflector.go:436] object-"kube-system"/"coredns-token-mmrhd": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"coredns-token-mmrhd": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.868441 3514 reflector.go:436] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: watch of *v1.Pod ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.868651 3514 reflector.go:436] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"kube-proxy": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.868943 3514 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.869335 3514 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: W0106 16:53:02.869450 3514 reflector.go:436] object-"kube-system"/"calico-node-token-bsmvn": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"calico-node-token-bsmvn": Unexpected watch close - watch lasted less than a second and no items received
- Jan 6 16:53:02 kubernetes-master1 kubelet[3514]: E0106 16:53:02.869459 3514 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master1": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:53:04 kubernetes-master1 kubelet[3514]: E0106 16:53:04.462416 3514 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:53:08 kubernetes-master1 kubelet[3514]: I0106 16:53:08.069283 3514 request.go:655] Throttling request took 1.053545123s, request: GET:https://kubernetes-cluster.homelab01.local:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes-master1&resourceVersion=807
- Jan 6 16:53:10 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.lQvUVt.mount: Succeeded.
- Jan 6 16:53:10 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.lQvUVt.mount: Succeeded.
- Jan 6 16:53:10 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.fbSa5M.mount: Succeeded.
- Jan 6 16:53:10 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.fbSa5M.mount: Succeeded.
- Jan 6 16:53:10 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.p7AfCm.mount: Succeeded.
- Jan 6 16:53:10 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.p7AfCm.mount: Succeeded.
- Jan 6 16:53:12 kubernetes-master1 kubelet[3514]: E0106 16:53:12.833257 3514 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-kubernetes-master1.1657af73f25de7c4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-kubernetes-master1", UID:"2d932d5f886b8cfbaeec38eaf545a088", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59505155aaea3, ext:268015341240, loc:(*time.Location)(0x70c9020)}}, Count:3, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-master1.1657af73f25de7c4": EOF'(may retry after sleeping)
- Jan 6 16:53:12 kubernetes-master1 kubelet[3514]: E0106 16:53:12.883298 3514 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master1": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:53:17 kubernetes-master1 kubelet[3514]: E0106 16:53:17.680670 3514 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:53:18 kubernetes-master1 kubelet[3514]: I0106 16:53:18.269206 3514 request.go:655] Throttling request took 1.798616203s, request: GET:https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=603
- Jan 6 16:53:18 kubernetes-master1 kubelet[3514]: W0106 16:53:18.870044 3514 status_manager.go:550] Failed to get status for pod "kube-scheduler-kubernetes-master1_kube-system(9be8cb4627e7e5ad4c3f8acabd4b49b3)": an error on the server ("") has prevented the request from succeeding (get pods kube-scheduler-kubernetes-master1)
- Jan 6 16:53:19 kubernetes-master1 systemd[1]: docker-241235f74f34afe2077581315ae09713cd0d4fce7e37bbf92cc06d943425a1bc.scope: Succeeded.
- Jan 6 16:53:19 kubernetes-master1 systemd[1]: docker-241235f74f34afe2077581315ae09713cd0d4fce7e37bbf92cc06d943425a1bc.scope: Consumed 37.428s CPU time.
- Jan 6 16:53:19 kubernetes-master1 containerd[821]: time="2021-01-06T16:53:19.869404638+01:00" level=info msg="shim reaped" id=241235f74f34afe2077581315ae09713cd0d4fce7e37bbf92cc06d943425a1bc
- Jan 6 16:53:19 kubernetes-master1 dockerd[828]: time="2021-01-06T16:53:19.879492690+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:53:19 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-15d1c9e70a95cabd87bf748aaa87224437c4800d98ebd98249e6548b4b21c3ef-merged.mount: Succeeded.
- Jan 6 16:53:19 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-15d1c9e70a95cabd87bf748aaa87224437c4800d98ebd98249e6548b4b21c3ef-merged.mount: Succeeded.
- Jan 6 16:53:19 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-ef0958735f51ce29b7a63a18d68c58e8e170eab9e4f76206d930a511d27314c8\x2dinit-merged.mount: Succeeded.
- Jan 6 16:53:19 kubernetes-master1 containerd[821]: time="2021-01-06T16:53:19.987240807+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1d9cdc0dfeb2e5a150b3be23acfce921323d4251fa2f72cb93d379e98a70e38b/shim.sock" debug=false pid=8821
- Jan 6 16:53:20 kubernetes-master1 systemd[1]: Started libcontainer container 1d9cdc0dfeb2e5a150b3be23acfce921323d4251fa2f72cb93d379e98a70e38b.
- Jan 6 16:53:20 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.ivTdvX.mount: Succeeded.
- Jan 6 16:53:20 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.ivTdvX.mount: Succeeded.
- Jan 6 16:53:22 kubernetes-master1 kubelet[3514]: E0106 16:53:22.835305 3514 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-kubernetes-master1.1657af73f25de7c4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-kubernetes-master1", UID:"2d932d5f886b8cfbaeec38eaf545a088", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59505155aaea3, ext:268015341240, loc:(*time.Location)(0x70c9020)}}, Count:3, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-master1.1657af73f25de7c4": EOF'(may retry after sleeping)
- Jan 6 16:53:22 kubernetes-master1 kubelet[3514]: E0106 16:53:22.895672 3514 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master1": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:53:22 kubernetes-master1 kubelet[3514]: E0106 16:53:22.895766 3514 kubelet_node_status.go:434] Unable to update node status: update node status exceeds retry count
- Jan 6 16:53:28 kubernetes-master1 kubelet[3514]: I0106 16:53:28.269299 3514 request.go:655] Throttling request took 1.798809957s, request: GET:https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=603
- Jan 6 16:53:30 kubernetes-master1 kubelet[3514]: I0106 16:53:30.270887 3514 trace.go:205] Trace[1531138462]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (06-Jan-2021 16:53:03.898) (total time: 26372ms):
- Jan 6 16:53:30 kubernetes-master1 kubelet[3514]: Trace[1531138462]: [26.372559829s] [26.372559829s] END
- Jan 6 16:53:30 kubernetes-master1 kubelet[3514]: E0106 16:53:30.271633 3514 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: an error on the server ("") has prevented the request from succeeding (get runtimeclasses.node.k8s.io)
- Jan 6 16:53:30 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.aZUST5.mount: Succeeded.
- Jan 6 16:53:30 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.aZUST5.mount: Succeeded.
- Jan 6 16:53:30 kubernetes-master1 kubelet[3514]: I0106 16:53:30.470757 3514 trace.go:205] Trace[569577587]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46 (06-Jan-2021 16:53:04.012) (total time: 26458ms):
- Jan 6 16:53:30 kubernetes-master1 kubelet[3514]: Trace[569577587]: [26.458467764s] [26.458467764s] END
- Jan 6 16:53:30 kubernetes-master1 kubelet[3514]: E0106 16:53:30.470787 3514 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: an error on the server ("") has prevented the request from succeeding (get pods)
- Jan 6 16:53:30 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.7HRlha.mount: Succeeded.
- Jan 6 16:53:30 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.7HRlha.mount: Succeeded.
- Jan 6 16:53:30 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.1XUubw.mount: Succeeded.
- Jan 6 16:53:30 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.1XUubw.mount: Succeeded.
- Jan 6 16:53:30 kubernetes-master1 kubelet[3514]: I0106 16:53:30.670034 3514 trace.go:205] Trace[878405407]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/kubelet.go:438 (06-Jan-2021 16:53:04.088) (total time: 26581ms):
- Jan 6 16:53:30 kubernetes-master1 kubelet[3514]: Trace[878405407]: [26.581467696s] [26.581467696s] END
- Jan 6 16:53:30 kubernetes-master1 kubelet[3514]: E0106 16:53:30.670079 3514 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: an error on the server ("") has prevented the request from succeeding (get nodes)
- Jan 6 16:53:30 kubernetes-master1 kubelet[3514]: I0106 16:53:30.870167 3514 trace.go:205] Trace[2019991284]: "Reflector ListAndWatch" name:object-"kube-system"/"calico-kube-controllers-token-xjg6v" (06-Jan-2021 16:53:04.086) (total time: 26783ms):
- Jan 6 16:53:30 kubernetes-master1 kubelet[3514]: Trace[2019991284]: [26.783847858s] [26.783847858s] END
- Jan 6 16:53:30 kubernetes-master1 kubelet[3514]: E0106 16:53:30.870201 3514 reflector.go:138] object-"kube-system"/"calico-kube-controllers-token-xjg6v": Failed to watch *v1.Secret: failed to list *v1.Secret: an error on the server ("") has prevented the request from succeeding (get secrets)
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: I0106 16:53:31.071455 3514 trace.go:205] Trace[3569993]: "Reflector ListAndWatch" name:object-"kube-system"/"kube-proxy" (06-Jan-2021 16:53:04.110) (total time: 26961ms):
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: Trace[3569993]: [26.961411491s] [26.961411491s] END
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: E0106 16:53:31.071908 3514 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: an error on the server ("") has prevented the request from succeeding (get configmaps)
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: I0106 16:53:31.270577 3514 trace.go:205] Trace[1812650220]: "Reflector ListAndWatch" name:object-"kube-system"/"kube-proxy-token-bszcf" (06-Jan-2021 16:53:04.168) (total time: 27102ms):
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: Trace[1812650220]: [27.102284497s] [27.102284497s] END
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: E0106 16:53:31.270940 3514 reflector.go:138] object-"kube-system"/"kube-proxy-token-bszcf": Failed to watch *v1.Secret: failed to list *v1.Secret: an error on the server ("") has prevented the request from succeeding (get secrets)
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: I0106 16:53:31.470168 3514 trace.go:205] Trace[400448408]: "Reflector ListAndWatch" name:object-"kube-system"/"coredns" (06-Jan-2021 16:53:04.209) (total time: 27260ms):
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: Trace[400448408]: [27.260432227s] [27.260432227s] END
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: E0106 16:53:31.470197 3514 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: an error on the server ("") has prevented the request from succeeding (get configmaps)
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: I0106 16:53:31.670164 3514 trace.go:205] Trace[972480126]: "Reflector ListAndWatch" name:object-"kube-system"/"coredns-token-mmrhd" (06-Jan-2021 16:53:04.226) (total time: 27443ms):
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: Trace[972480126]: [27.443220677s] [27.443220677s] END
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: E0106 16:53:31.670446 3514 reflector.go:138] object-"kube-system"/"coredns-token-mmrhd": Failed to watch *v1.Secret: failed to list *v1.Secret: an error on the server ("") has prevented the request from succeeding (get secrets)
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: I0106 16:53:31.870074 3514 trace.go:205] Trace[397542370]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (06-Jan-2021 16:53:04.266) (total time: 27603ms):
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: Trace[397542370]: [27.603713254s] [27.603713254s] END
- Jan 6 16:53:31 kubernetes-master1 kubelet[3514]: E0106 16:53:31.870130 3514 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: an error on the server ("") has prevented the request from succeeding (get csidrivers.storage.k8s.io)
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: I0106 16:53:32.270370 3514 trace.go:205] Trace[1393665865]: "Reflector ListAndWatch" name:object-"kube-system"/"calico-config" (06-Jan-2021 16:53:04.290) (total time: 27979ms):
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: Trace[1393665865]: [27.979798704s] [27.979798704s] END
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: E0106 16:53:32.270849 3514 reflector.go:138] object-"kube-system"/"calico-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: an error on the server ("") has prevented the request from succeeding (get configmaps)
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: I0106 16:53:32.470331 3514 trace.go:205] Trace[255700380]: "Reflector ListAndWatch" name:object-"kube-system"/"kubernetes-services-endpoint" (06-Jan-2021 16:53:04.382) (total time: 28087ms):
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: Trace[255700380]: [28.087955365s] [28.087955365s] END
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: E0106 16:53:32.471003 3514 reflector.go:138] object-"kube-system"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: an error on the server ("") has prevented the request from succeeding (get configmaps)
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: I0106 16:53:32.671118 3514 trace.go:205] Trace[1967930782]: "Reflector ListAndWatch" name:object-"kube-system"/"calico-node-token-bsmvn" (06-Jan-2021 16:53:04.398) (total time: 28272ms):
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: Trace[1967930782]: [28.272933434s] [28.272933434s] END
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: E0106 16:53:32.671745 3514 reflector.go:138] object-"kube-system"/"calico-node-token-bsmvn": Failed to watch *v1.Secret: failed to list *v1.Secret: an error on the server ("") has prevented the request from succeeding (get secrets)
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: E0106 16:53:32.836389 3514 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-kubernetes-master1.1657af73f25de7c4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-kubernetes-master1", UID:"2d932d5f886b8cfbaeec38eaf545a088", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59505155aaea3, ext:268015341240, loc:(*time.Location)(0x70c9020)}}, Count:3, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-master1.1657af73f25de7c4": EOF'(may retry after sleeping)
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: I0106 16:53:32.870375 3514 trace.go:205] Trace[837851921]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (06-Jan-2021 16:53:04.415) (total time: 28455ms):
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: Trace[837851921]: [28.455148958s] [28.455148958s] END
- Jan 6 16:53:32 kubernetes-master1 kubelet[3514]: E0106 16:53:32.870785 3514 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
- Jan 6 16:53:34 kubernetes-master1 kubelet[3514]: E0106 16:53:34.096042 3514 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:53:38 kubernetes-master1 kubelet[3514]: I0106 16:53:38.469084 3514 request.go:655] Throttling request took 1.797118571s, request: GET:https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dcalico-node-token-bsmvn&resourceVersion=669
- Jan 6 16:53:39 kubernetes-master1 systemd[1]: session-3.scope: Succeeded.
- Jan 6 16:53:40 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.EOt6v0.mount: Succeeded.
- Jan 6 16:53:40 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.EOt6v0.mount: Succeeded.
- Jan 6 16:53:40 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.ZGhljj.mount: Succeeded.
- Jan 6 16:53:40 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.ZGhljj.mount: Succeeded.
- Jan 6 16:53:40 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.CPPUxY.mount: Succeeded.
- Jan 6 16:53:40 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.CPPUxY.mount: Succeeded.
- Jan 6 16:53:40 kubernetes-master1 systemd[1]: docker-1d9cdc0dfeb2e5a150b3be23acfce921323d4251fa2f72cb93d379e98a70e38b.scope: Succeeded.
- Jan 6 16:53:40 kubernetes-master1 containerd[821]: time="2021-01-06T16:53:40.886627356+01:00" level=info msg="shim reaped" id=1d9cdc0dfeb2e5a150b3be23acfce921323d4251fa2f72cb93d379e98a70e38b
- Jan 6 16:53:40 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-ef0958735f51ce29b7a63a18d68c58e8e170eab9e4f76206d930a511d27314c8-merged.mount: Succeeded.
- Jan 6 16:53:40 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-ef0958735f51ce29b7a63a18d68c58e8e170eab9e4f76206d930a511d27314c8-merged.mount: Succeeded.
- Jan 6 16:53:40 kubernetes-master1 dockerd[828]: time="2021-01-06T16:53:40.904377773+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
- Jan 6 16:53:41 kubernetes-master1 kubelet[3514]: I0106 16:53:41.939139 3514 scope.go:95] [topologymanager] RemoveContainer - Container ID: 241235f74f34afe2077581315ae09713cd0d4fce7e37bbf92cc06d943425a1bc
- Jan 6 16:53:41 kubernetes-master1 kubelet[3514]: I0106 16:53:41.940195 3514 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1d9cdc0dfeb2e5a150b3be23acfce921323d4251fa2f72cb93d379e98a70e38b
- Jan 6 16:53:41 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-151767aa35e7a193f2c220acf8588c000564bfb7eb419a2484403a4687c70ca9\x2dinit-merged.mount: Succeeded.
- Jan 6 16:53:41 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-151767aa35e7a193f2c220acf8588c000564bfb7eb419a2484403a4687c70ca9\x2dinit-merged.mount: Succeeded.
- Jan 6 16:53:41 kubernetes-master1 systemd[1110]: var-lib-docker-overlay2-151767aa35e7a193f2c220acf8588c000564bfb7eb419a2484403a4687c70ca9-merged.mount: Succeeded.
- Jan 6 16:53:41 kubernetes-master1 systemd[1]: var-lib-docker-overlay2-151767aa35e7a193f2c220acf8588c000564bfb7eb419a2484403a4687c70ca9-merged.mount: Succeeded.
- Jan 6 16:53:42 kubernetes-master1 containerd[821]: time="2021-01-06T16:53:42.024643457+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a717a6b92eda1a7abc04e206f0b87c42b1ebc933d93e01949358e00a21d61221/shim.sock" debug=false pid=9250
- Jan 6 16:53:42 kubernetes-master1 systemd[1]: Started libcontainer container a717a6b92eda1a7abc04e206f0b87c42b1ebc933d93e01949358e00a21d61221.
- Jan 6 16:53:42 kubernetes-master1 kubelet[3514]: E0106 16:53:42.837874 3514 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-kubernetes-master1.1657af73f25de7c4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-kubernetes-master1", UID:"2d932d5f886b8cfbaeec38eaf545a088", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59505155aaea3, ext:268015341240, loc:(*time.Location)(0x70c9020)}}, Count:3, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-master1.1657af73f25de7c4": EOF'(may retry after sleeping)
- Jan 6 16:53:42 kubernetes-master1 kubelet[3514]: E0106 16:53:42.911064 3514 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master1": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master1?resourceVersion=0&timeout=10s": context deadline exceeded
- Jan 6 16:53:44 kubernetes-master1 systemd[1]: session-1.scope: Succeeded.
- Jan 6 16:53:47 kubernetes-master1 kubelet[3514]: W0106 16:53:47.270132 3514 status_manager.go:550] Failed to get status for pod "kube-controller-manager-kubernetes-master1_kube-system(c61f75a63a6b7c302751a6cc76c53045)": an error on the server ("") has prevented the request from succeeding (get pods kube-controller-manager-kubernetes-master1)
- Jan 6 16:53:48 kubernetes-master1 kubelet[3514]: I0106 16:53:48.469263 3514 request.go:655] Throttling request took 1.799202472s, request: GET:https://kubernetes-cluster.homelab01.local:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1
- Jan 6 16:53:50 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.QaaWNp.mount: Succeeded.
- Jan 6 16:53:50 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.QaaWNp.mount: Succeeded.
- Jan 6 16:53:50 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.VK88vC.mount: Succeeded.
- Jan 6 16:53:50 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-9b749ed2363d5f7a2b1a186e6e341679625b0439b58f31558c71a1e2b4c71d83-runc.VK88vC.mount: Succeeded.
- Jan 6 16:53:50 kubernetes-master1 systemd[1110]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.Jz5tCB.mount: Succeeded.
- Jan 6 16:53:50 kubernetes-master1 systemd[1]: run-docker-runtime\x2drunc-moby-c73c4d6970a3b6aae6739bf67fadbb830562afd6e0cea3ffc3cb29985ffdd0e7-runc.Jz5tCB.mount: Succeeded.
- Jan 6 16:53:51 kubernetes-master1 kubelet[3514]: E0106 16:53:51.109451 3514 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://kubernetes-cluster.homelab01.local:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:53:52 kubernetes-master1 kubelet[3514]: E0106 16:53:52.840004 3514 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-kubernetes-master1.1657af73f25de7c4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-kubernetes-master1", UID:"2d932d5f886b8cfbaeec38eaf545a088", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff5950413b447c4, ext:263987658716, loc:(*time.Location)(0x70c9020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff59505155aaea3, ext:268015341240, loc:(*time.Location)(0x70c9020)}}, Count:3, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://kubernetes-cluster.homelab01.local:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-master1.1657af73f25de7c4": EOF'(may retry after sleeping)
- Jan 6 16:53:52 kubernetes-master1 kubelet[3514]: E0106 16:53:52.927042 3514 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master1": Get "https://kubernetes-cluster.homelab01.local:8443/api/v1/nodes/kubernetes-master1?timeout=10s": context deadline exceeded
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: Stopping User Manager for UID 1000...
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Stopped target Main User Target.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Stopped target Basic System.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Stopped target Paths.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Stopped target Sockets.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Stopped target Timers.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: dbus.socket: Succeeded.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Closed D-Bus User Message Bus Socket.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: dirmngr.socket: Succeeded.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Closed GnuPG network certificate management daemon.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: gpg-agent-browser.socket: Succeeded.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Closed GnuPG cryptographic agent and passphrase cache (access for web browsers).
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: gpg-agent-extra.socket: Succeeded.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: gpg-agent-ssh.socket: Succeeded.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Closed GnuPG cryptographic agent (ssh-agent emulation).
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: gpg-agent.socket: Succeeded.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Closed GnuPG cryptographic agent and passphrase cache.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: pk-debconf-helper.socket: Succeeded.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Closed debconf communication socket.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: snapd.session-agent.socket: Succeeded.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Closed REST API socket for snapd user session agent.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Reached target Shutdown.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: systemd-exit.service: Succeeded.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Finished Exit the Session.
- Jan 6 16:53:54 kubernetes-master1 systemd[1110]: Reached target Exit the Session.
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: [email protected]: Succeeded.
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: Stopped User Manager for UID 1000.
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: Stopping User Runtime Directory /run/user/1000...
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: run-user-1000.mount: Succeeded.
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: [email protected]: Succeeded.
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: Stopped User Runtime Directory /run/user/1000.
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: Removed slice User Slice of UID 1000.
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: Created slice User Slice of UID 1000.
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: Starting User Runtime Directory /run/user/1000...
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: Finished User Runtime Directory /run/user/1000.
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: Starting User Manager for UID 1000...
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Reached target Paths.
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Reached target Timers.
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Starting D-Bus User Message Bus Socket.
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Listening on GnuPG network certificate management daemon.
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Listening on GnuPG cryptographic agent and passphrase cache.
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Listening on debconf communication socket.
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Listening on REST API socket for snapd user session agent.
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Listening on D-Bus User Message Bus Socket.
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Reached target Sockets.
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Reached target Basic System.
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: Started User Manager for UID 1000.
- Jan 6 16:53:54 kubernetes-master1 systemd[1]: Started Session 4 of user wojcieh.
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Reached target Main User Target.
- Jan 6 16:53:54 kubernetes-master1 systemd[9464]: Startup finished in 130ms.