- I0215 09:57:31.225776 1 serving.go:331] Generated self-signed cert in-memory
- W0215 09:57:31.928247 1 authentication.go:368] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
- W0215 09:57:31.930929 1 authentication.go:265] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
- W0215 09:57:31.931786 1 authentication.go:289] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
- W0215 09:57:31.933485 1 authorization.go:187] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
- W0215 09:57:31.937560 1 authorization.go:156] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
- I0215 09:57:31.943955 1 controllermanager.go:175] Version: v1.19.7
- I0215 09:57:32.110731 1 tlsconfig.go:200] loaded serving cert ["Generated self signed cert"]: "localhost@1613383051" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer="localhost-ca@1613383050" (2021-02-15 08:57:29 +0000 UTC to 2022-02-15 08:57:29 +0000 UTC (now=2021-02-15 09:57:32.097807419 +0000 UTC))
- I0215 09:57:32.116337 1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1613383051" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1613383051" (2021-02-15 08:57:31 +0000 UTC to 2022-02-15 08:57:31 +0000 UTC (now=2021-02-15 09:57:32.111158652 +0000 UTC))
- I0215 09:57:32.136716 1 tlsconfig.go:240] Starting DynamicServingCertificateController
- I0215 09:57:32.161153 1 secure_serving.go:197] Serving securely on [::]:10257
- I0215 09:57:32.161996 1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
- I0215 09:57:32.203162 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
- I0215 09:57:51.932909 1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
- I0215 09:57:51.938112 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Endpoints" apiVersion="v1" type="Normal" reason="LeaderElection" message="ip-172-20-49-91_13ad088f-867d-4546-8681-391c94e6f514 became leader"
- I0215 09:57:51.938236 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="ip-172-20-49-91_13ad088f-867d-4546-8681-391c94e6f514 became leader"
- I0215 09:57:51.959512 1 controllermanager.go:231] using legacy client builder
- W0215 09:57:52.346319 1 plugins.go:105] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release
- I0215 09:57:52.364184 1 aws.go:1235] Building AWS cloudprovider
- I0215 09:57:52.364953 1 aws.go:1195] Zone not specified in configuration file; querying AWS metadata service
- I0215 09:57:52.691534 1 tags.go:79] AWS cloud filtering on ClusterID: pelocal.k8s.local
- I0215 09:57:52.691732 1 aws.go:786] Setting up informers for Cloud
- I0215 09:57:52.710574 1 shared_informer.go:240] Waiting for caches to sync for tokens
- I0215 09:57:52.712369 1 reflector.go:207] Starting reflector *v1.Node (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:57:52.714398 1 reflector.go:207] Starting reflector *v1.ServiceAccount (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:57:52.715063 1 reflector.go:207] Starting reflector *v1.Secret (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:57:52.719229 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.771763 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.775710 1 controllermanager.go:534] Starting "attachdetach"
- I0215 09:57:52.784071 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.789436 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.797769 1 plugins.go:631] Loaded volume plugin "kubernetes.io/aws-ebs"
- I0215 09:57:52.797889 1 plugins.go:631] Loaded volume plugin "kubernetes.io/gce-pd"
- I0215 09:57:52.797954 1 plugins.go:631] Loaded volume plugin "kubernetes.io/cinder"
- I0215 09:57:52.798051 1 plugins.go:631] Loaded volume plugin "kubernetes.io/azure-disk"
- I0215 09:57:52.798116 1 plugins.go:631] Loaded volume plugin "kubernetes.io/vsphere-volume"
- I0215 09:57:52.799936 1 plugins.go:631] Loaded volume plugin "kubernetes.io/portworx-volume"
- I0215 09:57:52.801137 1 plugins.go:631] Loaded volume plugin "kubernetes.io/scaleio"
- I0215 09:57:52.801312 1 plugins.go:631] Loaded volume plugin "kubernetes.io/storageos"
- I0215 09:57:52.802263 1 plugins.go:631] Loaded volume plugin "kubernetes.io/fc"
- I0215 09:57:52.803126 1 plugins.go:631] Loaded volume plugin "kubernetes.io/iscsi"
- I0215 09:57:52.803974 1 plugins.go:631] Loaded volume plugin "kubernetes.io/rbd"
- I0215 09:57:52.804418 1 plugins.go:631] Loaded volume plugin "kubernetes.io/csi"
- I0215 09:57:52.806091 1 controllermanager.go:549] Started "attachdetach"
- I0215 09:57:52.806179 1 controllermanager.go:534] Starting "persistentvolume-expander"
- I0215 09:57:52.807037 1 attach_detach_controller.go:322] Starting attach detach controller
- I0215 09:57:52.807159 1 shared_informer.go:240] Waiting for caches to sync for attach detach
- W0215 09:57:52.808547 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="ip-172-20-60-16.ap-south-1.compute.internal" does not exist
- W0215 09:57:52.808686 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="ip-172-20-49-91.ap-south-1.compute.internal" does not exist
- W0215 09:57:52.808780 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="ip-172-20-38-224.ap-south-1.compute.internal" does not exist
- I0215 09:57:52.809354 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.811037 1 shared_informer.go:247] Caches are synced for tokens
- I0215 09:57:52.816138 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.816756 1 plugins.go:631] Loaded volume plugin "kubernetes.io/cinder"
- I0215 09:57:52.816881 1 plugins.go:631] Loaded volume plugin "kubernetes.io/azure-disk"
- I0215 09:57:52.816944 1 plugins.go:631] Loaded volume plugin "kubernetes.io/azure-file"
- I0215 09:57:52.817047 1 plugins.go:631] Loaded volume plugin "kubernetes.io/vsphere-volume"
- I0215 09:57:52.817111 1 plugins.go:631] Loaded volume plugin "kubernetes.io/aws-ebs"
- I0215 09:57:52.817195 1 plugins.go:631] Loaded volume plugin "kubernetes.io/gce-pd"
- I0215 09:57:52.817747 1 plugins.go:631] Loaded volume plugin "kubernetes.io/portworx-volume"
- I0215 09:57:52.818824 1 plugins.go:631] Loaded volume plugin "kubernetes.io/glusterfs"
- I0215 09:57:52.818936 1 plugins.go:631] Loaded volume plugin "kubernetes.io/rbd"
- I0215 09:57:52.819001 1 plugins.go:631] Loaded volume plugin "kubernetes.io/scaleio"
- I0215 09:57:52.819138 1 plugins.go:631] Loaded volume plugin "kubernetes.io/storageos"
- I0215 09:57:52.819247 1 plugins.go:631] Loaded volume plugin "kubernetes.io/fc"
- I0215 09:57:52.819406 1 controllermanager.go:549] Started "persistentvolume-expander"
- I0215 09:57:52.819487 1 controllermanager.go:534] Starting "endpointslice"
- I0215 09:57:52.819685 1 expand_controller.go:303] Starting expand controller
- I0215 09:57:52.819805 1 shared_informer.go:240] Waiting for caches to sync for expand
- I0215 09:57:52.821908 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.827277 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.832248 1 controllermanager.go:549] Started "endpointslice"
- I0215 09:57:52.832341 1 controllermanager.go:534] Starting "disruption"
- I0215 09:57:52.832531 1 endpointslice_controller.go:237] Starting endpoint slice controller
- I0215 09:57:52.832651 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
- I0215 09:57:52.834751 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.839753 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.841729 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.846135 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.850172 1 controllermanager.go:549] Started "disruption"
- I0215 09:57:52.850303 1 controllermanager.go:534] Starting "route"
- I0215 09:57:52.851784 1 disruption.go:331] Starting disruption controller
- I0215 09:57:52.851875 1 shared_informer.go:240] Waiting for caches to sync for disruption
- I0215 09:57:52.853501 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.868929 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.869633 1 controllermanager.go:549] Started "route"
- I0215 09:57:52.869815 1 controllermanager.go:534] Starting "namespace"
- I0215 09:57:52.871217 1 route_controller.go:100] Starting route controller
- I0215 09:57:52.871341 1 shared_informer.go:240] Waiting for caches to sync for route
- I0215 09:57:52.871447 1 shared_informer.go:247] Caches are synced for route
- I0215 09:57:52.879168 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.889727 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.928352 1 controllermanager.go:549] Started "namespace"
- I0215 09:57:52.928501 1 controllermanager.go:534] Starting "job"
- I0215 09:57:52.928700 1 namespace_controller.go:200] Starting namespace controller
- I0215 09:57:52.928804 1 shared_informer.go:240] Waiting for caches to sync for namespace
- I0215 09:57:52.931333 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.937028 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.939591 1 controllermanager.go:549] Started "job"
- I0215 09:57:52.939684 1 controllermanager.go:534] Starting "nodeipam"
- I0215 09:57:52.940516 1 job_controller.go:148] Starting job controller
- I0215 09:57:52.940632 1 shared_informer.go:240] Waiting for caches to sync for job
- I0215 09:57:52.942419 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.946854 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:57:52.949322 1 node_ipam_controller.go:91] Sending events to api server.
- I0215 09:57:52.966980 1 route_controller.go:294] set node ip-172-20-38-224.ap-south-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
- I0215 09:57:52.967112 1 route_controller.go:294] set node ip-172-20-60-16.ap-south-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
- I0215 09:57:52.967176 1 route_controller.go:294] set node ip-172-20-49-91.ap-south-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
- I0215 09:58:03.034623 1 range_allocator.go:82] Sending events to api server.
- I0215 09:58:03.039331 1 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses.
- I0215 09:58:03.040104 1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
- I0215 09:58:03.043799 1 route_controller.go:294] set node ip-172-20-60-16.ap-south-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
- I0215 09:58:03.043899 1 route_controller.go:294] set node ip-172-20-49-91.ap-south-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
- I0215 09:58:03.043965 1 route_controller.go:294] set node ip-172-20-38-224.ap-south-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
- I0215 09:58:03.053110 1 controllermanager.go:549] Started "nodeipam"
- I0215 09:58:03.053971 1 controllermanager.go:534] Starting "csrapproving"
- I0215 09:58:03.056642 1 node_ipam_controller.go:159] Starting ipam controller
- I0215 09:58:03.058486 1 shared_informer.go:240] Waiting for caches to sync for node
- I0215 09:58:03.058599 1 shared_informer.go:247] Caches are synced for node
- I0215 09:58:03.058673 1 range_allocator.go:172] Starting range CIDR allocator
- I0215 09:58:03.058734 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
- I0215 09:58:03.058834 1 shared_informer.go:247] Caches are synced for cidrallocator
- I0215 09:58:03.065396 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.109529 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.118971 1 controllermanager.go:549] Started "csrapproving"
- I0215 09:58:03.119065 1 controllermanager.go:534] Starting "csrcleaner"
- I0215 09:58:03.120743 1 certificate_controller.go:118] Starting certificate controller "csrapproving"
- I0215 09:58:03.121267 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
- I0215 09:58:03.121542 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.127575 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.127860 1 controllermanager.go:549] Started "csrcleaner"
- I0215 09:58:03.128617 1 controllermanager.go:534] Starting "persistentvolume-binder"
- I0215 09:58:03.132236 1 cleaner.go:83] Starting CSR cleaner controller
- I0215 09:58:03.133809 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.141449 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.146006 1 plugins.go:631] Loaded volume plugin "kubernetes.io/host-path"
- I0215 09:58:03.146862 1 plugins.go:631] Loaded volume plugin "kubernetes.io/nfs"
- I0215 09:58:03.146958 1 plugins.go:631] Loaded volume plugin "kubernetes.io/glusterfs"
- I0215 09:58:03.147741 1 plugins.go:631] Loaded volume plugin "kubernetes.io/rbd"
- I0215 09:58:03.147832 1 plugins.go:631] Loaded volume plugin "kubernetes.io/quobyte"
- I0215 09:58:03.147893 1 plugins.go:631] Loaded volume plugin "kubernetes.io/cinder"
- I0215 09:58:03.147973 1 plugins.go:631] Loaded volume plugin "kubernetes.io/azure-disk"
- I0215 09:58:03.148050 1 plugins.go:631] Loaded volume plugin "kubernetes.io/azure-file"
- I0215 09:58:03.148110 1 plugins.go:631] Loaded volume plugin "kubernetes.io/vsphere-volume"
- I0215 09:58:03.148188 1 plugins.go:631] Loaded volume plugin "kubernetes.io/aws-ebs"
- I0215 09:58:03.148265 1 plugins.go:631] Loaded volume plugin "kubernetes.io/gce-pd"
- I0215 09:58:03.149068 1 plugins.go:631] Loaded volume plugin "kubernetes.io/flocker"
- I0215 09:58:03.149687 1 plugins.go:631] Loaded volume plugin "kubernetes.io/portworx-volume"
- I0215 09:58:03.150496 1 plugins.go:631] Loaded volume plugin "kubernetes.io/scaleio"
- I0215 09:58:03.150619 1 plugins.go:631] Loaded volume plugin "kubernetes.io/local-volume"
- I0215 09:58:03.151413 1 plugins.go:631] Loaded volume plugin "kubernetes.io/storageos"
- I0215 09:58:03.152895 1 plugins.go:631] Loaded volume plugin "kubernetes.io/csi"
- I0215 09:58:03.153796 1 controllermanager.go:549] Started "persistentvolume-binder"
- I0215 09:58:03.153885 1 controllermanager.go:534] Starting "deployment"
- I0215 09:58:03.154107 1 pv_controller_base.go:303] Starting persistent volume controller
- I0215 09:58:03.154208 1 shared_informer.go:240] Waiting for caches to sync for persistent volume
- I0215 09:58:03.156368 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.169631 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.176677 1 controllermanager.go:549] Started "deployment"
- I0215 09:58:03.176789 1 controllermanager.go:534] Starting "replicaset"
- I0215 09:58:03.177183 1 deployment_controller.go:153] Starting deployment controller
- I0215 09:58:03.177293 1 shared_informer.go:240] Waiting for caches to sync for deployment
- I0215 09:58:03.180610 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.193495 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.195937 1 controllermanager.go:549] Started "replicaset"
- I0215 09:58:03.196043 1 controllermanager.go:534] Starting "cronjob"
- I0215 09:58:03.196435 1 replica_set.go:182] Starting replicaset controller
- I0215 09:58:03.196547 1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
- I0215 09:58:03.210391 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.216419 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.217696 1 controllermanager.go:549] Started "cronjob"
- W0215 09:58:03.217823 1 controllermanager.go:528] "tokencleaner" is disabled
- I0215 09:58:03.217889 1 controllermanager.go:534] Starting "ephemeral-volume"
- W0215 09:58:03.217976 1 controllermanager.go:541] Skipping "ephemeral-volume"
- I0215 09:58:03.218184 1 controllermanager.go:534] Starting "endpoint"
- I0215 09:58:03.218434 1 cronjob_controller.go:96] Starting CronJob Manager
- I0215 09:58:03.233081 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.238979 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.241925 1 controllermanager.go:549] Started "endpoint"
- I0215 09:58:03.242029 1 controllermanager.go:534] Starting "garbagecollector"
- I0215 09:58:03.242473 1 endpoints_controller.go:184] Starting endpoint controller
- I0215 09:58:03.242568 1 shared_informer.go:240] Waiting for caches to sync for endpoint
- I0215 09:58:03.248420 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.258574 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.263811 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.271149 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.280838 1 controllermanager.go:549] Started "garbagecollector"
- I0215 09:58:03.280855 1 controllermanager.go:534] Starting "daemonset"
- I0215 09:58:03.281661 1 garbagecollector.go:128] Starting garbage collector controller
- I0215 09:58:03.281675 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
- I0215 09:58:03.284408 1 graph_builder.go:282] GraphBuilder running
- I0215 09:58:03.284590 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.301009 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.304555 1 controllermanager.go:549] Started "daemonset"
- I0215 09:58:03.304678 1 controllermanager.go:534] Starting "nodelifecycle"
- I0215 09:58:03.304894 1 daemon_controller.go:285] Starting daemon sets controller
- I0215 09:58:03.305952 1 shared_informer.go:240] Waiting for caches to sync for daemon sets
- I0215 09:58:03.313172 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.317666 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.318200 1 node_lifecycle_controller.go:380] Sending events to api server.
- I0215 09:58:03.318532 1 taint_manager.go:163] Sending events to api server.
- I0215 09:58:03.318981 1 node_lifecycle_controller.go:508] Controller will reconcile labels.
- I0215 09:58:03.319344 1 controllermanager.go:549] Started "nodelifecycle"
- I0215 09:58:03.319506 1 controllermanager.go:534] Starting "clusterrole-aggregation"
- I0215 09:58:03.325973 1 node_lifecycle_controller.go:542] Starting node controller
- I0215 09:58:03.326550 1 shared_informer.go:240] Waiting for caches to sync for taint
- I0215 09:58:03.329216 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.357905 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.358393 1 controllermanager.go:549] Started "clusterrole-aggregation"
- I0215 09:58:03.358758 1 controllermanager.go:534] Starting "root-ca-cert-publisher"
- W0215 09:58:03.358884 1 controllermanager.go:541] Skipping "root-ca-cert-publisher"
- I0215 09:58:03.358945 1 controllermanager.go:534] Starting "replicationcontroller"
- I0215 09:58:03.359612 1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
- I0215 09:58:03.359704 1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
- I0215 09:58:03.408521 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.507761 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.509145 1 controllermanager.go:549] Started "replicationcontroller"
- I0215 09:58:03.509264 1 controllermanager.go:534] Starting "podgc"
- I0215 09:58:03.509430 1 replica_set.go:182] Starting replicationcontroller controller
- I0215 09:58:03.509519 1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
- I0215 09:58:03.558695 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.636062 1 garbagecollector.go:199] syncing garbage collector with updated resources from discovery (attempt 1): added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v1, Resource=horizontalpodautoscalers batch/v1, Resource=jobs batch/v1beta1, Resource=cronjobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1beta1, Resource=endpointslices events.k8s.io/v1, Resource=events extensions/v1beta1, Resource=ingresses networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1beta1, Resource=runtimeclasses policy/v1beta1, Resource=poddisruptionbudgets policy/v1beta1, Resource=podsecuritypolicies rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles scheduling.k8s.io/v1, Resource=priorityclasses storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments], removed: []
- I0215 09:58:03.657761 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.659162 1 controllermanager.go:549] Started "podgc"
- W0215 09:58:03.659311 1 controllermanager.go:528] "bootstrapsigner" is disabled
- I0215 09:58:03.659384 1 controllermanager.go:534] Starting "csrsigning"
- I0215 09:58:03.659515 1 gc_controller.go:89] Starting GC controller
- I0215 09:58:03.659589 1 shared_informer.go:240] Waiting for caches to sync for GC
- I0215 09:58:03.708541 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.807884 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.824830 1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "csr-controller::/srv/kubernetes/ca.crt::/srv/kubernetes/ca.key"
- I0215 09:58:03.825782 1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "csr-controller::/srv/kubernetes/ca.crt::/srv/kubernetes/ca.key"
- I0215 09:58:03.826259 1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "csr-controller::/srv/kubernetes/ca.crt::/srv/kubernetes/ca.key"
- I0215 09:58:03.826752 1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "csr-controller::/srv/kubernetes/ca.crt::/srv/kubernetes/ca.key"
- I0215 09:58:03.828321 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
- I0215 09:58:03.828427 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
- I0215 09:58:03.828544 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
- I0215 09:58:03.828634 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
- I0215 09:58:03.828711 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
- I0215 09:58:03.828823 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
- I0215 09:58:03.828975 1 controllermanager.go:549] Started "csrsigning"
- I0215 09:58:03.829068 1 controllermanager.go:534] Starting "pv-protection"
- I0215 09:58:03.829199 1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
- I0215 09:58:03.829290 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
- I0215 09:58:03.829368 1 dynamic_serving_content.go:130] Starting csr-controller::/srv/kubernetes/ca.crt::/srv/kubernetes/ca.key
- I0215 09:58:03.829538 1 dynamic_serving_content.go:130] Starting csr-controller::/srv/kubernetes/ca.crt::/srv/kubernetes/ca.key
- I0215 09:58:03.829668 1 dynamic_serving_content.go:130] Starting csr-controller::/srv/kubernetes/ca.crt::/srv/kubernetes/ca.key
- I0215 09:58:03.829793 1 dynamic_serving_content.go:130] Starting csr-controller::/srv/kubernetes/ca.crt::/srv/kubernetes/ca.key
- I0215 09:58:03.858648 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.957512 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:03.958889 1 controllermanager.go:549] Started "pv-protection"
- I0215 09:58:03.959024 1 controllermanager.go:534] Starting "endpointslicemirroring"
- I0215 09:58:03.959619 1 pv_protection_controller.go:83] Starting PV protection controller
- I0215 09:58:03.959741 1 shared_informer.go:240] Waiting for caches to sync for PV protection
- I0215 09:58:04.008706 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:04.110933 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:04.114432 1 controllermanager.go:549] Started "endpointslicemirroring"
- I0215 09:58:04.114571 1 controllermanager.go:534] Starting "resourcequota"
- I0215 09:58:04.115744 1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
- I0215 09:58:04.116325 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
- I0215 09:58:04.159335 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:04.257671 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:04.613685 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
- I0215 09:58:04.614033 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
- I0215 09:58:04.614189 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
- I0215 09:58:04.614353 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
- I0215 09:58:04.614480 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
- I0215 09:58:04.615303 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
- I0215 09:58:04.615480 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
- I0215 09:58:04.616032 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
- I0215 09:58:04.616198 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
- I0215 09:58:04.616338 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
- I0215 09:58:04.616455 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
- I0215 09:58:04.616545 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
- I0215 09:58:04.616684 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
- I0215 09:58:04.616801 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
- I0215 09:58:04.617083 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
- I0215 09:58:04.617206 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
- I0215 09:58:04.617321 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
- I0215 09:58:04.617423 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
- I0215 09:58:04.617587 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
- I0215 09:58:04.617719 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
- I0215 09:58:04.617873 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
- I0215 09:58:04.617984 1 controllermanager.go:549] Started "resourcequota"
- I0215 09:58:04.618298 1 controllermanager.go:534] Starting "serviceaccount"
- I0215 09:58:04.619837 1 resource_quota_controller.go:272] Starting resource quota controller
- I0215 09:58:04.619961 1 shared_informer.go:240] Waiting for caches to sync for resource quota
- I0215 09:58:04.620079 1 resource_quota_monitor.go:303] QuotaMonitor running
- I0215 09:58:04.622310 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:04.627520 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:04.628522 1 controllermanager.go:549] Started "serviceaccount"
- I0215 09:58:04.628621 1 controllermanager.go:534] Starting "ttl-after-finished"
- W0215 09:58:04.628963 1 controllermanager.go:541] Skipping "ttl-after-finished"
- I0215 09:58:04.629071 1 controllermanager.go:534] Starting "service"
- I0215 09:58:04.629338 1 serviceaccounts_controller.go:117] Starting service account controller
- I0215 09:58:04.629457 1 shared_informer.go:240] Waiting for caches to sync for service account
- I0215 09:58:04.631330 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:04.635751 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:04.636775 1 controllermanager.go:549] Started "service"
- I0215 09:58:04.636874 1 controllermanager.go:534] Starting "cloud-node-lifecycle"
- I0215 09:58:04.637058 1 controller.go:239] Starting service controller
- I0215 09:58:04.637191 1 shared_informer.go:240] Waiting for caches to sync for service
- I0215 09:58:04.638785 1 controller.go:708] Detected change in list of current cluster nodes. New node set: map[ip-172-20-38-224.ap-south-1.compute.internal:{} ip-172-20-60-16.ap-south-1.compute.internal:{}]
- I0215 09:58:04.640791 1 controller.go:716] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
- I0215 09:58:04.641062 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:04.707247 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:04.708291 1 node_lifecycle_controller.go:77] Sending events to api server
- I0215 09:58:04.708455 1 controllermanager.go:549] Started "cloud-node-lifecycle"
- I0215 09:58:04.708573 1 controllermanager.go:534] Starting "pvc-protection"
- I0215 09:58:04.758393 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:04.858312 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:04.858822 1 controllermanager.go:549] Started "pvc-protection"
- I0215 09:58:04.858972 1 controllermanager.go:534] Starting "horizontalpodautoscaling"
- I0215 09:58:04.859238 1 pvc_protection_controller.go:110] Starting PVC protection controller
- I0215 09:58:04.859359 1 shared_informer.go:240] Waiting for caches to sync for PVC protection
- I0215 09:58:04.908602 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:05.007889 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:05.058334 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:05.159553 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:05.207858 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:05.307473 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:05.357953 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:05.457564 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:05.460719 1 controllermanager.go:549] Started "horizontalpodautoscaling"
- I0215 09:58:05.460826 1 controllermanager.go:534] Starting "statefulset"
- I0215 09:58:05.460973 1 horizontal.go:169] Starting HPA controller
- I0215 09:58:05.461131 1 shared_informer.go:240] Waiting for caches to sync for HPA
- I0215 09:58:05.508622 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:05.609321 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:05.611335 1 controllermanager.go:549] Started "statefulset"
- I0215 09:58:05.611378 1 controllermanager.go:534] Starting "ttl"
- I0215 09:58:05.611966 1 stateful_set.go:146] Starting stateful set controller
- I0215 09:58:05.612583 1 shared_informer.go:240] Waiting for caches to sync for stateful set
- I0215 09:58:05.658852 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:05.757960 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
- I0215 09:58:05.758503 1 controllermanager.go:549] Started "ttl"
- I0215 09:58:05.759615 1 request.go:645] Throttling request took 1.048753557s, request: GET:https://127.0.0.1/apis/networking.k8s.io/v1beta1?timeout=32s
- I0215 09:58:05.760913 1 ttl_controller.go:118] Starting TTL controller
- I0215 09:58:05.761072 1 shared_informer.go:240] Waiting for caches to sync for TTL
- I0215 09:58:05.761175 1 shared_informer.go:247] Caches are synced for TTL
- I0215 09:58:05.763823 1 reflector.go:207] Starting reflector *v1.ReplicaSet (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.766050 1 reflector.go:207] Starting reflector *v1.Job (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.766997 1 reflector.go:207] Starting reflector *v1.HorizontalPodAutoscaler (15s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.767425 1 reflector.go:207] Starting reflector *v1.PersistentVolume (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.767980 1 reflector.go:207] Starting reflector *v1.VolumeAttachment (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.768581 1 reflector.go:207] Starting reflector *v1.Service (30s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.769341 1 reflector.go:207] Starting reflector *v1beta1.EndpointSlice (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.770217 1 reflector.go:207] Starting reflector *v1.ClusterRole (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.770857 1 reflector.go:207] Starting reflector *v1.NetworkPolicy (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.771984 1 reflector.go:207] Starting reflector *v1.PodTemplate (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.772996 1 reflector.go:207] Starting reflector *v1.CSINode (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.773513 1 reflector.go:207] Starting reflector *v1.ReplicationController (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.773939 1 reflector.go:207] Starting reflector *v1.CertificateSigningRequest (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.774408 1 reflector.go:207] Starting reflector *v1.DaemonSet (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.775001 1 reflector.go:207] Starting reflector *v1beta1.CronJob (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.775506 1 reflector.go:207] Starting reflector *v1beta1.Ingress (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.775917 1 reflector.go:207] Starting reflector *v1.Role (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.776386 1 reflector.go:207] Starting reflector *v1.Pod (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.777343 1 reflector.go:207] Starting reflector *v1beta1.PodDisruptionBudget (30s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.777849 1 reflector.go:207] Starting reflector *v1.Namespace (5m0s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.778468 1 reflector.go:207] Starting reflector *v1.ResourceQuota (5m0s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.778958 1 reflector.go:207] Starting reflector *v1.Event (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.779361 1 reflector.go:207] Starting reflector *v1.PersistentVolumeClaim (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.779775 1 reflector.go:207] Starting reflector *v1.Deployment (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.780269 1 reflector.go:207] Starting reflector *v1.StatefulSet (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.780672 1 reflector.go:207] Starting reflector *v1.ConfigMap (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.781645 1 reflector.go:207] Starting reflector *v1.StorageClass (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.782124 1 reflector.go:207] Starting reflector *v1.LimitRange (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.782505 1 reflector.go:207] Starting reflector *v1.Ingress (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.783356 1 reflector.go:207] Starting reflector *v1.RoleBinding (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.783878 1 reflector.go:207] Starting reflector *v1.Lease (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.784786 1 reflector.go:207] Starting reflector *v1.CSIDriver (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.785129 1 reflector.go:207] Starting reflector *v1.Endpoints (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.787469 1 reflector.go:207] Starting reflector *v1.ControllerRevision (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:05.930448 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
- I0215 09:58:05.937028 1 shared_informer.go:247] Caches are synced for expand
- I0215 09:58:05.937967 1 shared_informer.go:247] Caches are synced for service
- I0215 09:58:05.938234 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
- I0215 09:58:05.938472 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
- I0215 09:58:05.938611 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
- I0215 09:58:05.938819 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
- I0215 09:58:05.960222 1 shared_informer.go:247] Caches are synced for PV protection
- I0215 09:58:05.963358 1 shared_informer.go:247] Caches are synced for namespace
- I0215 09:58:05.967804 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
- I0215 09:58:05.973817 1 shared_informer.go:247] Caches are synced for service account
- I0215 09:58:05.999915 1 shared_informer.go:247] Caches are synced for ReplicaSet
- I0215 09:58:06.009929 1 shared_informer.go:247] Caches are synced for daemon sets
- I0215 09:58:06.010751 1 shared_informer.go:247] Caches are synced for attach detach
- I0215 09:58:06.014065 1 shared_informer.go:247] Caches are synced for stateful set
- I0215 09:58:06.014262 1 shared_informer.go:247] Caches are synced for ReplicationController
- I0215 09:58:06.031770 1 shared_informer.go:247] Caches are synced for taint
- I0215 09:58:06.032383 1 node_lifecycle_controller.go:773] Controller observed a new Node: "ip-172-20-38-224.ap-south-1.compute.internal"
- I0215 09:58:06.034984 1 controller_utils.go:172] Recording Registered Node ip-172-20-38-224.ap-south-1.compute.internal in Controller event message for node ip-172-20-38-224.ap-south-1.compute.internal
- I0215 09:58:06.035027 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: ap-south-1::ap-south-1a
- I0215 09:58:06.035049 1 node_lifecycle_controller.go:773] Controller observed a new Node: "ip-172-20-60-16.ap-south-1.compute.internal"
- I0215 09:58:06.035057 1 controller_utils.go:172] Recording Registered Node ip-172-20-60-16.ap-south-1.compute.internal in Controller event message for node ip-172-20-60-16.ap-south-1.compute.internal
- I0215 09:58:06.035068 1 node_lifecycle_controller.go:773] Controller observed a new Node: "ip-172-20-49-91.ap-south-1.compute.internal"
- I0215 09:58:06.035074 1 controller_utils.go:172] Recording Registered Node ip-172-20-49-91.ap-south-1.compute.internal in Controller event message for node ip-172-20-49-91.ap-south-1.compute.internal
- W0215 09:58:06.036692 1 node_lifecycle_controller.go:1044] Missing timestamp for Node ip-172-20-38-224.ap-south-1.compute.internal. Assuming now as a timestamp.
- W0215 09:58:06.038882 1 node_lifecycle_controller.go:1044] Missing timestamp for Node ip-172-20-60-16.ap-south-1.compute.internal. Assuming now as a timestamp.
- W0215 09:58:06.038943 1 node_lifecycle_controller.go:1044] Missing timestamp for Node ip-172-20-49-91.ap-south-1.compute.internal. Assuming now as a timestamp.
- I0215 09:58:06.040552 1 node_lifecycle_controller.go:1245] Controller detected that zone ap-south-1::ap-south-1a is now in state Normal.
- I0215 09:58:06.040616 1 shared_informer.go:247] Caches are synced for endpoint_slice
- I0215 09:58:06.041230 1 shared_informer.go:247] Caches are synced for job
- I0215 09:58:06.044093 1 shared_informer.go:247] Caches are synced for endpoint
- I0215 09:58:06.046860 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
- I0215 09:58:06.047768 1 endpointslicemirroring_controller.go:218] Starting 5 worker threads
- I0215 09:58:06.048156 1 shared_informer.go:247] Caches are synced for resource quota
- I0215 09:58:06.049713 1 taint_manager.go:187] Starting NoExecuteTaintManager
- I0215 09:58:06.055213 1 shared_informer.go:247] Caches are synced for persistent volume
- I0215 09:58:06.070293 1 shared_informer.go:247] Caches are synced for HPA
- I0215 09:58:06.078316 1 event.go:291] "Event occurred" object="ip-172-20-38-224.ap-south-1.compute.internal" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ip-172-20-38-224.ap-south-1.compute.internal event: Registered Node ip-172-20-38-224.ap-south-1.compute.internal in Controller"
- I0215 09:58:06.078630 1 event.go:291] "Event occurred" object="ip-172-20-60-16.ap-south-1.compute.internal" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ip-172-20-60-16.ap-south-1.compute.internal event: Registered Node ip-172-20-60-16.ap-south-1.compute.internal in Controller"
- I0215 09:58:06.079677 1 event.go:291] "Event occurred" object="ip-172-20-49-91.ap-south-1.compute.internal" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ip-172-20-49-91.ap-south-1.compute.internal event: Registered Node ip-172-20-49-91.ap-south-1.compute.internal in Controller"
- I0215 09:58:06.080800 1 shared_informer.go:247] Caches are synced for deployment
- I0215 09:58:06.092645 1 shared_informer.go:247] Caches are synced for disruption
- I0215 09:58:06.092831 1 disruption.go:339] Sending events to api server.
- I0215 09:58:06.095547 1 shared_informer.go:247] Caches are synced for PVC protection
- I0215 09:58:06.096517 1 shared_informer.go:247] Caches are synced for GC
- I0215 09:58:06.613903 1 resource_quota_controller.go:434] syncing resource quota controller with updated resources from discovery: added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v1, Resource=horizontalpodautoscalers batch/v1, Resource=jobs batch/v1beta1, Resource=cronjobs coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1beta1, Resource=endpointslices events.k8s.io/v1, Resource=events extensions/v1beta1, Resource=ingresses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies policy/v1beta1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles], removed: []
- I0215 09:58:06.694093 1 shared_informer.go:240] Waiting for caches to sync for resource quota
- I0215 09:58:06.694287 1 shared_informer.go:247] Caches are synced for resource quota
- I0215 09:58:06.694398 1 resource_quota_controller.go:453] synced quota controller
- I0215 09:58:06.681395 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/kube-dns" err="Operation cannot be fulfilled on replicasets.apps \"kube-dns-696cb84c7\": the object has been modified; please apply your changes to the latest version and try again"
- I0215 09:58:07.304810 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
- I0215 09:58:07.439790 1 reflector.go:207] Starting reflector *v1.ClusterRoleBinding (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:07.476697 1 reflector.go:207] Starting reflector *v1.PartialObjectMetadata (14h23m22.889391581s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
- I0215 09:58:07.483574 1 reflector.go:207] Starting reflector *v1.PartialObjectMetadata (14h23m22.889391581s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
- I0215 09:58:07.629602 1 event.go:291] "Event occurred" object="kube-system/kube-dns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kube-dns-696cb84c7 to 2"
- I0215 09:58:07.629920 1 reflector.go:207] Starting reflector *v1.IngressClass (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:07.643106 1 reflector.go:207] Starting reflector *v1.ValidatingWebhookConfiguration (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:07.666501 1 reflector.go:207] Starting reflector *v1.MutatingWebhookConfiguration (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:07.714392 1 reflector.go:207] Starting reflector *v1beta1.PodSecurityPolicy (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:07.714753 1 reflector.go:207] Starting reflector *v1beta1.RuntimeClass (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:07.411986 1 reflector.go:207] Starting reflector *v1.PriorityClass (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:07.755588 1 replica_set.go:559] "Too few replicas" replicaSet="kube-system/kube-dns-696cb84c7" need=2 creating=1
- I0215 09:58:10.289899 1 shared_informer.go:247] Caches are synced for garbage collector
- I0215 09:58:10.267773 1 shared_informer.go:247] Caches are synced for garbage collector
- I0215 09:58:10.495902 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
- I0215 09:58:10.505700 1 garbagecollector.go:240] synced garbage collector
- I0215 09:58:10.692412 1 garbagecollector.go:404] "Processing object" object="kube-system/ip-172-20-38-224.ap-south-1.compute.internal" objectUID=32084c76-f576-41c0-971a-1000a6e7f51f kind="Node"
- I0215 09:58:10.708802 1 garbagecollector.go:404] "Processing object" object="kube-system/kube-dns-696cb84c7" objectUID=766726a5-0f71-4d45-aaf5-14746224d8c4 kind="ReplicaSet"
- I0215 09:58:10.736411 1 garbagecollector.go:404] "Processing object" object="kube-system/dns-controller-8d8889c4b" objectUID=14945960-e59d-4077-9ad9-ca89425db557 kind="ReplicaSet"
- I0215 09:58:10.768311 1 garbagecollector.go:404] "Processing object" object="kube-system/kube-dns" objectUID=6c545af5-7678-473d-b553-50007ad1a0cf kind="Service"
- I0215 09:58:10.810997 1 garbagecollector.go:404] "Processing object" object="kube-system/ip-172-20-49-91.ap-south-1.compute.internal" objectUID=b7353e07-ca1d-43a4-ac35-8ded92e5db96 kind="Node"
- I0215 09:58:10.811504 1 garbagecollector.go:404] "Processing object" object="kube-system/kube-dns-autoscaler-55f8f75459" objectUID=3e624785-a2ae-4b2a-9b97-3e8502ff9d28 kind="ReplicaSet"
- I0215 09:58:10.811941 1 garbagecollector.go:404] "Processing object" object="kube-system/ip-172-20-60-16.ap-south-1.compute.internal" objectUID=1ddf1af0-78a5-4a12-b952-409401ac1f51 kind="Node"
- I0215 09:58:10.812412 1 garbagecollector.go:404] "Processing object" object="kube-system/kops-controller" objectUID=bb7f412b-50f1-43ee-ba5a-fbed96e22cda kind="DaemonSet"
- I0215 09:58:12.833949 1 garbagecollector.go:449] object [v1/Service, namespace: kube-system, name: kube-dns, uid: 6c545af5-7678-473d-b553-50007ad1a0cf]'s doesn't have an owner, continue on next item
- I0215 09:58:12.861776 1 garbagecollector.go:449] object [v1/Node, namespace: kube-system, name: ip-172-20-49-91.ap-south-1.compute.internal, uid: b7353e07-ca1d-43a4-ac35-8ded92e5db96]'s doesn't have an owner, continue on next item
- I0215 09:58:12.765259 1 garbagecollector.go:449] object [v1/Node, namespace: kube-system, name: ip-172-20-60-16.ap-south-1.compute.internal, uid: 1ddf1af0-78a5-4a12-b952-409401ac1f51]'s doesn't have an owner, continue on next item
- I0215 09:58:12.790257 1 garbagecollector.go:449] object [v1/Node, namespace: kube-system, name: ip-172-20-38-224.ap-south-1.compute.internal, uid: 32084c76-f576-41c0-971a-1000a6e7f51f]'s doesn't have an owner, continue on next item
- I0215 09:58:13.146152 1 garbagecollector.go:449] object [apps/v1/DaemonSet, namespace: kube-system, name: kops-controller, uid: bb7f412b-50f1-43ee-ba5a-fbed96e22cda]'s doesn't have an owner, continue on next item
- I0215 09:58:14.464938 1 route_controller.go:294] set node ip-172-20-38-224.ap-south-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
- I0215 09:58:14.472286 1 route_controller.go:294] set node ip-172-20-60-16.ap-south-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
- I0215 09:58:14.472351 1 route_controller.go:294] set node ip-172-20-49-91.ap-south-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
- I0215 09:58:16.407415 1 garbagecollector.go:461] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"dns-controller-8d8889c4b", UID:"14945960-e59d-4077-9ad9-ca89425db557", Controller:(*bool)(0xc00051372e), BlockOwnerDeletion:(*bool)(0xc00051372f)}, Namespace:"kube-system"} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"Deployment", Name:"dns-controller", UID:"a78074ed-00ba-495f-a2db-18d95e4e5c26", Controller:(*bool)(0xc0020e982e), BlockOwnerDeletion:(*bool)(0xc0020e982f)}}, will not garbage collect
- I0215 09:58:16.431270 1 garbagecollector.go:461] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"kube-dns-696cb84c7", UID:"766726a5-0f71-4d45-aaf5-14746224d8c4", Controller:(*bool)(0xc000513377), BlockOwnerDeletion:(*bool)(0xc000513378)}, Namespace:"kube-system"} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"Deployment", Name:"kube-dns", UID:"99c0cc2c-0381-423b-89d0-9b464e577df9", Controller:(*bool)(0xc0020e93e7), BlockOwnerDeletion:(*bool)(0xc0020e93e8)}}, will not garbage collect
- I0215 09:58:16.477492 1 garbagecollector.go:461] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"kube-dns-autoscaler-55f8f75459", UID:"3e624785-a2ae-4b2a-9b97-3e8502ff9d28", Controller:(*bool)(0xc00051314e), BlockOwnerDeletion:(*bool)(0xc00051314f)}, Namespace:"kube-system"} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"Deployment", Name:"kube-dns-autoscaler", UID:"f75baeb6-c014-4ec8-9dcf-da8df1eb9f74", Controller:(*bool)(0xc0020e946e), BlockOwnerDeletion:(*bool)(0xc0020e946f)}}, will not garbage collect
- I0215 09:58:19.190215 1 event.go:291] "Event occurred" object="kube-system/kube-dns-696cb84c7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-dns-696cb84c7-rm7sh"
- E0215 09:58:22.540509 1 leaderelection.go:361] Failed to update lock: resource name may not be empty
- I0215 09:58:22.643176 1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition
- I0215 09:58:22.741914 1 reflector.go:213] Stopping reflector *v1.Lease (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.742106 1 reflector.go:213] Stopping reflector *v1.ConfigMap (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.775414 1 reflector.go:213] Stopping reflector *v1.ClusterRoleBinding (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.780854 1 reflector.go:213] Stopping reflector *v1.IngressClass (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.785450 1 reflector.go:213] Stopping reflector *v1.PriorityClass (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.786350 1 reflector.go:213] Stopping reflector *v1.PartialObjectMetadata (14h23m22.889391581s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
- I0215 09:58:22.786623 1 reflector.go:213] Stopping reflector *v1.ValidatingWebhookConfiguration (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.797738 1 reflector.go:213] Stopping reflector *v1.PartialObjectMetadata (14h23m22.889391581s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
- I0215 09:58:22.797904 1 reflector.go:213] Stopping reflector *v1.MutatingWebhookConfiguration (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.806540 1 reflector.go:213] Stopping reflector *v1beta1.RuntimeClass (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.806670 1 reflector.go:213] Stopping reflector *v1beta1.PodSecurityPolicy (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.806806 1 reflector.go:213] Stopping reflector *v1.Deployment (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.807022 1 reflector.go:213] Stopping reflector *v1.Event (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.807147 1 reflector.go:213] Stopping reflector *v1.Endpoints (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.813875 1 reflector.go:213] Stopping reflector *v1.ReplicaSet (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.831567 1 reflector.go:213] Stopping reflector *v1beta1.PodDisruptionBudget (30s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.831706 1 reflector.go:213] Stopping reflector *v1beta1.EndpointSlice (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.839279 1 gc_controller.go:100] Shutting down GC controller
- I0215 09:58:22.839397 1 pvc_protection_controller.go:122] Shutting down PVC protection controller
- I0215 09:58:22.914567 1 horizontal.go:180] Shutting down HPA controller
- I0215 09:58:22.914741 1 pv_controller_base.go:319] Shutting down persistent volume controller
- I0215 09:58:22.926809 1 pv_controller_base.go:513] claim worker queue shutting down
- I0215 09:58:22.926861 1 resource_quota_controller.go:291] Shutting down resource quota controller
- I0215 09:58:22.929842 1 endpointslicemirroring_controller.go:224] Shutting down EndpointSliceMirroring controller
- I0215 09:58:22.942235 1 attach_detach_controller.go:361] Shutting down attach detach controller
- I0215 09:58:22.949922 1 endpoints_controller.go:201] Shutting down endpoint controller
- I0215 09:58:22.954313 1 job_controller.go:160] Shutting down job controller
- I0215 09:58:22.954345 1 endpointslice_controller.go:253] Shutting down endpoint slice controller
- I0215 09:58:22.954370 1 node_lifecycle_controller.go:589] Shutting down node controller
- I0215 09:58:22.960992 1 replica_set.go:194] Shutting down replicationcontroller controller
- I0215 09:58:22.961025 1 stateful_set.go:158] Shutting down statefulset controller
- I0215 09:58:22.968951 1 daemon_controller.go:299] Shutting down daemon sets controller
- I0215 09:58:22.968980 1 replica_set.go:194] Shutting down replicaset controller
- I0215 09:58:22.969077 1 reflector.go:213] Stopping reflector *v1.ControllerRevision (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.973892 1 serviceaccounts_controller.go:129] Shutting down service account controller
- I0215 09:58:22.980464 1 clusterroleaggregation_controller.go:161] Shutting down ClusterRoleAggregator
- I0215 09:58:22.985150 1 namespace_controller.go:212] Shutting down namespace controller
- I0215 09:58:22.985294 1 range_allocator.go:184] Shutting down range CIDR allocator
- I0215 09:58:22.985380 1 node_ipam_controller.go:171] Shutting down ipam controller
- I0215 09:58:22.991297 1 route_controller.go:123] Shutting down route controller
- I0215 09:58:22.996555 1 tokens_controller.go:182] Shutting down
- I0215 09:58:22.999136 1 reflector.go:213] Stopping reflector *v1.Secret (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.999274 1 reflector.go:213] Stopping reflector *v1.Node (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- I0215 09:58:22.999699 1 reflector.go:213] Stopping reflector *v1.ServiceAccount (12h30m46.32135761s) from k8s.io/client-go/informers/factory.go:134
- F0215 09:58:22.999877 1 controllermanager.go:293] leaderelection lost
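The transcript ends with the controller-manager failing to update its leader-election lock ("resource name may not be empty"), timing out on renewal of the `kube-system/kube-controller-manager` lease, tearing down every controller and reflector, and finally exiting via a klog Fatal ("leaderelection lost") so it can be restarted with a clean slate. When triaging a dump like this, a quick first pass is to pull out only the W/E/F lines from the klog format (`<severity><MMDD> <HH:MM:SS.micros> <pid> <file>:<line>] <message>`). The sketch below is an illustrative parser written for this paste's `- `-prefixed lines, not part of any Kubernetes tooling:

```python
import re
from collections import Counter

# klog line layout: <severity><MMDD> <HH:MM:SS.micros> <pid> <file>:<line>] <message>
# An optional leading "- " is tolerated to match this paste's formatting.
KLOG_RE = re.compile(
    r"^-?\s*([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+([\w./-]+):(\d+)\] (.*)$"
)

def parse_klog(line):
    """Return (severity, time, source, message) for one klog line, or None."""
    m = KLOG_RE.match(line)
    if not m:
        return None
    sev, _mmdd, ts, src, lineno, msg = m.groups()
    return sev, ts, f"{src}:{lineno}", msg

def summarize(lines):
    """Histogram lines by severity and collect warning/error/fatal messages."""
    counts = Counter()
    problems = []
    for line in lines:
        parsed = parse_klog(line)
        if parsed is None:
            continue  # skip non-klog lines (page chrome, blank lines)
        sev, ts, src, msg = parsed
        counts[sev] += 1
        if sev in "WEF":
            problems.append((sev, ts, src, msg))
    return counts, problems

# The last three lines of the failure sequence from the transcript above:
sample = [
    "- E0215 09:58:22.540509 1 leaderelection.go:361] Failed to update lock: resource name may not be empty",
    "- I0215 09:58:22.643176 1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition",
    "- F0215 09:58:22.999877 1 controllermanager.go:293] leaderelection lost",
]
counts, problems = summarize(sample)
print(counts)
for sev, ts, src, msg in problems:
    print(sev, ts, src, msg)
```

On a live cluster the companion step would be inspecting the election object itself (for Lease-based locking, `kubectl -n kube-system get lease kube-controller-manager -o yaml`); the empty resource name in the E-line suggests the lock configuration, not lease contention, is the first thing to check.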