- time="2023-12-29T10:57:01.134269154Z" level=info msg="Starting k3s v1.21.7+k3s1 (ac705709)"
- The connection to the server localhost:8080 was refused - did you specify the right host or port?
- time="2023-12-29T10:57:01.142693236Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
- time="2023-12-29T10:57:01.142711000Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
- time="2023-12-29T10:57:01.147254129Z" level=info msg="Database tables and indexes are up to date"
- time="2023-12-29T10:57:01.148206100Z" level=info msg="Kine listening on unix://kine.sock"
- time="2023-12-29T10:57:01.156250548Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.156642516Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.157011580Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.157362371Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.157734661Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.158064061Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.158447202Z" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.159058613Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.159693597Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.160434091Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.160774061Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.161341528Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.307576582Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.307820552Z" level=info msg="Active TLS secret (ver=) (count 11): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.23.0.2:172.23.0.2 listener.cattle.io/cn-k3d-mycluster-server-0:k3d-mycluster-server-0 listener.cattle.io/cn-k3d-mycluster-serverlb:k3d-mycluster-serverlb listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=6396332AF6441DF7C9E095EA813302CA36FAD448]"
- time="2023-12-29T10:57:01.312055832Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
- Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
- I1229 10:57:01.312872 23 server.go:656] external host was not specified, using 172.23.0.2
- I1229 10:57:01.313042 23 server.go:195] Version: v1.21.7+k3s1
- time="2023-12-29T10:57:01.314224122Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
- time="2023-12-29T10:57:01.314293142Z" level=info msg="Waiting for API server to become available"
- time="2023-12-29T10:57:01.314529246Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
- time="2023-12-29T10:57:01.314859027Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
- time="2023-12-29T10:57:01.315307871Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
- time="2023-12-29T10:57:01.315332247Z" level=info msg="To join node to cluster: k3s agent -s https://172.23.0.2:6443 -t ${NODE_TOKEN}"
- time="2023-12-29T10:57:01.315896809Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
- time="2023-12-29T10:57:01.315959297Z" level=info msg="Run: k3s kubectl"
- time="2023-12-29T10:57:01.335687491Z" level=info msg="certificate CN=k3d-mycluster-server-0 signed by CN=k3s-server-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.336982879Z" level=info msg="certificate CN=system:node:k3d-mycluster-server-0,O=system:nodes signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
- time="2023-12-29T10:57:01.389311721Z" level=info msg="Module overlay was already loaded"
- time="2023-12-29T10:57:01.389330486Z" level=info msg="Module nf_conntrack was already loaded"
- time="2023-12-29T10:57:01.389338932Z" level=info msg="Module br_netfilter was already loaded"
- time="2023-12-29T10:57:01.389346817Z" level=info msg="Module iptable_nat was already loaded"
- time="2023-12-29T10:57:01.394469347Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
- time="2023-12-29T10:57:01.394547103Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
- time="2023-12-29T10:57:01.395222584Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
- time="2023-12-29T10:57:01.395312433Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
- I1229 10:57:01.600720 23 shared_informer.go:240] Waiting for caches to sync for node_authorizer
- I1229 10:57:01.601391 23 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
- I1229 10:57:01.601398 23 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
- I1229 10:57:01.602031 23 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
- I1229 10:57:01.602037 23 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
- I1229 10:57:01.614531 23 instance.go:283] Using reconciler: lease
- I1229 10:57:01.638076 23 rest.go:130] the default service ipfamily for this cluster is: IPv4
- W1229 10:57:01.867549 23 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
- W1229 10:57:01.877712 23 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
- W1229 10:57:01.880465 23 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
- W1229 10:57:01.884571 23 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
- W1229 10:57:01.886277 23 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
- W1229 10:57:01.890795 23 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
- W1229 10:57:01.890801 23 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
- I1229 10:57:01.897037 23 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
- I1229 10:57:01.897044 23 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
- time="2023-12-29T10:57:02.396402264Z" level=info msg="Containerd is now running"
- time="2023-12-29T10:57:02.401129680Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
- time="2023-12-29T10:57:02.402459333Z" level=info msg="Handling backend connection request [k3d-mycluster-server-0]"
- time="2023-12-29T10:57:02.402814902Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-mycluster-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
- time="2023-12-29T10:57:02.403213512Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-mycluster-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
- Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
- Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
- Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
- Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
- W1229 10:57:02.403380 23 server.go:224] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
- I1229 10:57:02.403703 23 server.go:436] "Kubelet version" kubeletVersion="v1.21.7+k3s1"
- W1229 10:57:02.403840 23 proxier.go:653] Failed to read file /lib/modules/6.1.63/modules.builtin with error open /lib/modules/6.1.63/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
- W1229 10:57:02.404269 23 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
- W1229 10:57:02.404462 23 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
- W1229 10:57:02.404652 23 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
- W1229 10:57:02.404938 23 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
- W1229 10:57:02.405126 23 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
- E1229 10:57:02.408385 23 node.go:161] Failed to retrieve node info: nodes "k3d-mycluster-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
- W1229 10:57:02.415325 23 manager.go:159] Cannot detect current cgroup on cgroup v2
- I1229 10:57:02.415362 23 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
- I1229 10:57:02.691575 23 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
- I1229 10:57:02.691592 23 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
- I1229 10:57:02.691774 23 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
- I1229 10:57:02.691974 23 secure_serving.go:202] Serving securely on 127.0.0.1:6444
- I1229 10:57:02.692009 23 tlsconfig.go:240] Starting DynamicServingCertificateController
- I1229 10:57:02.692038 23 autoregister_controller.go:141] Starting autoregister controller
- I1229 10:57:02.692041 23 available_controller.go:475] Starting AvailableConditionController
- I1229 10:57:02.692043 23 cache.go:32] Waiting for caches to sync for autoregister controller
- I1229 10:57:02.692045 23 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
- I1229 10:57:02.692079 23 apf_controller.go:307] Starting API Priority and Fairness config controller
- I1229 10:57:02.692098 23 crdregistration_controller.go:111] Starting crd-autoregister controller
- I1229 10:57:02.692104 23 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
- I1229 10:57:02.692112 23 apiservice_controller.go:97] Starting APIServiceRegistrationController
- I1229 10:57:02.692123 23 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
- I1229 10:57:02.692129 23 controller.go:83] Starting OpenAPI AggregationController
- I1229 10:57:02.692125 23 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
- I1229 10:57:02.692282 23 controller.go:86] Starting OpenAPI controller
- I1229 10:57:02.692300 23 establishing_controller.go:76] Starting EstablishingController
- I1229 10:57:02.692316 23 customresource_discovery_controller.go:209] Starting DiscoveryController
- I1229 10:57:02.692426 23 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
- I1229 10:57:02.692458 23 crd_finalizer.go:266] Starting CRDFinalizer
- I1229 10:57:02.692486 23 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
- I1229 10:57:02.692968 23 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
- I1229 10:57:02.692975 23 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
- I1229 10:57:02.692969 23 naming_controller.go:291] Starting NamingConditionController
- I1229 10:57:02.693025 23 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
- I1229 10:57:02.693041 23 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
- I1229 10:57:02.699118 23 controller.go:611] quota admission added evaluator for: namespaces
- I1229 10:57:02.700754 23 shared_informer.go:247] Caches are synced for node_authorizer
- E1229 10:57:02.703281 23 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocated ip:10.43.0.1 with error:cannot allocate resources of type serviceipallocations at this time
- E1229 10:57:02.703770 23 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.23.0.2, ResourceVersion: 0, AdditionalErrorMsg:
- I1229 10:57:02.792919 23 cache.go:39] Caches are synced for APIServiceRegistrationController controller
- I1229 10:57:02.792960 23 cache.go:39] Caches are synced for AvailableConditionController controller
- I1229 10:57:02.792970 23 shared_informer.go:247] Caches are synced for crd-autoregister
- I1229 10:57:02.792975 23 cache.go:39] Caches are synced for autoregister controller
- I1229 10:57:02.792980 23 apf_controller.go:312] Running API Priority and Fairness config worker
- I1229 10:57:02.793155 23 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
- E1229 10:57:03.557826 23 node.go:161] Failed to retrieve node info: nodes "k3d-mycluster-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
- I1229 10:57:03.691936 23 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
- I1229 10:57:03.691946 23 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
- I1229 10:57:03.695429 23 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
- I1229 10:57:03.697463 23 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
- I1229 10:57:03.697473 23 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
- I1229 10:57:03.876629 23 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
- I1229 10:57:03.892100 23 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
- W1229 10:57:03.918812 23 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.23.0.2]
- I1229 10:57:03.919237 23 controller.go:611] quota admission added evaluator for: endpoints
- I1229 10:57:03.921045 23 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
- Error from server (NotFound): nodes "k3d-mycluster-server-0" not found
- time="2023-12-29T10:57:04.698229325Z" level=info msg="Kube API server is now running"
- time="2023-12-29T10:57:04.698260334Z" level=info msg="k3s is up and running"
- time="2023-12-29T10:57:04.698245105Z" level=info msg="Waiting for cloud-controller-manager privileges to become available"
- Flag --address has been deprecated, see --bind-address instead.
- I1229 10:57:04.699754 23 controllermanager.go:175] Version: v1.21.7+k3s1
- I1229 10:57:04.699960 23 deprecated_insecure_serving.go:56] Serving insecurely on 127.0.0.1:10252
- time="2023-12-29T10:57:04.703590734Z" level=info msg="Creating CRD addons.k3s.cattle.io"
- time="2023-12-29T10:57:04.706664348Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
- time="2023-12-29T10:57:04.708601724Z" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io"
- time="2023-12-29T10:57:04.716990299Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
- time="2023-12-29T10:57:05.218529835Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
- time="2023-12-29T10:57:05.218544814Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
- I1229 10:57:05.256516 23 shared_informer.go:240] Waiting for caches to sync for tokens
- I1229 10:57:05.261084 23 controller.go:611] quota admission added evaluator for: serviceaccounts
- I1229 10:57:05.262280 23 controllermanager.go:574] Started "podgc"
- I1229 10:57:05.262335 23 gc_controller.go:89] Starting GC controller
- I1229 10:57:05.262340 23 shared_informer.go:240] Waiting for caches to sync for GC
- I1229 10:57:05.268210 23 controllermanager.go:574] Started "horizontalpodautoscaling"
- I1229 10:57:05.268266 23 horizontal.go:169] Starting HPA controller
- I1229 10:57:05.268270 23 shared_informer.go:240] Waiting for caches to sync for HPA
- W1229 10:57:05.271424 23 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
- I1229 10:57:05.271657 23 controllermanager.go:574] Started "attachdetach"
- I1229 10:57:05.271721 23 attach_detach_controller.go:328] Starting attach detach controller
- I1229 10:57:05.271726 23 shared_informer.go:240] Waiting for caches to sync for attach detach
- I1229 10:57:05.274756 23 controllermanager.go:574] Started "replicationcontroller"
- I1229 10:57:05.274779 23 replica_set.go:182] Starting replicationcontroller controller
- I1229 10:57:05.274788 23 shared_informer.go:240] Waiting for caches to sync for ReplicationController
- I1229 10:57:05.277862 23 controllermanager.go:574] Started "csrcleaner"
- W1229 10:57:05.277868 23 controllermanager.go:553] "bootstrapsigner" is disabled
- W1229 10:57:05.277870 23 controllermanager.go:553] "service" is disabled
- W1229 10:57:05.277873 23 controllermanager.go:553] "route" is disabled
- I1229 10:57:05.277921 23 cleaner.go:82] Starting CSR cleaner controller
- I1229 10:57:05.281091 23 controllermanager.go:574] Started "ephemeral-volume"
- I1229 10:57:05.281148 23 controller.go:170] Starting ephemeral volume controller
- I1229 10:57:05.281152 23 shared_informer.go:240] Waiting for caches to sync for ephemeral
- I1229 10:57:05.284410 23 controllermanager.go:574] Started "deployment"
- I1229 10:57:05.284461 23 deployment_controller.go:153] "Starting controller" controller="deployment"
- I1229 10:57:05.284465 23 shared_informer.go:240] Waiting for caches to sync for deployment
- I1229 10:57:05.289661 23 garbagecollector.go:142] Starting garbage collector controller
- I1229 10:57:05.289666 23 shared_informer.go:240] Waiting for caches to sync for garbage collector
- I1229 10:57:05.289675 23 graph_builder.go:289] GraphBuilder running
- I1229 10:57:05.289700 23 controllermanager.go:574] Started "garbagecollector"
- I1229 10:57:05.357042 23 shared_informer.go:247] Caches are synced for tokens
- I1229 10:57:05.458604 23 controllermanager.go:574] Started "pv-protection"
- I1229 10:57:05.458630 23 pv_protection_controller.go:83] Starting PV protection controller
- I1229 10:57:05.458635 23 shared_informer.go:240] Waiting for caches to sync for PV protection
- I1229 10:57:05.609364 23 controllermanager.go:574] Started "endpoint"
- I1229 10:57:05.609391 23 endpoints_controller.go:189] Starting endpoint controller
- I1229 10:57:05.609396 23 shared_informer.go:240] Waiting for caches to sync for endpoint
- I1229 10:57:05.658513 23 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
- I1229 10:57:05.658521 23 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
- I1229 10:57:05.658535 23 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key
- I1229 10:57:05.658660 23 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
- I1229 10:57:05.658668 23 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
- I1229 10:57:05.658678 23 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
- I1229 10:57:05.658796 23 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
- I1229 10:57:05.658802 23 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
- I1229 10:57:05.658813 23 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
- I1229 10:57:05.658884 23 controllermanager.go:574] Started "csrsigning"
- I1229 10:57:05.658891 23 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
- I1229 10:57:05.658896 23 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
- I1229 10:57:05.658904 23 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
- time="2023-12-29T10:57:05.720385085Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
- time="2023-12-29T10:57:05.720397979Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
- E1229 10:57:05.749058 23 node.go:161] Failed to retrieve node info: nodes "k3d-mycluster-server-0" not found
- I1229 10:57:05.809260 23 controllermanager.go:574] Started "ttl"
- I1229 10:57:05.809341 23 ttl_controller.go:121] Starting TTL controller
- I1229 10:57:05.809346 23 shared_informer.go:240] Waiting for caches to sync for TTL
- I1229 10:57:05.958834 23 controllermanager.go:574] Started "root-ca-cert-publisher"
- I1229 10:57:05.958864 23 publisher.go:102] Starting root CA certificate configmap publisher
- I1229 10:57:05.958869 23 shared_informer.go:240] Waiting for caches to sync for crt configmap
- I1229 10:57:06.212615 23 controllermanager.go:574] Started "namespace"
- I1229 10:57:06.212640 23 namespace_controller.go:200] Starting namespace controller
- I1229 10:57:06.212645 23 shared_informer.go:240] Waiting for caches to sync for namespace
- time="2023-12-29T10:57:06.222054384Z" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"
- time="2023-12-29T10:57:06.225211795Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-9.18.201.tgz"
- time="2023-12-29T10:57:06.225314418Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-9.18.201.tgz"
- time="2023-12-29T10:57:06.239555858Z" level=info msg="Failed to get existing traefik HelmChart" error="helmcharts.helm.cattle.io \"traefik\" not found"
- time="2023-12-29T10:57:06.239649003Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
- time="2023-12-29T10:57:06.239708435Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
- time="2023-12-29T10:57:06.239752608Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
- time="2023-12-29T10:57:06.239792683Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
- time="2023-12-29T10:57:06.239840173Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
- time="2023-12-29T10:57:06.239891790Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
- time="2023-12-29T10:57:06.239935001Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
- time="2023-12-29T10:57:06.239976940Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
- time="2023-12-29T10:57:06.240018398Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
- time="2023-12-29T10:57:06.240059014Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
- time="2023-12-29T10:57:06.240133995Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
- time="2023-12-29T10:57:06.240204067Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
- time="2023-12-29T10:57:06.341309935Z" level=info msg="Starting /v1, Kind=Secret controller"
- time="2023-12-29T10:57:06.342180804Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
- I1229 10:57:06.348742 23 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io
- time="2023-12-29T10:57:06.348802355Z" level=info msg="Active TLS secret k3s-serving (ver=256) (count 11): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.23.0.2:172.23.0.2 listener.cattle.io/cn-k3d-mycluster-server-0:k3d-mycluster-server-0 listener.cattle.io/cn-k3d-mycluster-serverlb:k3d-mycluster-serverlb listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=6396332AF6441DF7C9E095EA813302CA36FAD448]"
- time="2023-12-29T10:57:06.348966524Z" level=info msg="Waiting for control-plane node k3d-mycluster-server-0 startup: nodes \"k3d-mycluster-server-0\" not found"
- time="2023-12-29T10:57:06.350674357Z" level=info msg="Cluster dns configmap has been set successfully"
- time="2023-12-29T10:57:06.352616592Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"d53903d2-a8e6-4432-86b3-a06fac4749f1\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"258\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
- I1229 10:57:06.359794 23 controllermanager.go:574] Started "endpointslicemirroring"
- I1229 10:57:06.359817 23 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
- I1229 10:57:06.359822 23 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
- time="2023-12-29T10:57:06.362981106Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"d53903d2-a8e6-4432-86b3-a06fac4749f1\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"258\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
- time="2023-12-29T10:57:06.369281632Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"b81885bd-80d6-47a8-9e3e-a73861dd64b9\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"267\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
- I1229 10:57:06.393043 23 controller.go:611] quota admission added evaluator for: deployments.apps
- time="2023-12-29T10:57:06.400585308Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"b81885bd-80d6-47a8-9e3e-a73861dd64b9\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"267\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
- time="2023-12-29T10:57:06.404876123Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"local-storage\", UID:\"07006898-7db5-4b42-9403-1ac864adbb78\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"280\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
- time="2023-12-29T10:57:06.424139893Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"local-storage\", UID:\"07006898-7db5-4b42-9403-1ac864adbb78\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"280\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
- time="2023-12-29T10:57:06.427538238Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"f8f8f4fb-ad19-4c3d-8a97-120c40003598\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"292\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
- time="2023-12-29T10:57:06.431007827Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"f8f8f4fb-ad19-4c3d-8a97-120c40003598\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"292\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
- time="2023-12-29T10:57:06.434459121Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"c275f5e1-7536-405e-9444-42db065e496c\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"297\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
- time="2023-12-29T10:57:06.437650537Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"c275f5e1-7536-405e-9444-42db065e496c\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"297\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
- I1229 10:57:06.558754 23 controllermanager.go:574] Started "disruption"
- I1229 10:57:06.558771 23 disruption.go:363] Starting disruption controller
- I1229 10:57:06.558780 23 shared_informer.go:240] Waiting for caches to sync for disruption
- time="2023-12-29T10:57:06.643026217Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-reader\", UID:\"b5ccaffd-488e-4980-a4e3-8b34218909bc\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"305\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
- time="2023-12-29T10:57:06.646883526Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-reader\", UID:\"b5ccaffd-488e-4980-a4e3-8b34218909bc\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"305\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
- I1229 10:57:06.708473 23 controllermanager.go:574] Started "statefulset"
- W1229 10:57:06.708481 23 controllermanager.go:553] "tokencleaner" is disabled
- I1229 10:57:06.708489 23 stateful_set.go:146] Starting stateful set controller
- I1229 10:57:06.708495 23 shared_informer.go:240] Waiting for caches to sync for stateful set
- I1229 10:57:06.827995 23 serving.go:354] Generated self-signed cert in-memory
- I1229 10:57:06.858568 23 controllermanager.go:574] Started "endpointslice"
- I1229 10:57:06.858592 23 endpointslice_controller.go:256] Starting endpoint slice controller
- I1229 10:57:06.858597 23 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
- I1229 10:57:07.008881 23 controllermanager.go:574] Started "job"
- I1229 10:57:07.008920 23 job_controller.go:150] Starting job controller
- I1229 10:57:07.008924 23 shared_informer.go:240] Waiting for caches to sync for job
- time="2023-12-29T10:57:07.042700794Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-apiservice\", UID:\"47271eb1-2368-4f0c-8ea0-28c05eb6f341\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"319\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
- time="2023-12-29T10:57:07.047568384Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-apiservice\", UID:\"47271eb1-2368-4f0c-8ea0-28c05eb6f341\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"319\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
- W1229 10:57:07.069958 23 authentication.go:308] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
- W1229 10:57:07.069966 23 authentication.go:332] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
- W1229 10:57:07.069974 23 authorization.go:184] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
- I1229 10:57:07.071714 23 controllermanager.go:142] Version: v1.21.7+k3s1
- I1229 10:57:07.072120 23 secure_serving.go:202] Serving securely on 127.0.0.1:10258
- I1229 10:57:07.072179 23 tlsconfig.go:240] Starting DynamicServingCertificateController
- time="2023-12-29T10:57:07.150088994Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
- time="2023-12-29T10:57:07.150117197Z" level=info msg="Starting /v1, Kind=Pod controller"
- time="2023-12-29T10:57:07.150111326Z" level=info msg="Starting /v1, Kind=Service controller"
- time="2023-12-29T10:57:07.150143577Z" level=info msg="Starting batch/v1, Kind=Job controller"
- time="2023-12-29T10:57:07.150128599Z" level=info msg="Starting /v1, Kind=Endpoints controller"
- time="2023-12-29T10:57:07.150128228Z" level=info msg="Starting /v1, Kind=Node controller"
- time="2023-12-29T10:57:07.150138307Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
- I1229 10:57:07.158648 23 node_lifecycle_controller.go:377] Sending events to api server.
- I1229 10:57:07.158742 23 taint_manager.go:163] "Sending events to api server"
- I1229 10:57:07.158775 23 node_lifecycle_controller.go:505] Controller will reconcile labels.
- I1229 10:57:07.158792 23 controllermanager.go:574] Started "nodelifecycle"
- I1229 10:57:07.158815 23 node_lifecycle_controller.go:539] Starting node controller
- I1229 10:57:07.158819 23 shared_informer.go:240] Waiting for caches to sync for taint
- I1229 10:57:07.309175 23 controllermanager.go:574] Started "persistentvolume-expander"
- I1229 10:57:07.309201 23 expand_controller.go:327] Starting expand controller
- I1229 10:57:07.309206 23 shared_informer.go:240] Waiting for caches to sync for expand
- W1229 10:57:07.418000 23 fs.go:214] stat failed on /dev/mapper/enc-physical-vol with error: no such file or directory
- W1229 10:57:07.424918 23 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
- I1229 10:57:07.425168 23 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
- I1229 10:57:07.425255 23 container_manager_linux.go:291] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
- I1229 10:57:07.425297 23 container_manager_linux.go:296] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/k3s SystemCgroupsName: KubeletCgroupsName:/k3s ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
- I1229 10:57:07.425314 23 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
- I1229 10:57:07.425321 23 container_manager_linux.go:327] "Initializing Topology Manager" policy="none" scope="container"
- I1229 10:57:07.425326 23 container_manager_linux.go:332] "Creating device plugin manager" devicePluginEnabled=true
- I1229 10:57:07.425459 23 kubelet.go:404] "Attempting to sync node with API server"
- I1229 10:57:07.425472 23 kubelet.go:272] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
- I1229 10:57:07.425483 23 kubelet.go:283] "Adding apiserver pod source"
- I1229 10:57:07.425492 23 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
- I1229 10:57:07.425872 23 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="v1.4.12-k3s1" apiVersion="v1alpha2"
- I1229 10:57:07.426073 23 server.go:1191] "Started kubelet"
- I1229 10:57:07.426120 23 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
- W1229 10:57:07.426360 23 fs.go:588] stat failed on /dev/mapper/enc-physical-vol with error: no such file or directory
- E1229 10:57:07.426384 23 cri_stats_provider.go:369] "Failed to get the info of the filesystem with mountpoint" err="failed to get device for dir \"/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs\": could not find device with major: 0, minor: 30 in cached partitions map" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
- E1229 10:57:07.426404 23 kubelet.go:1306] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
- I1229 10:57:07.426640 23 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
- I1229 10:57:07.426678 23 volume_manager.go:271] "Starting Kubelet Volume Manager"
- I1229 10:57:07.426718 23 desired_state_of_world_populator.go:141] "Desired state populator starts to run"
- I1229 10:57:07.426894 23 server.go:409] "Adding debug handlers to kubelet server"
- I1229 10:57:07.432332 23 cpu_manager.go:199] "Starting CPU manager" policy="none"
- I1229 10:57:07.432340 23 cpu_manager.go:200] "Reconciling" reconcilePeriod="10s"
- I1229 10:57:07.432351 23 state_mem.go:36] "Initialized new in-memory state store"
- E1229 10:57:07.433276 23 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"k3d-mycluster-server-0\" not found" node="k3d-mycluster-server-0"
- I1229 10:57:07.434463 23 policy_none.go:44] "None policy: Start"
- W1229 10:57:07.434486 23 fs.go:588] stat failed on /dev/mapper/enc-physical-vol with error: no such file or directory
- E1229 10:57:07.434498 23 kubelet.go:1384] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir \"/var/lib/kubelet\": could not find device with major: 0, minor: 30 in cached partitions map"
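The two errors above are the fatal failure in this log: cadvisor stat()s the kubelet root dir, gets device `0:30` (an overlayfs mount inside the k3d container), cannot match it against its cached partitions map, and the kubelet's ContainerManager aborts. A minimal sketch of the major:minor lookup involved (illustrative only; `rootfs_device` is a hypothetical helper, not k3s or cadvisor code):

```python
import os

def rootfs_device(path="/var/lib/kubelet"):
    """Return the (major, minor) device numbers for the filesystem
    backing `path` -- the pair cadvisor tries to resolve against
    /proc/partitions. Overlayfs mounts report anonymous devices
    (major 0), which never appear there, hence the error above."""
    st = os.stat(path)
    return os.major(st.st_dev), os.minor(st.st_dev)

if __name__ == "__main__":
    # On the failing node this prints something like "0:30".
    major, minor = rootfs_device("/")
    print(f"{major}:{minor}")
```

Anonymous (major 0) devices are normal for overlayfs, tmpfs, and similar virtual filesystems, which is why running the kubelet with its state directory on an overlay mount trips this check.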
- The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
- (previous message repeated 97 times)
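Once the kubelet aborts, the k3s server exits and nothing is left listening on 6443, so every subsequent kubectl invocation fails with "connection refused". A small sketch of the equivalent TCP probe (hypothetical helper, not part of kubectl):

```python
import socket

def apiserver_reachable(host="127.0.0.1", port=6443, timeout=2.0):
    """Return True if something accepts TCP connections on the
    apiserver port; kubectl's 'connection refused' corresponds
    to the connect() failing here with ECONNREFUSED."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(apiserver_reachable())
```

Distinguishing "refused" (server process down, as in this log) from a timeout (firewall or wrong address) is usually the first step before looking at certificates or kubeconfig contents.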