  1. time="2023-12-29T10:57:01.134269154Z" level=info msg="Starting k3s v1.21.7+k3s1 (ac705709)"
  2. The connection to the server localhost:8080 was refused - did you specify the right host or port?
  3. time="2023-12-29T10:57:01.142693236Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
  4. time="2023-12-29T10:57:01.142711000Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
  5. time="2023-12-29T10:57:01.147254129Z" level=info msg="Database tables and indexes are up to date"
  6. time="2023-12-29T10:57:01.148206100Z" level=info msg="Kine listening on unix://kine.sock"
  7. time="2023-12-29T10:57:01.156250548Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  8. time="2023-12-29T10:57:01.156642516Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  9. time="2023-12-29T10:57:01.157011580Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  10. time="2023-12-29T10:57:01.157362371Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  11. time="2023-12-29T10:57:01.157734661Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  12. time="2023-12-29T10:57:01.158064061Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  13. time="2023-12-29T10:57:01.158447202Z" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  14. time="2023-12-29T10:57:01.159058613Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  15. time="2023-12-29T10:57:01.159693597Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  16. time="2023-12-29T10:57:01.160434091Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  17. time="2023-12-29T10:57:01.160774061Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  18. time="2023-12-29T10:57:01.161341528Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  19. time="2023-12-29T10:57:01.307576582Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  20. time="2023-12-29T10:57:01.307820552Z" level=info msg="Active TLS secret (ver=) (count 11): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.23.0.2:172.23.0.2 listener.cattle.io/cn-k3d-mycluster-server-0:k3d-mycluster-server-0 listener.cattle.io/cn-k3d-mycluster-serverlb:k3d-mycluster-serverlb listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=6396332AF6441DF7C9E095EA813302CA36FAD448]"
  21. time="2023-12-29T10:57:01.312055832Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
  22. Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
  23. I1229 10:57:01.312872 23 server.go:656] external host was not specified, using 172.23.0.2
  24. I1229 10:57:01.313042 23 server.go:195] Version: v1.21.7+k3s1
  25. time="2023-12-29T10:57:01.314224122Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
  26. time="2023-12-29T10:57:01.314293142Z" level=info msg="Waiting for API server to become available"
  27. time="2023-12-29T10:57:01.314529246Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
  28. time="2023-12-29T10:57:01.314859027Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
  29. time="2023-12-29T10:57:01.315307871Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
  30. time="2023-12-29T10:57:01.315332247Z" level=info msg="To join node to cluster: k3s agent -s https://172.23.0.2:6443 -t ${NODE_TOKEN}"
  31. time="2023-12-29T10:57:01.315896809Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
  32. time="2023-12-29T10:57:01.315959297Z" level=info msg="Run: k3s kubectl"
  33. time="2023-12-29T10:57:01.335687491Z" level=info msg="certificate CN=k3d-mycluster-server-0 signed by CN=k3s-server-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  34. time="2023-12-29T10:57:01.336982879Z" level=info msg="certificate CN=system:node:k3d-mycluster-server-0,O=system:nodes signed by CN=k3s-client-ca@1703847421: notBefore=2023-12-29 10:57:01 +0000 UTC notAfter=2024-12-28 10:57:01 +0000 UTC"
  35. time="2023-12-29T10:57:01.389311721Z" level=info msg="Module overlay was already loaded"
  36. time="2023-12-29T10:57:01.389330486Z" level=info msg="Module nf_conntrack was already loaded"
  37. time="2023-12-29T10:57:01.389338932Z" level=info msg="Module br_netfilter was already loaded"
  38. time="2023-12-29T10:57:01.389346817Z" level=info msg="Module iptable_nat was already loaded"
  39. time="2023-12-29T10:57:01.394469347Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
  40. time="2023-12-29T10:57:01.394547103Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
  41. time="2023-12-29T10:57:01.395222584Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
  42. time="2023-12-29T10:57:01.395312433Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
  43. I1229 10:57:01.600720 23 shared_informer.go:240] Waiting for caches to sync for node_authorizer
  44. I1229 10:57:01.601391 23 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
  45. I1229 10:57:01.601398 23 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
  46. I1229 10:57:01.602031 23 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
  47. I1229 10:57:01.602037 23 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
  48. I1229 10:57:01.614531 23 instance.go:283] Using reconciler: lease
  49. I1229 10:57:01.638076 23 rest.go:130] the default service ipfamily for this cluster is: IPv4
  50. W1229 10:57:01.867549 23 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
  51. W1229 10:57:01.877712 23 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
  52. W1229 10:57:01.880465 23 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
  53. W1229 10:57:01.884571 23 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
  54. W1229 10:57:01.886277 23 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
  55. W1229 10:57:01.890795 23 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
  56. W1229 10:57:01.890801 23 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
  57. I1229 10:57:01.897037 23 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
  58. I1229 10:57:01.897044 23 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
  59. time="2023-12-29T10:57:02.396402264Z" level=info msg="Containerd is now running"
  60. time="2023-12-29T10:57:02.401129680Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
  61. time="2023-12-29T10:57:02.402459333Z" level=info msg="Handling backend connection request [k3d-mycluster-server-0]"
  62. time="2023-12-29T10:57:02.402814902Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-mycluster-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
  63. time="2023-12-29T10:57:02.403213512Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-mycluster-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
  64. Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
  65. Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
  66. Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
  67. Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
  68. W1229 10:57:02.403380 23 server.go:224] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
  69. I1229 10:57:02.403703 23 server.go:436] "Kubelet version" kubeletVersion="v1.21.7+k3s1"
  70. W1229 10:57:02.403840 23 proxier.go:653] Failed to read file /lib/modules/6.1.63/modules.builtin with error open /lib/modules/6.1.63/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
  71. W1229 10:57:02.404269 23 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
  72. W1229 10:57:02.404462 23 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
  73. W1229 10:57:02.404652 23 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
  74. W1229 10:57:02.404938 23 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
  75. W1229 10:57:02.405126 23 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
  76. E1229 10:57:02.408385 23 node.go:161] Failed to retrieve node info: nodes "k3d-mycluster-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
  77. W1229 10:57:02.415325 23 manager.go:159] Cannot detect current cgroup on cgroup v2
  78. I1229 10:57:02.415362 23 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
  79. I1229 10:57:02.691575 23 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
  80. I1229 10:57:02.691592 23 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
  81. I1229 10:57:02.691774 23 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
  82. I1229 10:57:02.691974 23 secure_serving.go:202] Serving securely on 127.0.0.1:6444
  83. I1229 10:57:02.692009 23 tlsconfig.go:240] Starting DynamicServingCertificateController
  84. I1229 10:57:02.692038 23 autoregister_controller.go:141] Starting autoregister controller
  85. I1229 10:57:02.692041 23 available_controller.go:475] Starting AvailableConditionController
  86. I1229 10:57:02.692043 23 cache.go:32] Waiting for caches to sync for autoregister controller
  87. I1229 10:57:02.692045 23 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
  88. I1229 10:57:02.692079 23 apf_controller.go:307] Starting API Priority and Fairness config controller
  89. I1229 10:57:02.692098 23 crdregistration_controller.go:111] Starting crd-autoregister controller
  90. I1229 10:57:02.692104 23 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
  91. I1229 10:57:02.692112 23 apiservice_controller.go:97] Starting APIServiceRegistrationController
  92. I1229 10:57:02.692123 23 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
  93. I1229 10:57:02.692129 23 controller.go:83] Starting OpenAPI AggregationController
  94. I1229 10:57:02.692125 23 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
  95. I1229 10:57:02.692282 23 controller.go:86] Starting OpenAPI controller
  96. I1229 10:57:02.692300 23 establishing_controller.go:76] Starting EstablishingController
  97. I1229 10:57:02.692316 23 customresource_discovery_controller.go:209] Starting DiscoveryController
  98. I1229 10:57:02.692426 23 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
  99. I1229 10:57:02.692458 23 crd_finalizer.go:266] Starting CRDFinalizer
  100. I1229 10:57:02.692486 23 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
  101. I1229 10:57:02.692968 23 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
  102. I1229 10:57:02.692975 23 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
  103. I1229 10:57:02.692969 23 naming_controller.go:291] Starting NamingConditionController
  104. I1229 10:57:02.693025 23 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
  105. I1229 10:57:02.693041 23 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
  106. I1229 10:57:02.699118 23 controller.go:611] quota admission added evaluator for: namespaces
  107. I1229 10:57:02.700754 23 shared_informer.go:247] Caches are synced for node_authorizer
  108. E1229 10:57:02.703281 23 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocated ip:10.43.0.1 with error:cannot allocate resources of type serviceipallocations at this time
  109. E1229 10:57:02.703770 23 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.23.0.2, ResourceVersion: 0, AdditionalErrorMsg:
  110. I1229 10:57:02.792919 23 cache.go:39] Caches are synced for APIServiceRegistrationController controller
  111. I1229 10:57:02.792960 23 cache.go:39] Caches are synced for AvailableConditionController controller
  112. I1229 10:57:02.792970 23 shared_informer.go:247] Caches are synced for crd-autoregister
  113. I1229 10:57:02.792975 23 cache.go:39] Caches are synced for autoregister controller
  114. I1229 10:57:02.792980 23 apf_controller.go:312] Running API Priority and Fairness config worker
  115. I1229 10:57:02.793155 23 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
  116. E1229 10:57:03.557826 23 node.go:161] Failed to retrieve node info: nodes "k3d-mycluster-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
  117. I1229 10:57:03.691936 23 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
  118. I1229 10:57:03.691946 23 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
  119. I1229 10:57:03.695429 23 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
  120. I1229 10:57:03.697463 23 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
  121. I1229 10:57:03.697473 23 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
  122. I1229 10:57:03.876629 23 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
  123. I1229 10:57:03.892100 23 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
  124. W1229 10:57:03.918812 23 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.23.0.2]
  125. I1229 10:57:03.919237 23 controller.go:611] quota admission added evaluator for: endpoints
  126. I1229 10:57:03.921045 23 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
  127. Error from server (NotFound): nodes "k3d-mycluster-server-0" not found
  128. time="2023-12-29T10:57:04.698229325Z" level=info msg="Kube API server is now running"
  129. time="2023-12-29T10:57:04.698260334Z" level=info msg="k3s is up and running"
  130. time="2023-12-29T10:57:04.698245105Z" level=info msg="Waiting for cloud-controller-manager privileges to become available"
  131. Flag --address has been deprecated, see --bind-address instead.
  132. I1229 10:57:04.699754 23 controllermanager.go:175] Version: v1.21.7+k3s1
  133. I1229 10:57:04.699960 23 deprecated_insecure_serving.go:56] Serving insecurely on 127.0.0.1:10252
  134. time="2023-12-29T10:57:04.703590734Z" level=info msg="Creating CRD addons.k3s.cattle.io"
  135. time="2023-12-29T10:57:04.706664348Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
  136. time="2023-12-29T10:57:04.708601724Z" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io"
  137. time="2023-12-29T10:57:04.716990299Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
  138. time="2023-12-29T10:57:05.218529835Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
  139. time="2023-12-29T10:57:05.218544814Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
  140. I1229 10:57:05.256516 23 shared_informer.go:240] Waiting for caches to sync for tokens
  141. I1229 10:57:05.261084 23 controller.go:611] quota admission added evaluator for: serviceaccounts
  142. I1229 10:57:05.262280 23 controllermanager.go:574] Started "podgc"
  143. I1229 10:57:05.262335 23 gc_controller.go:89] Starting GC controller
  144. I1229 10:57:05.262340 23 shared_informer.go:240] Waiting for caches to sync for GC
  145. I1229 10:57:05.268210 23 controllermanager.go:574] Started "horizontalpodautoscaling"
  146. I1229 10:57:05.268266 23 horizontal.go:169] Starting HPA controller
  147. I1229 10:57:05.268270 23 shared_informer.go:240] Waiting for caches to sync for HPA
  148. W1229 10:57:05.271424 23 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
  149. I1229 10:57:05.271657 23 controllermanager.go:574] Started "attachdetach"
  150. I1229 10:57:05.271721 23 attach_detach_controller.go:328] Starting attach detach controller
  151. I1229 10:57:05.271726 23 shared_informer.go:240] Waiting for caches to sync for attach detach
  152. I1229 10:57:05.274756 23 controllermanager.go:574] Started "replicationcontroller"
  153. I1229 10:57:05.274779 23 replica_set.go:182] Starting replicationcontroller controller
  154. I1229 10:57:05.274788 23 shared_informer.go:240] Waiting for caches to sync for ReplicationController
  155. I1229 10:57:05.277862 23 controllermanager.go:574] Started "csrcleaner"
  156. W1229 10:57:05.277868 23 controllermanager.go:553] "bootstrapsigner" is disabled
  157. W1229 10:57:05.277870 23 controllermanager.go:553] "service" is disabled
  158. W1229 10:57:05.277873 23 controllermanager.go:553] "route" is disabled
  159. I1229 10:57:05.277921 23 cleaner.go:82] Starting CSR cleaner controller
  160. I1229 10:57:05.281091 23 controllermanager.go:574] Started "ephemeral-volume"
  161. I1229 10:57:05.281148 23 controller.go:170] Starting ephemeral volume controller
  162. I1229 10:57:05.281152 23 shared_informer.go:240] Waiting for caches to sync for ephemeral
  163. I1229 10:57:05.284410 23 controllermanager.go:574] Started "deployment"
  164. I1229 10:57:05.284461 23 deployment_controller.go:153] "Starting controller" controller="deployment"
  165. I1229 10:57:05.284465 23 shared_informer.go:240] Waiting for caches to sync for deployment
  166. I1229 10:57:05.289661 23 garbagecollector.go:142] Starting garbage collector controller
  167. I1229 10:57:05.289666 23 shared_informer.go:240] Waiting for caches to sync for garbage collector
  168. I1229 10:57:05.289675 23 graph_builder.go:289] GraphBuilder running
  169. I1229 10:57:05.289700 23 controllermanager.go:574] Started "garbagecollector"
  170. I1229 10:57:05.357042 23 shared_informer.go:247] Caches are synced for tokens
  171. I1229 10:57:05.458604 23 controllermanager.go:574] Started "pv-protection"
  172. I1229 10:57:05.458630 23 pv_protection_controller.go:83] Starting PV protection controller
  173. I1229 10:57:05.458635 23 shared_informer.go:240] Waiting for caches to sync for PV protection
  174. I1229 10:57:05.609364 23 controllermanager.go:574] Started "endpoint"
  175. I1229 10:57:05.609391 23 endpoints_controller.go:189] Starting endpoint controller
  176. I1229 10:57:05.609396 23 shared_informer.go:240] Waiting for caches to sync for endpoint
  177. I1229 10:57:05.658513 23 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
  178. I1229 10:57:05.658521 23 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
  179. I1229 10:57:05.658535 23 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key
  180. I1229 10:57:05.658660 23 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
  181. I1229 10:57:05.658668 23 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
  182. I1229 10:57:05.658678 23 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
  183. I1229 10:57:05.658796 23 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
  184. I1229 10:57:05.658802 23 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
  185. I1229 10:57:05.658813 23 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
  186. I1229 10:57:05.658884 23 controllermanager.go:574] Started "csrsigning"
  187. I1229 10:57:05.658891 23 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
  188. I1229 10:57:05.658896 23 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
  189. I1229 10:57:05.658904 23 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
  190. time="2023-12-29T10:57:05.720385085Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
  191. time="2023-12-29T10:57:05.720397979Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
  192. E1229 10:57:05.749058 23 node.go:161] Failed to retrieve node info: nodes "k3d-mycluster-server-0" not found
  193. I1229 10:57:05.809260 23 controllermanager.go:574] Started "ttl"
  194. I1229 10:57:05.809341 23 ttl_controller.go:121] Starting TTL controller
  195. I1229 10:57:05.809346 23 shared_informer.go:240] Waiting for caches to sync for TTL
  196. I1229 10:57:05.958834 23 controllermanager.go:574] Started "root-ca-cert-publisher"
  197. I1229 10:57:05.958864 23 publisher.go:102] Starting root CA certificate configmap publisher
  198. I1229 10:57:05.958869 23 shared_informer.go:240] Waiting for caches to sync for crt configmap
  199. I1229 10:57:06.212615 23 controllermanager.go:574] Started "namespace"
  200. I1229 10:57:06.212640 23 namespace_controller.go:200] Starting namespace controller
  201. I1229 10:57:06.212645 23 shared_informer.go:240] Waiting for caches to sync for namespace
  202. time="2023-12-29T10:57:06.222054384Z" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"
  203. time="2023-12-29T10:57:06.225211795Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-9.18.201.tgz"
  204. time="2023-12-29T10:57:06.225314418Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-9.18.201.tgz"
  205. time="2023-12-29T10:57:06.239555858Z" level=info msg="Failed to get existing traefik HelmChart" error="helmcharts.helm.cattle.io \"traefik\" not found"
  206. time="2023-12-29T10:57:06.239649003Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
  207. time="2023-12-29T10:57:06.239708435Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
  208. time="2023-12-29T10:57:06.239752608Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
  209. time="2023-12-29T10:57:06.239792683Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
  210. time="2023-12-29T10:57:06.239840173Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
  211. time="2023-12-29T10:57:06.239891790Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
  212. time="2023-12-29T10:57:06.239935001Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
  213. time="2023-12-29T10:57:06.239976940Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
  214. time="2023-12-29T10:57:06.240018398Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
  215. time="2023-12-29T10:57:06.240059014Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
  216. time="2023-12-29T10:57:06.240133995Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
  217. time="2023-12-29T10:57:06.240204067Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
  218. time="2023-12-29T10:57:06.341309935Z" level=info msg="Starting /v1, Kind=Secret controller"
  219. time="2023-12-29T10:57:06.342180804Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
  220. I1229 10:57:06.348742 23 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io
  221. time="2023-12-29T10:57:06.348802355Z" level=info msg="Active TLS secret k3s-serving (ver=256) (count 11): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.23.0.2:172.23.0.2 listener.cattle.io/cn-k3d-mycluster-server-0:k3d-mycluster-server-0 listener.cattle.io/cn-k3d-mycluster-serverlb:k3d-mycluster-serverlb listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=6396332AF6441DF7C9E095EA813302CA36FAD448]"
  222. time="2023-12-29T10:57:06.348966524Z" level=info msg="Waiting for control-plane node k3d-mycluster-server-0 startup: nodes \"k3d-mycluster-server-0\" not found"
  223. time="2023-12-29T10:57:06.350674357Z" level=info msg="Cluster dns configmap has been set successfully"
  224. time="2023-12-29T10:57:06.352616592Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"d53903d2-a8e6-4432-86b3-a06fac4749f1\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"258\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
  225. I1229 10:57:06.359794 23 controllermanager.go:574] Started "endpointslicemirroring"
  226. I1229 10:57:06.359817 23 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
  227. I1229 10:57:06.359822 23 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
  228. time="2023-12-29T10:57:06.362981106Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"d53903d2-a8e6-4432-86b3-a06fac4749f1\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"258\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
  229. time="2023-12-29T10:57:06.369281632Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"b81885bd-80d6-47a8-9e3e-a73861dd64b9\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"267\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
  230. I1229 10:57:06.393043 23 controller.go:611] quota admission added evaluator for: deployments.apps
  231. time="2023-12-29T10:57:06.400585308Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"b81885bd-80d6-47a8-9e3e-a73861dd64b9\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"267\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
  232. time="2023-12-29T10:57:06.404876123Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"local-storage\", UID:\"07006898-7db5-4b42-9403-1ac864adbb78\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"280\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
  233. time="2023-12-29T10:57:06.424139893Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"local-storage\", UID:\"07006898-7db5-4b42-9403-1ac864adbb78\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"280\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
  234. time="2023-12-29T10:57:06.427538238Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"f8f8f4fb-ad19-4c3d-8a97-120c40003598\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"292\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
  235. time="2023-12-29T10:57:06.431007827Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"f8f8f4fb-ad19-4c3d-8a97-120c40003598\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"292\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
  236. time="2023-12-29T10:57:06.434459121Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"c275f5e1-7536-405e-9444-42db065e496c\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"297\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
  237. time="2023-12-29T10:57:06.437650537Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"c275f5e1-7536-405e-9444-42db065e496c\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"297\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
  238. I1229 10:57:06.558754 23 controllermanager.go:574] Started "disruption"
  239. I1229 10:57:06.558771 23 disruption.go:363] Starting disruption controller
  240. I1229 10:57:06.558780 23 shared_informer.go:240] Waiting for caches to sync for disruption
  241. time="2023-12-29T10:57:06.643026217Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-reader\", UID:\"b5ccaffd-488e-4980-a4e3-8b34218909bc\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"305\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
  242. time="2023-12-29T10:57:06.646883526Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-reader\", UID:\"b5ccaffd-488e-4980-a4e3-8b34218909bc\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"305\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
  243. I1229 10:57:06.708473 23 controllermanager.go:574] Started "statefulset"
  244. W1229 10:57:06.708481 23 controllermanager.go:553] "tokencleaner" is disabled
  245. I1229 10:57:06.708489 23 stateful_set.go:146] Starting stateful set controller
  246. I1229 10:57:06.708495 23 shared_informer.go:240] Waiting for caches to sync for stateful set
  247. I1229 10:57:06.827995 23 serving.go:354] Generated self-signed cert in-memory
  248. I1229 10:57:06.858568 23 controllermanager.go:574] Started "endpointslice"
  249. I1229 10:57:06.858592 23 endpointslice_controller.go:256] Starting endpoint slice controller
  250. I1229 10:57:06.858597 23 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
  251. I1229 10:57:07.008881 23 controllermanager.go:574] Started "job"
  252. I1229 10:57:07.008920 23 job_controller.go:150] Starting job controller
  253. I1229 10:57:07.008924 23 shared_informer.go:240] Waiting for caches to sync for job
  254. time="2023-12-29T10:57:07.042700794Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-apiservice\", UID:\"47271eb1-2368-4f0c-8ea0-28c05eb6f341\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"319\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
  255. time="2023-12-29T10:57:07.047568384Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-apiservice\", UID:\"47271eb1-2368-4f0c-8ea0-28c05eb6f341\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"319\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
  256. W1229 10:57:07.069958 23 authentication.go:308] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
  257. W1229 10:57:07.069966 23 authentication.go:332] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
  258. W1229 10:57:07.069974 23 authorization.go:184] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
  259. I1229 10:57:07.071714 23 controllermanager.go:142] Version: v1.21.7+k3s1
  260. I1229 10:57:07.072120 23 secure_serving.go:202] Serving securely on 127.0.0.1:10258
  261. I1229 10:57:07.072179 23 tlsconfig.go:240] Starting DynamicServingCertificateController
  262. time="2023-12-29T10:57:07.150088994Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
  263. time="2023-12-29T10:57:07.150117197Z" level=info msg="Starting /v1, Kind=Pod controller"
  264. time="2023-12-29T10:57:07.150111326Z" level=info msg="Starting /v1, Kind=Service controller"
  265. time="2023-12-29T10:57:07.150143577Z" level=info msg="Starting batch/v1, Kind=Job controller"
  266. time="2023-12-29T10:57:07.150128599Z" level=info msg="Starting /v1, Kind=Endpoints controller"
  267. time="2023-12-29T10:57:07.150128228Z" level=info msg="Starting /v1, Kind=Node controller"
  268. time="2023-12-29T10:57:07.150138307Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
  269. I1229 10:57:07.158648 23 node_lifecycle_controller.go:377] Sending events to api server.
  270. I1229 10:57:07.158742 23 taint_manager.go:163] "Sending events to api server"
  271. I1229 10:57:07.158775 23 node_lifecycle_controller.go:505] Controller will reconcile labels.
  272. I1229 10:57:07.158792 23 controllermanager.go:574] Started "nodelifecycle"
  273. I1229 10:57:07.158815 23 node_lifecycle_controller.go:539] Starting node controller
  274. I1229 10:57:07.158819 23 shared_informer.go:240] Waiting for caches to sync for taint
  275. I1229 10:57:07.309175 23 controllermanager.go:574] Started "persistentvolume-expander"
  276. I1229 10:57:07.309201 23 expand_controller.go:327] Starting expand controller
  277. I1229 10:57:07.309206 23 shared_informer.go:240] Waiting for caches to sync for expand
  278. W1229 10:57:07.418000 23 fs.go:214] stat failed on /dev/mapper/enc-physical-vol with error: no such file or directory
  279. W1229 10:57:07.424918 23 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
  280. I1229 10:57:07.425168 23 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
  281. I1229 10:57:07.425255 23 container_manager_linux.go:291] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
  282. I1229 10:57:07.425297 23 container_manager_linux.go:296] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/k3s SystemCgroupsName: KubeletCgroupsName:/k3s ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
  283. I1229 10:57:07.425314 23 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
  284. I1229 10:57:07.425321 23 container_manager_linux.go:327] "Initializing Topology Manager" policy="none" scope="container"
  285. I1229 10:57:07.425326 23 container_manager_linux.go:332] "Creating device plugin manager" devicePluginEnabled=true
  286. I1229 10:57:07.425459 23 kubelet.go:404] "Attempting to sync node with API server"
  287. I1229 10:57:07.425472 23 kubelet.go:272] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
  288. I1229 10:57:07.425483 23 kubelet.go:283] "Adding apiserver pod source"
  289. I1229 10:57:07.425492 23 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
  290. I1229 10:57:07.425872 23 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="v1.4.12-k3s1" apiVersion="v1alpha2"
  291. I1229 10:57:07.426073 23 server.go:1191] "Started kubelet"
  292. I1229 10:57:07.426120 23 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
  293. W1229 10:57:07.426360 23 fs.go:588] stat failed on /dev/mapper/enc-physical-vol with error: no such file or directory
  294. E1229 10:57:07.426384 23 cri_stats_provider.go:369] "Failed to get the info of the filesystem with mountpoint" err="failed to get device for dir \"/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs\": could not find device with major: 0, minor: 30 in cached partitions map" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
  295. E1229 10:57:07.426404 23 kubelet.go:1306] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
  296. I1229 10:57:07.426640 23 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
  297. I1229 10:57:07.426678 23 volume_manager.go:271] "Starting Kubelet Volume Manager"
  298. I1229 10:57:07.426718 23 desired_state_of_world_populator.go:141] "Desired state populator starts to run"
  299. I1229 10:57:07.426894 23 server.go:409] "Adding debug handlers to kubelet server"
  300. I1229 10:57:07.432332 23 cpu_manager.go:199] "Starting CPU manager" policy="none"
  301. I1229 10:57:07.432340 23 cpu_manager.go:200] "Reconciling" reconcilePeriod="10s"
  302. I1229 10:57:07.432351 23 state_mem.go:36] "Initialized new in-memory state store"
  303. E1229 10:57:07.433276 23 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"k3d-mycluster-server-0\" not found" node="k3d-mycluster-server-0"
  304. I1229 10:57:07.434463 23 policy_none.go:44] "None policy: Start"
  305. W1229 10:57:07.434486 23 fs.go:588] stat failed on /dev/mapper/enc-physical-vol with error: no such file or directory
  306. E1229 10:57:07.434498 23 kubelet.go:1384] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir \"/var/lib/kubelet\": could not find device with major: 0, minor: 30 in cached partitions map"
  307. The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
  (the preceding message repeats 97 more times)