Untitled
a guest, Nov 26th, 2020

sudo kubeadm init --control-plane-endpoint="192.168.20.10:6443" --upload-certs --apiserver-advertise-address=192.168.20.21 --pod-network-cidr=10.100.0.0/16
W1126 21:56:30.244529 57406 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-master-1 kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.20.21 192.168.20.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master-1 localhost] and IPs [192.168.20.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master-1 localhost] and IPs [192.168.20.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.038355 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
57d92a387afbd601fba5da9e310523fa5ac8dfcdf0fd70dd8624a9950ce06457
[mark-control-plane] Marking the node kubernetes-master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubernetes-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: c2p4af.9s3aapujrfjkjlho
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
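The three commands above can be tried end to end without a real control plane. This sketch runs the same copy-and-chown flow against a scratch directory, with a placeholder file standing in for /etc/kubernetes/admin.conf (the real source on a control-plane node):

```shell
# Sketch of the kubeconfig setup, run against a scratch HOME so it is safe
# to execute anywhere. On a real node the source is /etc/kubernetes/admin.conf
# and the copy/chown need sudo; neither is required for this stand-in.
SCRATCH_HOME="$(mktemp -d)"
ADMIN_CONF="$SCRATCH_HOME/admin.conf"        # stand-in for /etc/kubernetes/admin.conf
printf 'apiVersion: v1\nkind: Config\n' > "$ADMIN_CONF"

mkdir -p "$SCRATCH_HOME/.kube"
cp "$ADMIN_CONF" "$SCRATCH_HOME/.kube/config"
chown "$(id -u):$(id -g)" "$SCRATCH_HOME/.kube/config"
echo "kubeconfig installed at $SCRATCH_HOME/.kube/config"
```

On the real node, `kubectl get nodes` is the usual first check that the installed kubeconfig can reach the API server.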

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
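As one concrete instance of the "[podnetwork].yaml" placeholder, Calico is among the options on the addons page. The manifest URL below is Calico's published one (an assumption, not taken from this log), and the command is echoed rather than run, since actually applying it requires a live cluster:

```shell
# Example only: Calico as the pod network. The URL is assumed, not from the log.
# Echoed instead of executed so the sketch runs without a cluster.
CNI_MANIFEST="https://docs.projectcalico.org/manifests/calico.yaml"
echo "kubectl apply -f $CNI_MANIFEST"
```

Note that the `--pod-network-cidr=10.100.0.0/16` passed to `kubeadm init` above must match (or be configured into) whichever network addon you choose.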

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join 192.168.20.10:6443 --token c2p4af.9s3aapujrfjkjlho \
    --discovery-token-ca-cert-hash sha256:ff3fc8d5e1a7ee16e2d48362cef4e3fa53df4c8fd672e69c8fe2c9e5826ab0c9 \
    --control-plane --certificate-key 57d92a387afbd601fba5da9e310523fa5ac8dfcdf0fd70dd8624a9950ce06457
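The `--token` value is a kubeadm bootstrap token, which always has the fixed form `<6-char id>.<16-char secret>`, both parts lowercase alphanumeric. A quick sanity check on the token printed in this log:

```shell
# Bootstrap tokens are "<6-char id>.<16-char secret>", lowercase alphanumeric.
# This validates the token from the log above against that pattern.
TOKEN="c2p4af.9s3aapujrfjkjlho"
if echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format OK"
else
  echo "malformed token" >&2
  exit 1
fi
```

Tokens expire (24 hours by default); if this one has lapsed, `kubeadm token create --print-join-command` on the control plane prints a fresh worker join command.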

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.20.10:6443 --token c2p4af.9s3aapujrfjkjlho \
    --discovery-token-ca-cert-hash sha256:ff3fc8d5e1a7ee16e2d48362cef4e3fa53df4c8fd672e69c8fe2c9e5826ab0c9
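The `--discovery-token-ca-cert-hash` value is "sha256:" plus the SHA-256 digest of the cluster CA's DER-encoded public key, which joining nodes use to pin the CA. On a control-plane node the input would be /etc/kubernetes/pki/ca.crt; this sketch generates a throwaway CA so it can run anywhere:

```shell
# Recomputing a --discovery-token-ca-cert-hash style value. The real input is
# /etc/kubernetes/pki/ca.crt; a throwaway self-signed CA stands in for it here.
WORKDIR="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -days 1 -nodes \
  -keyout "$WORKDIR/ca.key" -out "$WORKDIR/ca.crt" \
  -subj "/CN=throwaway-ca" 2>/dev/null

# Extract the public key, DER-encode it, and take its SHA-256 digest.
HASH="$(openssl x509 -pubkey -noout -in "$WORKDIR/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | awk '{print $NF}')"
echo "sha256:$HASH"
```

Running the same pipeline against the real ca.crt on kubernetes-master-1 should reproduce the sha256:ff3fc8... value shown in the join commands above.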