Installing a Kubernetes v1.20.1 Cluster with kubeadm
I. Preparing the Installation Environment
1. Machine List
| Hostname | IP | OS | Role | Installed software |
| --- | --- | --- | --- | --- |
| master | 192.168.0.100 | CentOS 7 | management (control-plane) node | docker |
| node1 | 192.168.0.101 | CentOS 7 | worker node | docker |
| node2 | 192.168.0.102 | CentOS 7 | worker node | docker |
2. Environment Initialization
Note: the following steps must be run on all three machines.
2.1 Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/=enforcing/=disabled/g' /etc/selinux/config
2.2 Disable the swap partition
swapoff -a                            # temporary
sed -i '/swap/s/^/#/' /etc/fstab      # permanent
By default, kubelet refuses to run on a host that has an active swap partition. For future deployments, consider not creating a swap partition when installing the OS; for hosts that already have one, you can configure kubelet to ignore the no-swap restriction, otherwise kubelet will not start. In general, simply disabling swap as above is sufficient and the following step is not needed.
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
2.3 Add the yum repositories
docker-ce repository
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Kubernetes repository
cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
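If the repo file does not exist yet, one way to create it is with a heredoc containing exactly the content shown above (a sketch; adjust the mirror URL if you use a different one):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF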
2.4 Install docker and kubeadm
By default the latest versions are installed; you can also pin a specific version, e.g. kubelet-1.20.1 (a pinned example follows the command below).
yum install docker-ce kubelet kubeadm kubectl -y
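The command above installs whatever is newest in the repositories. To pin the v1.20.1 release targeted by this guide (version strings are illustrative and depend on what the mirror actually provides), a pinned install would look like:
# pin kubelet/kubeadm/kubectl to 1.20.1
yum install -y docker-ce kubelet-1.20.1 kubeadm-1.20.1 kubectl-1.20.1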
2.5 Start docker and kubelet
systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet
Note: at this point kubelet cannot start successfully; the errors can be seen in /var/log/messages. It will run normally once the master node has been initialized.
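For example, to watch the kubelet errors mentioned above you can tail the system log, or use journalctl as a general systemd alternative (the guide itself only relies on /var/log/messages):
# watch kubelet errors until the master is initialized
tail -f /var/log/messages
journalctl -u kubelet -f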
2.6 Pre-pull the required images
vim k8s-image-download.sh

#!/bin/bash
# download k8s 1.20.1 images
# get image-list by 'kubeadm config images list --kubernetes-version=v1.20.1'
# gcr.azk8s.cn/google-containers == k8s.gcr.io
if [ $# -ne 1 ];then
    echo "USAGE: bash `basename $0` KUBERNETES-VERSION"
    exit 1
fi
version=$1
images=`kubeadm config images list --kubernetes-version=${version} | awk -F'/' '{print $2}'`
for imageName in ${images[@]};do
    docker pull registry.aliyuncs.com/google_containers/$imageName
    # docker pull gcr.azk8s.cn/google-containers/$imageName
    # docker tag gcr.azk8s.cn/google-containers/$imageName k8s.gcr.io/$imageName
    # docker rmi gcr.azk8s.cn/google-containers/$imageName
done
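A possible way to run the script above with the version used in this guide, and then confirm the images were pulled locally:
# pre-pull the v1.20.1 images, then verify
bash k8s-image-download.sh v1.20.1
docker images | grep registry.aliyuncs.com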
II. Building the Cluster
1. Run on the master node
kubeadm init --kubernetes-version=v1.20.1 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --apiserver-advertise-address=192.168.0.100 \
  --ignore-preflight-errors=Swap \
  --ignore-preflight-errors=NumCPU \
  --image-repository registry.aliyuncs.com/google_containers
Flag descriptions
- --kubernetes-version=v1.20.1: the version to install.
- --apiserver-advertise-address: which of the master's IP addresses to use for communication with the other nodes in the cluster.
- --service-cidr: the Service network range, i.e. the address block used for load-balancing VIPs (ClusterIPs).
- --pod-network-cidr: the Pod network range, i.e. the address block Pods get their IPs from.
- --ignore-preflight-errors=: ignore specific preflight errors; for example, to ignore [ERROR NumCPU] and [ERROR Swap], add --ignore-preflight-errors=NumCPU and --ignore-preflight-errors=Swap.
- --image-repository: Kubernetes' default registry is k8s.gcr.io, which is generally not reachable from mainland China; point it at the Alibaba Cloud mirror registry.aliyuncs.com/google_containers instead.
If the host has multiple network interfaces, it is best to set --apiserver-advertise-address explicitly; a config-file equivalent of the flags above is sketched below for reference.
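The same settings can also be expressed as a kubeadm configuration file instead of command-line flags. This is only a sketch using the v1beta2 kubeadm API supported by kubeadm 1.20; the file name kubeadm-config.yaml is arbitrary and the values simply mirror the flags above:
# write the config file, then initialize from it
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.100
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.1
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF
kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap,NumCPU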
The output of the command looks like the following:
[init] Using Kubernetes version: v1.20.1
[preflight] Running pre-flight checks
	[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 1.1.1.101]
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-1 localhost] and IPs [1.1.1.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-1 localhost] and IPs [1.1.1.101 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.503724 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: z1609x.bg2tkrsrfwlrl3rb
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.100:6443 --token z1609x.bg2tkrsrfwlrl3rb \
    --discovery-token-ca-cert-hash sha256:0753a3d2f04c6c34c5ad88d4be3bc508b1e5b9d00908b29442f7068645521703
The initialization goes through the 15 steps below; each phase prefixes its output with the [step name]:
- [init]: initialize with the specified version.
- [preflight]: run pre-initialization checks and download the required Docker images.
- [kubelet-start]: generate the kubelet configuration file "/var/lib/kubelet/config.yaml"; kubelet cannot start without it, which is why kubelet failed to start before initialization.
- [certificates]: generate the certificates Kubernetes uses and store them in /etc/kubernetes/pki.
- [kubeconfig]: generate the kubeconfig files in /etc/kubernetes; the components use these files to communicate with each other.
- [control-plane]: install the master components from the YAML files in the /etc/kubernetes/manifests directory.
- [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.
- [wait-control-plane]: wait for the master components deployed in the control-plane step to start.
- [apiclient]: check the status of the master component services.
- [upload-config]: upload the configuration used for the cluster.
- [kubelet]: configure kubelet via a ConfigMap.
- [patchnode]: record CNI information on the node, stored as annotations.
- [mark-control-plane]: label the current node with the master role and taint it as unschedulable, so Pods are not scheduled onto the master by default.
- [bootstrap-token]: generate the token; note it down, it is needed later when adding nodes to the cluster with kubeadm join.
- [addons]: install the add-ons CoreDNS and kube-proxy.
By default, kubectl looks for a config file in the .kube directory under the home directory of the user running it. Here we copy the admin.conf generated during the [kubeconfig] step of initialization to .kube/config.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
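Alternatively, when working as root you can point kubectl at admin.conf directly instead of copying it, which matches the hint kubeadm itself prints for root users:
export KUBECONFIG=/etc/kubernetes/admin.conf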
2. Install the flannel network plugin
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
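Note that the --pod-network-cidr=10.244.0.0/16 passed to kubeadm init matches flannel's default network. In kube-flannel.yml this lives in the kube-flannel-cfg ConfigMap; the relevant excerpt looks roughly like the following (it may vary between flannel versions), so if you chose a different Pod CIDR, download and edit the manifest before applying it:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }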
3. Join the nodes to the cluster
Run on each worker node:
kubeadm join 192.168.0.100:6443 --token z1609x.bg2tkrsrfwlrl3rb \
    --discovery-token-ca-cert-hash sha256:0753a3d2f04c6c34c5ad88d4be3bc508b1e5b9d00908b29442f7068645521703
4. Check the cluster status
kubectl get node
kubectl get pod --all-namespaces -o wide
When every node is in the Ready state and every Pod is Running, the cluster is working correctly.
5. Test that DNS resolution works
kubectl run -it busybox --image=radial/busyboxplus:curl
[ root@busybox:/ ]$ nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@busybox:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@busybox:/ ]$
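Once the lookups succeed you can leave the interactive shell; the test pod keeps running afterwards, so you may want to remove it:
# exit the busybox shell, then delete the test pod
exit
kubectl delete pod busybox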
6. Test the cluster
Create a pod in the Kubernetes cluster, expose a port, and verify that it can be accessed:
kubectl create deployment nginx-deploy --image=nginx
kubectl expose deployment nginx-deploy --port=80 --type=NodePort
kubectl get pod,svc
NAME                               READY   STATUS    RESTARTS   AGE
pod/nginx-deploy-8588f9dfb-q9qqd   1/1     Running   0          8h

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP        9h
service/nginx-deploy   NodePort    10.104.148.168   <none>        80:31629/TCP   8h
Access URL: http://NodeIP:Port, which in this example is http://192.168.0.100:31629
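For example, from any machine that can reach the node (the NodePort 31629 comes from this example and will differ in your cluster):
curl http://192.168.0.100:31629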
Notes:
By default a token is valid for 24 hours; once it expires it can no longer be used.
If more nodes need to join the cluster later, generate a new token as follows:
# generate a new token
kubeadm token create
0w3a92.ijgba9ia0e3scicg

# list tokens
kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
0w3a92.ijgba9ia0e3scicg   23h   2019-09-08T22:02:40+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
t0ehj8.k4ef3gq0icr3etl0   22h   2019-09-08T20:58:34+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token

# get the sha256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# join the node to the cluster
kubeadm join --token aa78f6.8b4cafc8ed26c34f --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538 192.168.73.138:6443 --skip-preflight-checks
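With kubeadm 1.20 you can also generate a fresh token and print the complete join command in one step, which avoids computing the CA certificate hash by hand:
# prints a ready-to-run "kubeadm join ..." command
kubeadm token create --print-join-command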