Kubernetes in Practice: Building a Self-Managed K8S Cluster on Alibaba Cloud ECS
I. Overview
For details, see the Alibaba Cloud documentation: https://help.aliyun.com/document_detail/98886.html?spm=a2c4g.11186623.6.1078.323b1c9bpVKOry
Resource allocation for my project (databases and middleware excluded):
II. Deploying the Image Registry
1) Install docker-compose, then install Docker as described later in this article.
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
$ sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
$ docker-compose --version
docker-compose version 1.26.2, build 1110ad01
2) Create a self-signed certificate for the registry domain.
mkdir -p /data/cert && chmod -R 777 /data/cert && cd /data/cert
openssl req -x509 -sha256 -nodes -days 3650 -newkey rsa:2048 -keyout harbor.key -out harbor.crt -subj "/CN=hub.jhmy.com"
3) Download the Harbor offline installer, edit harbor.yml, and set the hostname, certificate paths, and registry admin password.
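For reference, the parts of harbor.yml that typically need changing look roughly like the following. This is only a sketch matching this article's domain and certificate paths; verify it against the harbor.yml shipped with your Harbor version.

# Sketch of the relevant harbor.yml settings (values are examples)
hostname: hub.jhmy.com
https:
  port: 443
  certificate: /data/cert/harbor.crt
  private_key: /data/cert/harbor.key
harbor_admin_password: Harbor12345        # change the default admin password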
4) Run install.sh to deploy; once it finishes, open https://hostip in a browser.
Deployment flow: check environment -> load images -> prepare environment -> prepare configuration -> start services
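To verify the registry works from a Docker host, a quick push might look like the following. This is a hypothetical example: it assumes the node either trusts the certificate or lists hub.jhmy.com under insecure-registries (as configured in the Docker section below), and that Harbor's default public project "library" exists.

docker login hub.jhmy.com                                    # log in with the admin account set in harbor.yml
docker tag nginx:latest hub.jhmy.com/library/nginx:latest    # tag an image into the registry
docker push hub.jhmy.com/library/nginx:latest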
III. System Initialization
1) Set hostnames and hosts-file resolution
hostnamectl set-hostname k8s101
cat >> /etc/hosts <<EOF
172.1.1.114 hub.jhmy.com
172.1.1.101 k8s101
172.1.1.102 k8s102
172.1.1.103 k8s103
172.1.1.104 k8s104
……
172.1.1.99 k8sapi
EOF
2) Set up passwordless SSH login between nodes
ssh-keygen
ssh-copy-id -i .ssh/id_rsa.pub root@k8s-node1
3) Install dependencies and common tools, and sync the time and time zone
yum -y install vim curl wget unzip ntpdate net-tools ipvsadm ipset sysstat conntrack libseccomp
ntpdate ntp1.aliyun.com && ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
4) Disable swap, SELinux, and firewalld
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
systemctl stop firewalld && systemctl disable firewalld
5) Tune kernel parameters
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
fs.file-max=2000000
fs.nr_open=2000000
fs.inotify.max_user_instances=512
fs.inotify.max_user_watches=1280000
net.netfilter.nf_conntrack_max=524288
EOF
modprobe br_netfilter && sysctl -p /etc/sysctl.d/kubernetes.conf
6) Load the IPVS kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
sh /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_
7) Install the NFS file-sharing service
yum -y install nfs-common nfs-utils rpcbind
systemctl start nfs && systemctl enable nfs
systemctl start rpcbind && systemctl enable rpcbind
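The article does not show the export configuration. If the NFS server is meant to share a directory with the cluster, a minimal sketch would look like the following; the export path and client network here are assumptions, adjust them to your environment.

mkdir -p /data/nfs
cat >> /etc/exports <<EOF
/data/nfs 172.1.1.0/24(rw,sync,no_root_squash)
EOF
exportfs -r && showmount -e localhost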
IV. Deploying the High-Availability Cluster
1) Install and configure Docker
# Configure the yum repo and install Docker and its components
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-19.03.5 docker-ce-cli-19.03.5
# Configure the registry mirror, insecure registry, data root, and log options
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://jc3y13r3.mirror.aliyuncs.com"],
  "insecure-registries": ["hub.jhmy.com"],
  "data-root": "/data/docker",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
# Restart Docker and enable it at boot
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
2) Install Kubernetes components
# Configure the Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm, kubelet, and kubectl
yum -y install kubeadm-1.17.5 kubelet-1.17.5 kubectl-1.17.5 --setopt=obsoletes=0
systemctl enable kubelet.service
3) Initialize the first control-plane node
Pick any master node and edit its /etc/hosts so that k8sapi resolves to that node's own address (during system initialization we pointed it at the SLB address).
Although we plan to use an Alibaba Cloud SLB to load-balance kube-apiserver, at this point the cluster is not running, so nothing is listening on the k8sapi port and the SLB-balanced port is unreachable. Cluster initialization would therefore fail, so for now we use the current node's own address as the load-balancer address (the node effectively balancing itself) just to get the cluster initialized.
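In other words, on the node chosen for initialization (k8s101 here) the k8sapi entry is temporarily pointed at the node itself. One way to do this, assuming the hosts entries created during system initialization:

# Temporarily resolve k8sapi to this node instead of the SLB address
sed -i 's/^172.1.1.99[[:space:]]\+k8sapi/172.1.1.101 k8sapi/' /etc/hosts
grep k8sapi /etc/hosts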
Note: since this is a production environment, we change as many defaults as practical, for example the token, the apiserver port, the etcd data path, and the pod IP CIDR.
# kubeadm config print init-defaults > kubeadm-config.yaml
# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: token0.123456789kubeadm
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.1.1.101
  bindPort: 6333
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s101
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "k8sapi:6333"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /data/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.5
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.233.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
Once the first master has initialized, open the Alibaba Cloud load balancer console and add an internal SLB listener for kube-apiserver (only a layer-4 TCP listener can be used here).
For now, add only the current master as a backend and wait until the other masters have joined before adding them. Since the other two masters are not yet part of the cluster, adding them now would leave the SLB health check in an abnormal state, and nodes trying to join through the SLB would fail.
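Once the listener is configured, reachability of the apiserver through the SLB can be sanity-checked from any node. This check is not part of the original article; it assumes the default kubeadm RBAC that allows anonymous access to /version.

curl -k https://k8sapi:6333/version    # should return the Kubernetes version JSON if the SLB backend is healthy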
4) Join the remaining control-plane and worker nodes
# Following the hints in the init log, run kubeadm join on the other control-plane nodes.
kubeadm join 192.168.17.100:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:56d53268517... \
    --control-plane --certificate-key c4d1525b6cce4....
# As the log suggests, run the following on every control-plane node to give your user kubectl access.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Following the hints in the init log, run kubeadm join on the worker nodes.
kubeadm join 192.168.17.100:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:260796226d…………
Note: the token is valid for 24 hours; once it expires, regenerate the join command on a master node with:
kubeadm token create --print-join-command
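If only the token was regenerated and the CA certificate hash is also needed, it can be recomputed from the cluster CA certificate. This is the standard kubeadm technique, shown here for convenience:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'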
On each newly joined master, change the kube-apiserver port, and then add the node to the SLB kube-apiserver backend pool.
# Change the kube-apiserver listening port
sed -i 's/6443/6333/g' /etc/kubernetes/manifests/kube-apiserver.yaml
# Restart the kube-apiserver container
docker restart `docker ps | grep k8s_kube-apiserver | awk '{print $1}'`
# Check the kube-apiserver listening port
ss -anp | grep "apiserver" | grep 'LISTEN'
Note: if you forget to change it, later deployments may fail, for example kube-prometheus:
[root@ymt-130 manifests]# kubectl -n monitoring logs pod/prometheus-operator-5bd99d6457-8dv29
ts=2020-08-27T07:00:51.38650537Z caller=main.go:199 msg="Starting Prometheus Operator version '0.34.0'."
ts=2020-08-27T07:00:51.38962086Z caller=main.go:96 msg="Staring insecure server on :8080"
ts=2020-08-27T07:00:51.39038717Z caller=main.go:315 msg="Unhandled error received. Exiting..." err="communicating with server failed: Get https://10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: connect: connection refused"
5) Deploy the network plugin and check cluster health
# Apply the prepared deployment manifest
kubectl apply -f kube-flannel.yaml
# Check the cluster status
kubectl get cs && kubectl get nodes && kubectl get pod --all-namespaces
# Check etcd cluster health (requires uploading the etcdctl binary)
[root@k8s101 ~]# etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --endpoints https://172.1.1.101:2379,https://172.1.1.102:2379,https://172.1.1.103:2379 --insecure-skip-tls-verify endpoint health
https://172.1.1.101:2379 is healthy: successfully committed proposal: took = 12.396169ms
https://172.1.1.102:2379 is healthy: successfully committed proposal: took = 12.718211ms
https://172.1.1.103:2379 is healthy: successfully committed proposal: took = 13.174164ms
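Since kube-proxy was switched to IPVS mode in kubeadm-config.yaml, it may also be worth confirming that IPVS is actually in use. The commands below are an assumption-level sanity check, not part of the original article:

# kube-proxy should log that the ipvs proxier is in use, and ipvsadm should list virtual servers for the service CIDR
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20 | grep -i ipvs
ipvsadm -Ln | head -n 20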
6) Tune the kubelet eviction policy
# Modify the kubelet startup parameters on the worker nodes to change the pod eviction policy
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="EVICTION_HARD=--eviction-hard=memory.available<2Gi,nodefs.available<5Gi,imagefs.available<100Gi"
Environment="EVICTION_RECLAIM=--eviction-minimum-reclaim=memory.available=0Mi,nodefs.available=1Gi,imagefs.available=2Gi"
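Defining these Environment lines alone is not enough: the drop-in's ExecStart must also reference the new variables, the same pattern the article uses later for $KUBELET_HOSTNAME. A sketch of the relevant part of 10-kubeadm.conf after the change (illustrative; verify against your actual file):

# Existing kubeadm drop-in ExecStart, with the two new variables appended at the end
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $EVICTION_HARD $EVICTION_RECLAIM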
# Restart the kubelet service and verify its process arguments
[root@k8s104 ~]# systemctl daemon-reload && systemctl restart kubelet
[root@k8s104 ~]# ps -ef | grep kubelet | grep -v grep
[root@k8s104 ~]# ps -ef | grep "/usr/bin/kubelet" | grep -v grep
root     24941     1  2 Aug27 ?  03:00:12 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.1 --eviction-hard=memory.available<2Gi,nodefs.available<5Gi,imagefs.available<100Gi --eviction-minimum-reclaim=memory.available=0Mi,nodefs.available=1Gi,imagefs.available=2Gi
More information: Kubelet handling of resource pressure
V. Deploying Functional Components
1) Deploy layer-7 routing with Ingress
# Deploy the Ingress controller and the forwarding rules for the base components
kubectl apply -f nginx-ingress
# Edit nginx-config to set the upstream addresses and the maximum number of connections
kubectl edit cm nginx-config -n nginx-ingress
# Adjust the externally exposed Ingress ports as needed, then configure the public-facing Alibaba Cloud SLB with all worker nodes as backends
More details: NGINX global configuration
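As an illustration of how the layer-7 routing is then consumed, a minimal Ingress resource for a hypothetical service could be applied as shown below. The host name, service name, and port are examples only and are not part of the article's manifests; networking.k8s.io/v1beta1 matches Kubernetes 1.17.

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: default
spec:
  rules:
  - host: demo.jhmy.com              # hypothetical host; resolve it to the SLB / worker nodes
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-service  # hypothetical ClusterIP service
          servicePort: 80
EOF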
2) Deploy the web UI (Dashboard)
# Apply the prepared deployment manifest
kubectl apply -f kube-dashboard.yml
# Wait for the deployment to finish
kubectl get pod -n kubernetes-dashboard
# Log in to the dashboard via its domain name; retrieve the token with the command below (configure local hosts/DNS resolution first)
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')
https://k8s.dashboard.com:IngressPort
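The token lookup above assumes a dashboard-admin service account already exists in kube-system. If your kube-dashboard.yml does not create one, a minimal sketch (names are assumptions) is:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin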
3) Deploy log collection with Filebeat
# Adjust the log paths to match, the logstash address, and the host directories
# Then apply the manifest
kubectl apply -f others/kube-filebeat.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
        - /home/ymt/logs/appdatamonitor/warn.log
    output.logstash:
      hosts: ["10.88.88.169:5044"]
---
# filebeat.config:
#   inputs:
#     # Mounted `filebeat-inputs` configmap:
#     path: ${path.config}/inputs.d/*.yml
#     # Reload inputs configs as they change:
#     reload.enabled: false
#   modules:
#     path: ${path.config}/modules.d/*.yml
#     # Reload module configs as they change:
#     reload.enabled: false
# To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
#filebeat.autodiscover:
#  providers:
#    - type: kubernetes
#      hints.enabled: true
# processors:
#   - add_cloud_metadata:
# cloud.id: ${ELASTIC_CLOUD_ID}
# cloud.auth: ${ELASTIC_CLOUD_AUTH}
# output.elasticsearch:
#   hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
#   username: ${ELASTICSEARCH_USERNAME}
#   password: ${ELASTICSEARCH_PASSWORD}
---
# apiVersion: v1
# kind: ConfigMap
# metadata:
#   name: filebeat-inputs
#   namespace: kube-system
#   labels:
#     k8s-app: filebeat
# data:
#   kubernetes.yml: |-
#     - type: docker
#       containers.ids:
#       - "*"
#       processors:
#         - add_kubernetes_metadata:
#             in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        # image: docker.elastic.co/beats/filebeat:6.7.2
        image: registry.cn-shanghai.aliyuncs.com/leozhanggg/elastic/filebeat:6.7.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        # env:
        # - name: ELASTICSEARCH_HOST
        #   value: elasticsearch
        # - name: ELASTICSEARCH_PORT
        #   value: "9200"
        # - name: ELASTICSEARCH_USERNAME
        #   value: elastic
        # - name: ELASTICSEARCH_PASSWORD
        #   value: changeme
        # - name: ELASTIC_CLOUD_ID
        #   value:
        # - name: ELASTIC_CLOUD_AUTH
        #   value:
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        # - name: inputs
        #   mountPath: /usr/share/filebeat/inputs.d
        #   readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: ymtlogs
          mountPath: /home/ymt/logs
          readOnly: true
        # - name: varlibdockercontainers
        #   mountPath: /var/lib/docker/containers
        #   readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: ymtlogs
        hostPath:
          path: /home/ymt/logs
      # - name: varlibdockercontainers
      #   hostPath:
      #     path: /var/lib/docker/containers
      # - name: inputs
      #   configMap:
      #     defaultMode: 0600
      #     name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
kube-filebeat.yaml
Note: since our Logstash and Elasticsearch are deployed outside the cluster, only Filebeat is deployed inside Kubernetes; it collects logs and ships them to the external Logstash.
4) Deploy the monitoring stack (Prometheus)
# Deploy the default components first
cd kube-prometheus-0.3.0/manifests
kubectl create -f setup && sleep 5 && kubectl create -f .
# Wait for the deployment to finish
kubectl get pod -n monitoring
# Then adjust the custom monitoring configuration and run the upgrade script
cd custom && sh upgrade.sh
* Alerting configuration: alertmanager.yaml
* Default alerting rules: prometheus-rules.yaml
* Additional alerting rules: prometheus-additional-rules.yaml
* Additional scrape targets: prometheus-additional.yaml   # adjust the scrape jobs and addresses (see the sketch below)
* Prometheus configuration: prometheus-prometheus.yaml    # adjust the replica count and resource limits
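For context, prometheus-additional.yaml in this layout is an additional-scrape-configs file that is turned into a Secret and referenced from prometheus-prometheus.yaml (the article's upgrade.sh presumably handles this). The sketch below only illustrates the mechanism; the job name, target address, and secret name are assumptions to be checked against your custom directory.

cat > prometheus-additional.yaml <<EOF
- job_name: 'external-node-exporter'        # placeholder scrape job
  static_configs:
  - targets: ['10.88.88.169:9100']          # placeholder address outside the cluster
EOF
kubectl -n monitoring create secret generic additional-scrape-configs \
  --from-file=prometheus-additional.yaml --dry-run -o yaml | kubectl apply -f -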
# Log in to the monitoring pages via their domain names (configure local hosts/DNS resolution first)
http://k8s.grafana.com:IngressPort    # the default username and password are both admin
http://k8s.prometheus.com:IngressPort
http://k8s.alertmanager.com:IngressPort
# Click the add button -> Import -> Upload .json file to import the monitoring dashboards.
* k8s-model.json
* node-model.json
For details, see: Kubernetes in Practice - Customizing Prometheus
VI. Other Notes
1) Using kubectl
# Set up kubectl command auto-completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
Official documentation: Kubernetes kubectl command reference
Blog post: a summary of common Kubernetes commands
2) Extend certificate validity
# Check certificate expiration
kubeadm alpha certs check-expiration
# Renew all certificates
kubeadm alpha certs renew all
# Restart the control-plane component containers on every master node
docker ps | \
    grep -E 'k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd' | \
    awk -F ' ' '{print $1}' | xargs docker restart
3) Remove a node from the cluster
# Mark the node to be removed as unschedulable
kubectl cordon k8s-node1
# Gracefully evict the pods running on the node to other nodes
kubectl drain k8s-node1 --delete-local-data --force
# Delete the node from the cluster
kubectl delete node k8s-node1
# On the removed node, reset the configuration
kubeadm reset
# Manually remove the remaining files as prompted
rm -rf /etc/cni/net.d
ipvsadm --clear
rm -rf /root/.kube/
# Stop the kubelet service
systemctl stop kubelet
# List the installed k8s packages
yum list installed | grep 'kube'
# Remove the k8s packages
yum remove kubeadm.x86_64 kubectl.x86_64 cri-tools.x86_64 kubernetes-cni.x86_64 kubelet.x86_64
4) Completely clean up node networking
# Reset the node
kubeadm reset -f
# Remove the configuration
rm -rf $HOME/.kube/config /etc/cni/net.d && ipvsadm --clear
# Stop kubelet and docker
systemctl stop kubelet && systemctl stop docker
# Remove the network configuration and leftover interfaces/routes
rm -rf /var/lib/cni/
ip link delete cni0
ip link delete flannel.1
ip link delete dummy0
ip link delete kube-ipvs0
# Restart docker, kubelet, and the network service
systemctl restart docker && systemctl restart kubelet && systemctl restart network
# Switching network plugins can sometimes leave a node with a wrong podCIDR; it can be patched manually
kubectl describe node k8s112 | grep PodCIDR
kubectl patch node k8s112 -p '{"spec":{"podCIDR":"10.233.0.0/16"}}'
5) Schedule applications onto master nodes
# Add a toleration for the master taint plus node affinity for the master role
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/master
          operator: Exists
Note: when deploying the Kubernetes dashboard, we sometimes found that opening it through a master node's address was very slow, while opening it through the node it actually ran on was smooth. Adding the configuration above to the dashboard deployment pins it to a master node, so access through the master address becomes smooth as well.
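One way to apply the snippet above without editing the original manifest is a deployment patch. The namespace and deployment name below follow the upstream dashboard defaults and should be verified against your kube-dashboard.yml; this is a sketch, not the article's exact procedure.

kubectl -n kubernetes-dashboard patch deployment kubernetes-dashboard --patch '
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists'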
6) Rename a Kubernetes node
# On a self-managed K8S cluster on Alibaba Cloud, connections to the apiserver may fail because DNS resolution
# produces overly long lookup requests; this can be worked around by overriding the node name.
hostname ymt-140
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_HOSTNAME=--hostname-override=ymt-140"
# and append $KUBELET_HOSTNAME to the kubelet ExecStart line
systemctl daemon-reload && systemctl restart kubelet && ps -ef | grep /usr/bin/kubelet | grep -v grep
journalctl -xe -u kubelet
7) Deployment log
[root@k8s101 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
W0819 09:24:09.326568   28880 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0819 09:24:09.326626   28880 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s101 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8sapi] and IPs [10.96.0.1 172.1.1.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s101 localhost] and IPs [172.1.1.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s101 localhost] and IPs [172.1.1.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0819 09:24:14.028737   28880 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0819 09:24:14.029728   28880 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.502551 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: 8782750a5ffd83f0fdbe635eced5e6b1fc4acd73a2a13721664494170a154a01
[mark-control-plane] Marking the node k8s101 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s101 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zwx051.085210868chiscdc
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8sapi:6333 --token zwx051.085210868chiscdc \
    --discovery-token-ca-cert-hash sha256:de4d9a37423fecd5313a76d99ad60324cdb0ca6a38254de549394afa658c98b2 \
    --control-plane --certificate-key 8782750a5ffd83f0fdbe635eced5e6b1fc4acd73a2a13721664494170a154a01

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8sapi:6333 --token zwx051.085210868chiscdc \
    --discovery-token-ca-cert-hash sha256:de4d9a37423fecd5313a76d99ad60324cdb0ca6a38254de549394afa658c98b2

[root@k8s102 ~]# kubeadm join k8sapi:6333 --token zwx051.085210868chiscdc \
>     --discovery-token-ca-cert-hash sha256:de4d9a37423fecd5313a76d99ad60324cdb0ca6a38254de549394afa658c98b2 \
>     --control-plane --certificate-key 8782750a5ffd83f0fdbe635eced5e6b1fc4acd73a2a13721664494170a154a01
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s101 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8sapi] and IPs [10.96.0.1 172.1.1.102]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s101 localhost] and IPs [172.1.1.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s101 localhost] and IPs [172.1.1.102 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0819 10:31:17.604671    4058 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0819 10:31:17.612645    4058 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0819 10:31:17.613524    4058 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-08-19T10:31:31.039+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.1.1.102:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s101 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s101 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

kubeadm-init.log
Author: Leozhanggg
Source: https://www.cnblogs.com/leozhanggg/p/13522155.html
Copyright of this article is shared by the author and cnblogs. Reposting is welcome, but this notice must be kept without the author's consent and a link to the original article must be given in a prominent position on the page; otherwise the right to pursue legal liability is reserved.