
Kubernetes Basics

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. Kubernetes has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.

1. Introduction
Kubernetes (K8S for short) is a leading distributed platform built on container technology. It is an open-source container cluster management system from Google, whose design was inspired by Borg, Google's internal container management system, and it inherits more than a decade of Google's experience running container clusters. It provides containerized applications with a complete set of capabilities, including deployment and execution, resource scheduling, service discovery, and dynamic scaling, greatly simplifying the management of large container clusters.

Kubernetes is a complete platform for supporting distributed systems. It offers comprehensive cluster management capabilities: multi-layered security and admission control, multi-tenant application support, transparent service registration and discovery, a built-in intelligent load balancer, strong failure detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduling mechanism, and fine-grained resource quota management.

For cluster management, Kubernetes divides the machines in a cluster into a Master node and a group of worker nodes (Nodes). The Master runs the cluster-management processes kube-apiserver, kube-controller-manager, and kube-scheduler, which together implement resource management, Pod scheduling, elastic scaling, security control, system monitoring, and error correction for the entire cluster, all fully automatically. Nodes are the worker machines that run the actual applications; the smallest unit of execution that Kubernetes manages on a Node is the Pod. Each Node runs the kubelet and kube-proxy service processes, which are responsible for creating, starting, monitoring, restarting, and destroying Pods, and for implementing a software-mode load balancer.

Kubernetes solves the two classic problems of traditional IT systems: service scaling and service upgrades. If today's software were not particularly complex and did not have to carry much peak traffic, deploying a backend project would only require installing a few simple dependencies on a virtual machine, compiling the project, and running it. But as software grows more complex, a complete backend is no longer a single monolithic service but a collection of services with different responsibilities and functions. The complex topology between services, together with performance requirements that a single machine can no longer satisfy, makes deployment and operations very complicated, and deploying and operating large clusters becomes an urgent need.

The arrival of Kubernetes has not only dominated the container-orchestration market; it has also changed how operations work is done. It blurs the boundary between development and operations while making the DevOps role clearer: every software engineer can use Kubernetes to define the topology between services, the number of online nodes, and resource usage, and can quickly perform operations that used to be complex, such as horizontal scaling and blue-green deployments.

2. Architecture
Kubernetes follows a very traditional client-server architecture. Clients communicate with a Kubernetes cluster either through its RESTful API or via kubectl; in practice there is little difference between the two, since kubectl is just a wrapper around the RESTful API that Kubernetes exposes. Every Kubernetes cluster consists of a set of Master nodes and a series of Worker nodes. The Master nodes are mainly responsible for storing the cluster's state and for allocating and scheduling resources for Kubernetes objects.

Master

The Master receives client requests, arranges container execution, and runs control loops that migrate the cluster's state toward the target state. A Master node consists of three components:
API Server
Handles requests from users. Its main job is to expose the RESTful API, including read requests that inspect cluster state and write requests that change it; it is also the only component that communicates with the etcd cluster.

Controller
The controller manager runs a series of controller processes that continuously reconcile the objects in the cluster toward the state users have declared. When a service's state changes, the relevant controller notices the change and starts migrating it toward the target state.
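As a hedged illustration of this declarative model (all names here are hypothetical, not from this document), a Deployment manifest states a desired state that the Deployment controller then reconciles toward:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-nginx          # hypothetical name, for illustration only
spec:
  replicas: 3               # desired state: the controller keeps 3 Pods running
  selector:
    matchLabels:
      app: demo-nginx
  template:
    metadata:
      labels:
        app: demo-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19
```

If one of the Pods dies, the controller observes the drift from `replicas: 3` and creates a replacement, with no operator intervention.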

Scheduler
The scheduler selects the Worker node on which each Pod in the cluster will run. It picks the node that best satisfies the Pod's requirements, and it runs every time a Pod needs to be scheduled.
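A minimal sketch (with hypothetical names) of the information the scheduler works from: the resource requests and node constraints in a Pod spec are matched against each node's free capacity and labels:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical name, for illustration only
spec:
  containers:
    - name: app
      image: nginx:1.19
      resources:
        requests:           # the scheduler only places this Pod on a node
          cpu: "500m"       # with at least 0.5 CPU and 256Mi of memory free
          memory: "256Mi"
  nodeSelector:
    disktype: ssd           # optional constraint: only nodes carrying this label qualify
```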

Node

Node components are comparatively simple, consisting mainly of kubelet and kube-proxy:

kubelet is the primary service on a node. It periodically receives new or modified Pod specifications from the API Server and ensures that the Pods and their containers on the node run correctly and converge toward the target state; it also reports the host's health status back to the Master.

kube-proxy manages the host's subnet and exposes services to the outside world; it works by forwarding requests across isolated networks to the correct Pod or container.
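As a hedged sketch of what kube-proxy implements (names here are hypothetical), a Service puts a stable virtual IP and port in front of a set of Pods, and kube-proxy programs the forwarding rules that route traffic to them:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc            # hypothetical name, for illustration only
spec:
  selector:
    app: demo-nginx         # traffic is load-balanced across Pods with this label
  ports:
    - port: 80              # stable in-cluster port
      targetPort: 80        # container port on each backend Pod
  type: NodePort            # additionally expose the service on every node's IP
```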

Kubernetes architecture diagram

In this system architecture diagram, services are divided into those running on worker nodes and those that make up the cluster-level control plane.

Kubernetes is composed of the following core components:

etcd stores the state of the entire cluster;
apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration, and discovery;
controller manager maintains the cluster's state, handling failure detection, auto-scaling, rolling updates, and so on;
scheduler handles resource scheduling, placing Pods on the appropriate machines according to the configured scheduling policies;
kubelet maintains container lifecycles and also manages volumes (CVI) and networking (CNI);
Container runtime manages images and performs the actual execution of Pods and containers (CRI);
kube-proxy provides in-cluster service discovery and load balancing for Services;

Besides the core components, several add-ons are recommended:

kube-dns provides DNS for the whole cluster
Ingress Controller provides an external entry point for services
Heapster provides resource monitoring
Dashboard provides a GUI
Federation provides clusters across availability zones
Fluentd-elasticsearch provides cluster log collection, storage, and querying

3. Installation
There are two ways to deploy Kubernetes. The first is from binaries: customizable, but the deployment is complex and error-prone. The second is with the kubeadm tool: simple to deploy, but not customizable.
Binary installation
Environment preparation

Hostname               IP              Role
kubernetes-master-01   172.26.203.203  Master-01
kubernetes-master-02   172.26.203.199  Master-02
kubernetes-master-03   172.26.203.204  Master-03
kubernetes-node-01     172.26.203.202  Node-01
kubernetes-node-02     172.26.203.200  Node-02
kubernetes-master-vip  172.26.203.201  Master-vip

Upgrade the kernel

wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-ml{,-devel}-5.8.3-1.el7.elrepo.x86_64.rpm
yum install -y kernel-ml{,-devel}-5.8.3-1.el7.elrepo.x86_64.rpm

Set the default kernel

cat /boot/grub2/grub.cfg |grep menuentry
grub2-set-default "CentOS Linux (5.8.3-1.el7.elrepo.x86_64) 7 (Core)"

Verify the change, then reboot

grub2-editenv list
reboot

Enable IPVS support

After confirming the kernel version, enable IPVS:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

Set iptables-related kernel parameters
echo """
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
""" > /etc/sysctl.conf
sysctl -p

Synchronize time
crontab -e # add a cron job
*/5 * * * * ntpdate ntp.aliyun.com > /dev/null

Distribute SSH keys (generate a key pair with ssh-keygen first if you have none)
for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03 kubernetes-node-01 kubernetes-node-02 kubernetes-master-vip; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i ;
done

Issue certificates
Install the signing tools
chmod +x cfssl cfssljson
mv cfssl cfssljson /usr/local/bin/

Components that need certificates:
admin user
kubelet
kube-controller-manager
kube-proxy
kube-scheduler
kube-apiserver

Create the CA configuration
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

CA certificate signing request
cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shanghai",
      "O": "Organization",
      "OU": "Organizational Unit",
      "ST": "Organization alias"
    }
  ]
}
EOF

Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Generate the admin user certificate
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shanghai",
      "O": "Organization",
      "OU": "Organizational Unit",
      "ST": "Organization alias"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

kubelet certificates

kubernetes-node-01

cat > kubernetes-node-01-csr.json <<EOF
{
  "CN": "system:node:kubernetes-node-01",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shanghai",
      "O": "Organization",
      "OU": "Organizational Unit",
      "ST": "Organization alias"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=kubernetes-node-01,172.26.203.202 \
  -profile=kubernetes \
  kubernetes-node-01-csr.json | cfssljson -bare kubernetes-node-01

kubernetes-node-02

cat > kubernetes-node-02-csr.json <<EOF
{
  "CN": "system:node:kubernetes-node-02",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shanghai",
      "O": "Organization",
      "OU": "Organizational Unit",
      "ST": "Organization alias"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=kubernetes-node-02,172.26.203.200 \
  -profile=kubernetes \
  kubernetes-node-02-csr.json | cfssljson -bare kubernetes-node-02

Controller Manager client certificate
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shanghai",
      "O": "Organization",
      "OU": "Organizational Unit",
      "ST": "Organization alias"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Kube Proxy client certificate
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shanghai",
      "O": "Organization",
      "OU": "Organizational Unit",
      "ST": "Organization alias"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy

Scheduler client certificate
cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shanghai",
      "O": "Organization",
      "OU": "Organizational Unit",
      "ST": "Organization alias"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Kubernetes API certificate
CERT_HOSTNAME=10.32.0.1,172.26.203.203,kubernetes-master-01,172.26.203.199,kubernetes-master-02,172.26.203.204,kubernetes-master-03,172.26.203.202,kubernetes-node-01,172.26.203.200,kubernetes-node-02,172.26.203.201,kubernetes-master-vip,127.0.0.1,localhost,kubernetes.default
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shanghai",
      "O": "Organization",
      "OU": "Organizational Unit",
      "ST": "Organization alias"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${CERT_HOSTNAME} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes

Service account certificate
cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shanghai",
      "O": "Organization",
      "OU": "Organizational Unit",
      "ST": "Organization alias"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account

Copy the certificates to each node.

Create kubeconfig files
A kubeconfig is commonly used between Kubernetes components, and between users and Kubernetes.

Entity    Explanation
Cluster   The API server's address and its base64-encoded certificate
User      User-related information, such as the authenticating username with its certificate and key, or a service account's token
Context   A reference to a cluster and a user; when you have multiple clusters and users, contexts make switching between them very convenient
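A hedged sketch of how these three entities appear together in a kubeconfig file (the cluster name and server address follow this document's setup; the certificate data is elided):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: kubernetes-the-hard-way
    cluster:
      server: https://172.26.203.201:6443
      certificate-authority-data: <base64 CA certificate>
users:
  - name: admin
    user:
      client-certificate-data: <base64 client certificate>
      client-key-data: <base64 client key>
contexts:
  - name: default
    context:
      cluster: kubernetes-the-hard-way   # context ties a cluster...
      user: admin                        # ...to a user
current-context: default
```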

Generate the kubelet kubeconfigs
chmod +x kubectl
cp kubectl /usr/local/bin/
for instance in kubernetes-node-01 kubernetes-node-02; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://172.26.203.201:6443 \
    --kubeconfig=${instance}.kubeconfig
  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig
  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig
  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done

Generate the kube-proxy kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://172.26.203.201:6443 \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Generate the kube-controller-manager kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

Generate the kube-scheduler kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

Generate the admin kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=admin \
  --kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig

Copy the kubeconfigs to each node
for i in kubernetes-node-02 kubernetes-node-01; do
  scp $i.kubeconfig kube-proxy.kubeconfig root@$i:/root/
done
for i in kubernetes-master-03 kubernetes-master-02 kubernetes-master-01 ; do
  scp etcd-v3.4.10-linux-amd64.tar.gz root@$i:/opt/
done

Create the encryption config
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
for i in kubernetes-master-03 kubernetes-master-02 kubernetes-master-01 ; do
  scp encryption-config.yaml root@$i:/opt/
done

Deploy the etcd cluster
wget https://mirrors.huaweicloud.com/etcd/v3.4.10/etcd-v3.4.10-linux-amd64.tar.gz
tar zxvf etcd-v3.4.10-linux-amd64.tar.gz
mv etcd-v3.4.10-linux-amd64/etcd* /usr/local/bin/
mkdir -p /etc/etcd /var/lib/etcd
cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
ETCD_NAME=$(hostname)
INTERNAL_IP=$(hostname -i)
INITIAL_CLUSTER=kubernetes-master-01=https://172.26.203.203:2380,kubernetes-master-02=https://172.26.203.199:2380,kubernetes-master-03=https://172.26.203.204:2380
cat << EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \
--name ${ETCD_NAME} \
--cert-file=/etc/etcd/kubernetes.pem \
--key-file=/etc/etcd/kubernetes-key.pem \
--peer-cert-file=/etc/etcd/kubernetes.pem \
--peer-key-file=/etc/etcd/kubernetes-key.pem \
--trusted-ca-file=/etc/etcd/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \
--listen-peer-urls https://${INTERNAL_IP}:2380 \
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \
--advertise-client-urls https://${INTERNAL_IP}:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster ${INITIAL_CLUSTER} \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Start etcd

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Deploy the Master nodes

Install the required binaries
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

Kubernetes API server configuration
mkdir -p /var/lib/kubernetes/
mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
  service-account-key.pem service-account.pem \
  encryption-config.yaml /var/lib/kubernetes/

Configure and start kube-apiserver

CONTROLLER0_IP=172.26.203.203
CONTROLLER1_IP=172.26.203.199
CONTROLLER2_IP=172.26.203.204
INTERNAL_IP=$(hostname -i)
cat << EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--advertise-address=${INTERNAL_IP} \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=0.0.0.0 \
--client-ca-file=/var/lib/kubernetes/ca.pem \
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--etcd-cafile=/var/lib/kubernetes/ca.pem \
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
--etcd-servers=https://$CONTROLLER0_IP:2379,https://$CONTROLLER1_IP:2379,https://$CONTROLLER2_IP:2379 \
--event-ttl=1h \
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
--kubelet-https=true \
--runtime-config=api/all=true \
--service-account-key-file=/var/lib/kubernetes/service-account.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
--v=2 \
--kubelet-preferred-address-types=InternalIP,InternalDNS,Hostname,ExternalIP,ExternalDNS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Create the kube-controller-manager service file
mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--address=0.0.0.0 \
--cluster-cidr=10.100.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
--leader-elect=true \
--root-ca-file=/var/lib/kubernetes/ca.pem \
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--use-service-account-credentials=true \
--allocate-node-cidrs=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Configure kube-scheduler
mv kube-scheduler.kubeconfig /var/lib/kubernetes/
mkdir -p /etc/kubernetes/config
cat <<EOF | tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
cat <<EOF | tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--config=/etc/kubernetes/config/kube-scheduler.yaml \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Start the services
$ systemctl daemon-reload
$ systemctl enable kube-apiserver kube-controller-manager kube-scheduler
$ systemctl start kube-apiserver kube-controller-manager kube-scheduler

HTTP health check
kubectl get componentstatuses --kubeconfig admin.kubeconfig
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

kubelet authorization
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF

Authorize the API server to the kubelet
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

Configure the load balancer (on kubernetes-master-vip)
yum install haproxy -y
cat <<EOF | tee /etc/haproxy/haproxy.cfg
frontend k8s-api
    bind 172.26.203.201:6443
    bind 172.26.203.201:443
    mode tcp
    option tcplog
    default_backend k8s-api
backend k8s-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-api-1 172.26.203.203:6443 check
    server k8s-api-2 172.26.203.199:6443 check
    server k8s-api-3 172.26.203.204:6443 check
EOF

Start the service

systemctl start haproxy
systemctl enable haproxy

If everything is configured correctly, you should see output like the following:

curl --cacert ca.pem https://172.26.203.201:6443/version
{
"major": "1",
"minor": "17",
"gitVersion": "v1.17.0",
"gitCommit": "70132b0f130acc0bed193d9ba59dd186f0e634cf",
"gitTreeState": "clean",
"buildDate": "2019-12-07T21:12:17Z",
"goVersion": "go1.13.4",
"compiler": "gc",
"platform": "linux/amd64"
}

Deploy the Node nodes

Install Docker

Step 1: Install the required system tools

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Step 2: Add the repository

sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Step 3: Refresh the cache and install Docker CE

sudo yum makecache fast
sudo yum -y install docker-ce

Step 4: Start the Docker service

sudo service docker start

Configure kubelet
mkdir -p /var/lib/kubelet
mkdir -p /var/lib/kubernetes
mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
mv ca.pem /var/lib/kubernetes/

Create the kubelet configuration file
cat <<EOF | tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "10.100.0.0/16"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF

Create the kubelet service file
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet \
--config=/var/lib/kubelet/kubelet-config.yaml \
--docker=unix:///var/run/docker.sock \
--docker-endpoint=unix:///var/run/docker.sock \
--image-pull-progress-deadline=2m \
--network-plugin=cni \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--register-node=true \
--cgroup-driver=systemd \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Configure kube-proxy
mkdir /var/lib/kube-proxy -p
mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: ""
clusterCIDR: "10.100.0.0/16"
EOF

kube-proxy service file
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Start the kubelet and kube-proxy services and verify
systemctl daemon-reload
systemctl enable kubelet kube-proxy
systemctl start kubelet kube-proxy

Deploy the network plugin
Install the CNI plugins
mkdir /opt/cni/bin /etc/cni/net.d -p
cd /opt/cni/bin
wget https://github.com/containernetworking/plugins/releases/download/v0.8.3/cni-plugins-linux-amd64-v0.8.3.tgz
tar zxvf cni-plugins-linux-amd64-v0.8.3.tgz -C /opt/cni/bin

Install the network add-on

Run on a master node:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Deploy the DNS add-on
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml

Verify the status of the DNS pods
kubectl get pods -l k8s-app=kube-dns -n kube-system
kubeadm installation
Master node installation
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl

Node node installation
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl

Enable the services at boot
systemctl enable docker.service kubelet.service

Initialize the cluster
kubeadm init \
  --image-repository=registry.aliyuncs.com/google_containers \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

If initialization succeeds, kubeadm prints follow-up instructions and a node join command.

Run the commands from the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config