
Binary Deployment of Kubernetes v1.16.1 (Single Master Node)

I. Initialize the System and Global Variables

1) Cluster plan

OS          hostname  memory  CPU      Role
CentOS 7.5  k8s-01    4G      4 cores  master
CentOS 7.5  k8s-02    4G      4 cores  master
CentOS 7.5  k8s-03    4G      4 cores  master
CentOS 7.5  k8s-04    4G      4 cores  node

Unless otherwise specified, run the following steps on all nodes!

2) Configure the hosts file and passwordless SSH login

This step only needs to be performed on the k8s-01 node!

$ cat >> /etc/hosts <<EOF
192.168.1.1  k8s-01
192.168.1.2  k8s-02
192.168.1.3  k8s-03
192.168.1.4  k8s-04
EOF
$ yum -y install expect
$ vim mianmi.sh
#! /usr/bin/env bash
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
for i in k8s-01 k8s-02 k8s-03 k8s-04;do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
        expect {
                \"*yes/no*\" {send \"yes\r\"; exp_continue}
                \"*password*\" {send \"123456\r\"; exp_continue}
                \"*Password*\" {send \"123456\r\";}
        } "
done 
$ sh mianmi.sh
# The password used here is 123456; replace it with the root password of your own hosts
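
After the script runs, it is worth confirming that passwordless login actually works before moving on. A minimal check, assuming the four host names above:

$ for host in k8s-01 k8s-02 k8s-03 k8s-04
  do
    # should print the remote hostname without asking for a password
    ssh -o BatchMode=yes root@${host} "hostname"
  done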

3) Update the PATH variable

$ echo 'PATH=/opt/k8s/bin:$PATH' >> /root/.bashrc
$ source /root/.bashrc
  • The /opt/k8s/bin directory stores the programs downloaded and installed in this document;

4) Configure yum repositories and install common tools

$ wget -O /etc/yum.repos.d/CentOS-Base.repo  http://mirrors.aliyun.com/repo/Centos-7.repo
$ wget -O /etc/yum.repos.d/epel.repo  http://mirrors.aliyun.com/repo/epel-7.repo
$ yum clean all
$ yum makecache
$ yum install -y chrony conntrack ipvsadm ipset jq iptables curl lrzsz sysstat libseccomp socat git
  • kube-proxy in this document runs in IPVS mode; ipvsadm is the management tool for IPVS;
  • The machines in the etcd cluster need synchronized clocks; chrony is used for system time synchronization;

5) Disable the firewall, SELinux, and swap

  • Stop the firewall, clear its rules, and set the default forwarding policy;
  • Disable SELinux, otherwise kubelet may report Permission denied when mounting directories;
  • If a swap partition is enabled, kubelet will fail to start (this can be overridden by setting the --fail-swap-on flag to false);
$ systemctl stop firewalld && systemctl disable firewalld
$ sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config  
$ iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
$ iptables -P FORWARD ACCEPT
$ swapoff -a
$ sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab 

6) Tune kernel parameters

$ cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
net.ipv4.neigh.default.gc_thresh1=1024
net.ipv4.neigh.default.gc_thresh2=2048
net.ipv4.neigh.default.gc_thresh3=4096
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
$ cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf
$ sysctl -p /etc/sysctl.d/kubernetes.conf

Disable tcp_tw_recycle; otherwise it conflicts with NAT and can make services unreachable.
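
Note that the net.bridge.bridge-nf-call-* keys above only exist when the br_netfilter kernel module is loaded; if sysctl -p complains that they are missing, loading the module first (a small addition to the original steps) should resolve it:

$ modprobe br_netfilter
$ echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # load the module automatically after reboots
$ sysctl -p /etc/sysctl.d/kubernetes.conf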

7) Synchronize system time

$ timedatectl set-timezone Asia/Shanghai
# Set the system time zone
$ systemctl start chronyd && systemctl enable chronyd
$ timedatectl status     # Check the synchronization status
$ timedatectl set-local-rtc 0
# Keep the hardware clock in UTC (write the current UTC time to the RTC)
$ systemctl restart rsyslog && systemctl restart crond
# Restart services that depend on the system time
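
To confirm that chrony is actually tracking an upstream time source, the standard chrony client commands can be used on each node:

$ chronyc sources -v    # list the configured time sources and their reachability
$ chronyc tracking      # show the current offset from the reference clock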

8) Create the required directories

$ mkdir -p /opt/k8s/{bin,work} /etc/{kubernetes,etcd}/cert

9) Distribute the cluster configuration script

The environment variables used later are all defined in environment.sh. Modify them according to your own machines and network, then copy the file to all nodes:

$ vim environment.sh
#!/usr/bin/env bash

# Encryption key used to generate the EncryptionConfig
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# Array of the cluster machines' IPs
export NODE_IPS=(192.168.1.1 192.168.1.2 192.168.1.3)

# Array of the host names corresponding to the cluster IPs
export NODE_NAMES=(k8s-01 k8s-02 k8s-03)

# List of etcd cluster service endpoints
export ETCD_ENDPOINTS="https://192.168.1.1:2379,https://192.168.1.2:2379,https://192.168.1.3:2379"

# IPs and ports used for communication between etcd cluster members
export ETCD_NODES="k8s-01=https://192.168.1.1:2380,k8s-02=https://192.168.1.2:2380,k8s-03=https://192.168.1.3:2380"

# Address and port of the kube-apiserver reverse proxy (kube-nginx)
export KUBE_APISERVER="https://127.0.0.1:8443"

# Name of the network interface used for inter-node communication
export IFACE="ens33"

# etcd data directory
export ETCD_DATA_DIR="/data/k8s/etcd/data"

# etcd WAL directory; an SSD partition, or at least a different partition from ETCD_DATA_DIR, is recommended
export ETCD_WAL_DIR="/data/k8s/etcd/wal"

# Data directory for the k8s components
export K8S_DIR="/data/k8s/k8s"

## Choose either DOCKER_DIR or CONTAINERD_DIR
# docker data directory
export DOCKER_DIR="/data/k8s/docker"

# containerd data directory
export CONTAINERD_DIR="/data/k8s/containerd"

## The parameters below usually do not need to be changed

# Token used for TLS Bootstrapping; it can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN=`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`

# Preferably use currently unused address ranges for the Service and Pod networks

# Service network; unroutable before deployment, routable inside the cluster afterwards (guaranteed by kube-proxy)
SERVICE_CIDR="10.254.0.0/16"

# Pod network; a /16 range is recommended; unroutable before deployment, routable inside the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.30.0.0/16"

# Service port range (NodePort Range)
export NODE_PORT_RANGE="30000-32767"

# kubernetes service IP (usually the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"

# Cluster DNS domain (without a trailing dot)
export CLUSTER_DNS_DOMAIN="cluster.local"

# Add the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:$PATH

$ source environment.sh

$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp environment.sh root@${node_ip}:/opt/k8s/bin/
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done
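
Optionally, a quick sanity check (using only the variables defined above) confirms the script was copied intact and is loadable on every node:

$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "source /opt/k8s/bin/environment.sh && echo \${ETCD_ENDPOINTS}"
  done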

10) Upgrade the kernel

The 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable, for example:

  1. Newer Docker versions (1.13 and later) enable the kernel memory accounting feature that is only experimentally supported in the 3.10 kernel (and cannot be turned off); under pressure, e.g. when containers are frequently started and stopped, this causes cgroup memory leaks;
  2. A network device reference-count leak, which leads to errors like: "kernel:unregister_netdevice: waiting for eth0 to become free. Usage count = 1";

Possible solutions:

  1. Upgrade the kernel to 4.4.x or later;
  2. Or compile the kernel manually with the CONFIG_MEMCG_KMEM feature disabled;
  3. Or install Docker 18.09.1 or later, which fixes the issue. But since kubelet also sets kmem (it vendors runc), kubelet would have to be recompiled with GOFLAGS="-tags=nokmem";
$ rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
$ yum --enablerepo=elrepo-kernel install -y kernel-lt
# After installation, check that the corresponding kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!
$ grub2-set-default 0
# Boot from the new kernel by default

After the above is done, reboot:

$ sync
$ reboot
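
Once the node is back up, the running kernel version can be checked; it should now report the newly installed long-term (4.4.x or later) kernel:

$ uname -r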

II. Create the CA Root Certificate and Key

To ensure security, the Kubernetes components use x509 certificates to encrypt and authenticate their communication.

The CA (Certificate Authority) is a self-signed root certificate used to sign the other certificates created later.

The CA certificate is shared by all nodes in the cluster; it only needs to be created once and is then used to sign all other certificates.

This document uses CloudFlare's PKI toolkit cfssl to create all certificates.

Note: unless otherwise specified, all operations in this document are performed on the k8s-01 node.

1) Install the cfssl toolset

$ wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64 -O /opt/k8s/bin/cfssl
$ wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64 -O /opt/k8s/bin/cfssljson
$ wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl-certinfo_1.4.1_linux_amd64 -O /opt/k8s/bin/cfssl-certinfo
$ chmod +x /opt/k8s/bin/*
$ export PATH=/opt/k8s/bin:$PATH
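
A quick check confirms the three tools are on PATH and executable (output omitted here):

$ ls -l /opt/k8s/bin/cfssl*
$ cfssl version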

2) Create the CA configuration file

The CA configuration file defines the usage scenarios (profiles) and parameters (usages, expiry, server auth, client auth, encryption, etc.) of the root certificate:

$ cd /opt/k8s/work
$ cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
  • signing: the certificate can be used to sign other certificates (CA=TRUE in the generated ca.pem);
  • server auth: a client can use this certificate to verify the certificate presented by a server;
  • client auth: a server can use this certificate to verify the certificate presented by a client;
  • "expiry": "876000h": the certificate is valid for 100 years;

3) Create the certificate signing request (CSR) file

$ cd /opt/k8s/work
$ cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-ca",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ],
  "ca": {
    "expiry": "876000h"
 }
}
EOF
  • CN (Common Name): kube-apiserver extracts this field from the certificate as the requesting User Name; browsers use this field to check whether a website is legitimate;
  • O (Organization): kube-apiserver extracts this field from the certificate as the group (Group) the requesting user belongs to;
  • kube-apiserver uses the extracted User and Group as the identity for RBAC authorization;

Note:

  1. The combination of CN, C, ST, L, O and OU in the CSR files of different certificates must differ, otherwise a PEER'S CERTIFICATE HAS AN INVALID SIGNATURE error may occur;
  2. In the CSR files created later, the CN values all differ (while C, ST, L, O and OU stay the same) so the certificates can be told apart;

4) Generate the CA certificate and key

$ cd /opt/k8s/work
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*pem
ca-key.pem  ca.pem
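
Optionally, inspect the generated CA with cfssl-certinfo to confirm the subject and the 100-year expiry look as expected:

$ cfssl-certinfo -cert ca.pem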

5) Distribute the certificate files

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
    scp ca*.pem ca-config.json root@${node_ip}:/etc/kubernetes/cert
  done

III. Install and Configure kubectl

Note:

  1. Unless otherwise specified, all operations in this document are performed on the k8s-01 node;
  2. This only needs to be done once; the generated kubeconfig file is generic and can be copied to ~/.kube/config on any machine that needs to run kubectl commands;

1) Download and distribute the kubectl binary

$ cd /opt/k8s/work
$ wget https://dl.k8s.io/v1.16.6/kubernetes-client-linux-amd64.tar.gz # you may need a proxy/VPN to download this
$ tar -xzvf kubernetes-client-linux-amd64.tar.gz

Distribute it to all nodes that will use the kubectl tool:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kubernetes/client/bin/kubectl root@${node_ip}:/opt/k8s/bin/
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

2) Create the admin certificate and private key

kubectl communicates with kube-apiserver securely over https, and kube-apiserver authenticates and authorizes the certificate carried by kubectl requests.

kubectl will be used for cluster administration, so an admin certificate with the highest privileges is created here.

Create the certificate signing request:

$ cd /opt/k8s/work
$ cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "opsnull"
    }
  ]
}
EOF
  • O: system:masters: when kube-apiserver receives a request using this certificate, it adds the group (Group) identity system:masters to the request;
  • The predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants the highest privileges needed to operate the cluster;
  • This certificate is only used by kubectl as a client certificate, so the hosts field is empty;

Generate the certificate and private key:

$ cd /opt/k8s/work
$ cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*pem
admin-key.pem  admin.pem

Ignore the warning message [WARNING] This certificate lacks a "hosts" field.
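
If desired, the certificate subject can be checked to confirm CN=admin and O=system:masters; a small check using openssl (not part of the original steps):

$ openssl x509 -in admin.pem -noout -subject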

3) Create the kubeconfig file

kubectl uses a kubeconfig file to access the apiserver; the file contains the kube-apiserver address and authentication information (the CA certificate and the client certificate):

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=https://${NODE_IPS[0]}:6443 \
  --kubeconfig=kubectl.kubeconfig
# Set cluster parameters

$ kubectl config set-credentials admin \
  --client-certificate=/opt/k8s/work/admin.pem \
  --client-key=/opt/k8s/work/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig
# Set client authentication parameters

$ kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig
# Set context parameters

$ kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
# Set the default context

  • --certificate-authority: the root certificate used to verify the kube-apiserver certificate;
  • --client-certificate and --client-key: the admin certificate and private key just generated, used for https communication with kube-apiserver;
  • --embed-certs=true: embed the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig file (otherwise only the certificate file paths are written, and the certificate files would have to be copied separately whenever the kubeconfig is copied to another machine, which is inconvenient);
  • --server: the kube-apiserver address; here it points to the service on the first node;

4) Distribute the kubeconfig file

Distribute it to all nodes that will run kubectl commands:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ~/.kube"
    scp kubectl.kubeconfig root@${node_ip}:~/.kube/config
  done

IV. Set Up the etcd Cluster

This section describes the steps to deploy a three-node, highly available etcd cluster:

  • download and distribute the etcd binaries;
  • create x509 certificates for the etcd cluster nodes, used to encrypt communication between clients (such as etcdctl) and the etcd cluster, and between etcd members;
  • create the etcd systemd unit file and configure the service parameters;
  • check that the cluster works;

The etcd cluster node names and IPs are:

  • k8s-01: 192.168.1.1
  • k8s-02: 192.168.1.2
  • k8s-03: 192.168.1.3

Note:

  1. Unless otherwise specified, all operations in this document are performed on the k8s-01 node;
  2. flanneld is not compatible with the etcd v3.4.x installed here; if you want to install flanneld (this document uses Calico), downgrade etcd to v3.3.x;

1) Download and distribute the etcd binaries

Download the latest release from the etcd releases page:

$ cd /opt/k8s/work
$ wget https://github.com/coreos/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz
$ tar -xvf etcd-v3.4.3-linux-amd64.tar.gz

Distribute the binaries to all cluster nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp etcd-v3.4.3-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

2) Create the etcd certificate and private key

Create the certificate signing request:

$ cd /opt/k8s/work
$ cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.1.1",
    "192.168.1.2",
    "192.168.1.3"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ]
}
EOF

hosts: the list of etcd node IPs authorized to use this certificate; all etcd cluster node IPs must be listed here.

Generate the certificate and private key:

$ cd /opt/k8s/work
$ cfssl gencert -ca=/opt/k8s/work/ca.pem \
    -ca-key=/opt/k8s/work/ca-key.pem \
    -config=/opt/k8s/work/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
$ ls etcd*pem
etcd-key.pem  etcd.pem

Distribute the generated certificate and private key to each etcd node:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
    scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
  done

3) Create the etcd systemd unit template file

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=${ETCD_DATA_DIR} \\
  --wal-dir=${ETCD_WAL_DIR} \\
  --name=##NODE_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##NODE_IP##:2380 \\
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##NODE_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

  • WorkingDirectory and --data-dir: the working and data directory is ${ETCD_DATA_DIR}; it must be created before the service is started;
  • --wal-dir: the WAL directory; for better performance this is usually an SSD, or at least a different disk from --data-dir;
  • --name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
  • --cert-file and --key-file: the certificate and private key etcd uses to communicate with clients;
  • --trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
  • --peer-cert-file and --peer-key-file: the certificate and private key etcd uses to communicate with its peers;
  • --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them;

4) Create and distribute etcd systemd unit files for each node

Substitute the variables in the template to create a systemd unit file for each node:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service 
  done
$ ls *.service
etcd-192.168.1.1.service  etcd-192.168.1.2.service  etcd-192.168.1.3.service

  • NODE_NAMES and NODE_IPS are bash arrays of equal length, holding the node names and their corresponding IPs;

Distribute the generated systemd unit files:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
  done

5) Start the etcd service

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd " &
  done

  • The etcd data and working directories must be created first;
  • On first startup the etcd process waits for the other nodes to join the cluster, so the systemctl start etcd command hangs for a while; this is normal;

6) Check the startup results

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status etcd|grep Active"
  done

Make sure the status is active (running); otherwise inspect the logs to find the cause:

$ journalctl -u etcd

7) Verify service health

After the etcd cluster is deployed, run the following on any etcd node:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    /opt/k8s/bin/etcdctl \
    --endpoints=https://${node_ip}:2379 \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem endpoint health
  done

  • etcd/etcdctl 3.4.3 enables the V3 API by default, so the environment variable ETCDCTL_API=3 no longer needs to be set when running etcdctl;
  • Starting with Kubernetes 1.13, etcd v2 is no longer supported;

Expected output:

>>> 192.168.1.1
https://192.168.1.1:2379 is healthy: successfully committed proposal: took = 7.074246ms
>>> 192.168.1.2
https://192.168.1.2:2379 is healthy: successfully committed proposal: took = 7.255615ms
>>> 192.168.1.3
https://192.168.1.3:2379 is healthy: successfully committed proposal: took = 8.147528ms

When every endpoint reports healthy, the cluster is working normally.
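
Cluster membership can also be listed from any node with the same certificates (a supplementary check, not in the original steps):

$ source /opt/k8s/bin/environment.sh
$ /opt/k8s/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem member list -w table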

8) Check the current leader

$ source /opt/k8s/bin/environment.sh
$ /opt/k8s/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint status 

Expected output:

+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.1.1:2379 | 5c636e94d29b6a17 |   3.4.3 |   20 kB |     false |      false |         2 |          8 |                  8 |        |
| https://192.168.1.2:2379 | 861b0df73b6dfe36 |   3.4.3 |   20 kB |      true |      false |         2 |          8 |                  8 |        |
| https://192.168.1.3:2379 | b49efd0caebd99b3 |   3.4.3 |   20 kB |     false |      false |         2 |          8 |                  8 |        |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

As shown, the current leader is 192.168.1.2.
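
As a final smoke test (optional; the key name /smoke-test below is arbitrary and can be removed afterwards with the del subcommand), write and read a key through the cluster:

$ source /opt/k8s/bin/environment.sh
$ /opt/k8s/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem put /smoke-test "ok"
$ /opt/k8s/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem get /smoke-test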

V. Deploy the Master Nodes

The following components run on the Kubernetes master nodes:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

kube-apiserver, kube-scheduler and kube-controller-manager all run as multiple instances:

  1. kube-scheduler and kube-controller-manager automatically elect a leader instance while the other instances block; when the leader fails, a new leader is elected, which keeps the service available;
  2. kube-apiserver is stateless and can be accessed through the kube-nginx proxy, which keeps the service available;

Note: unless otherwise specified, all operations in this document are performed on the k8s-01 node.

Download the latest binaries

$ cd /opt/k8s/work
$ wget https://dl.k8s.io/v1.16.6/kubernetes-server-linux-amd64.tar.gz  # you may need a proxy/VPN to download this
$ tar -xzvf kubernetes-server-linux-amd64.tar.gz
$ cd kubernetes
$ tar -xzvf  kubernetes-src.tar.gz

Copy the binaries to all master nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kubernetes/server/bin/{apiextensions-apiserver,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} root@${node_ip}:/opt/k8s/bin/
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

(1) Deploy the kube-apiserver cluster

Deploy a three-instance kube-apiserver cluster!

Note: unless otherwise specified, all operations in this document are performed on the k8s-01 node.

1) Create the kubernetes-master certificate and private key

Create the certificate signing request:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes-master",
  "hosts": [
    "127.0.0.1",
    "192.168.1.1",
    "192.168.1.2",
    "192.168.1.3",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local.",
    "kubernetes.default.svc.${CLUSTER_DNS_DOMAIN}."
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ]
}
EOF

  • The hosts field lists the IPs and domain names authorized to use this certificate; here it contains the master node IPs and the IP and domain names of the kubernetes service;

Generate the certificate and private key:

$ cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
$ ls kubernetes*pem
kubernetes-key.pem  kubernetes.pem

Copy the generated certificate and private key files to all master nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
    scp kubernetes*.pem root@${node_ip}:/etc/kubernetes/cert/
  done

2) Create the encryption config file

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Copy the encryption config file to the /etc/kubernetes directory on the master nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp encryption-config.yaml root@${node_ip}:/etc/kubernetes/
  done

3) Create the audit policy file

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch

  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get

  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update

  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get

  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'

  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events

  # node and pod status calls from nodes are high-volume and can be large, don't log responses
  # for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch

  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch

  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection

  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch

  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
      
  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF

Distribute the audit policy file:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp audit-policy.yaml root@${node_ip}:/etc/kubernetes/audit-policy.yaml
  done

4) Create the certificate used later to access metrics-server or kube-prometheus

Create the certificate signing request:

$ cd /opt/k8s/work
$ cat > proxy-client-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ]
}
EOF

  • The CN must be listed in kube-apiserver's --requestheader-allowed-names parameter, otherwise later access to metrics will be rejected as unauthorized.

Generate the certificate and private key:

$ cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem  \
  -config=/etc/kubernetes/cert/ca-config.json  \
  -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
$ ls proxy-client*.pem
proxy-client-key.pem  proxy-client.pem

Copy the generated certificate and private key files to all master nodes:

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp proxy-client*.pem root@${node_ip}:/etc/kubernetes/cert/
  done

5) Create the kube-apiserver systemd unit template file

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ cat > kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \\
  --advertise-address=##NODE_IP## \\
  --default-not-ready-toleration-seconds=360 \\
  --default-unreachable-toleration-seconds=360 \\
  --feature-gates=DynamicAuditing=true \\
  --max-mutating-requests-inflight=2000 \\
  --max-requests-inflight=4000 \\
  --default-watch-cache-size=200 \\
  --delete-collection-workers=2 \\
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --bind-address=##NODE_IP## \\
  --secure-port=6443 \\
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
  --insecure-port=0 \\
  --audit-dynamic-configuration \\
  --audit-log-maxage=15 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-truncate-enabled \\
  --audit-log-path=${K8S_DIR}/kube-apiserver/audit.log \\
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
  --profiling \\
  --anonymous-auth=false \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --enable-bootstrap-token-auth \\
  --requestheader-allowed-names="aggregator" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-admission-plugins=NodeRestriction \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --event-ttl=168h \\
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
  --kubelet-https=true \\
  --kubelet-timeout=10s \\
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

  • --advertise-address: the IP the apiserver advertises (the backend IP of the kubernetes service);
  • --default-*-toleration-seconds: thresholds related to node abnormality;
  • --max-*-requests-inflight: maximum in-flight request thresholds;
  • --etcd-*: the certificates for accessing etcd and the etcd server addresses;
  • --bind-address: the IP the https endpoint listens on; it must not be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside;
  • --secure-port: the https listening port;
  • --insecure-port=0: disable the insecure http port (8080);
  • --tls-*-file: the certificate, private key and CA file used by the apiserver;
  • --audit-*: parameters for the audit policy and audit log files;
  • --client-ca-file: used to verify the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
  • --enable-bootstrap-token-auth: enable token authentication for kubelet bootstrap;
  • --requestheader-*: parameters for kube-apiserver's aggregation layer, needed by proxy-client and HPA;
  • --requestheader-client-ca-file: the CA that signed the certificates specified by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
  • --requestheader-allowed-names: must not be empty; a comma-separated list of the CN names of the --proxy-client-cert-file certificate, set here to "aggregator";
  • --service-account-key-file: the public key file used to sign ServiceAccount tokens; it is paired with the private key file specified by kube-controller-manager's --service-account-private-key-file;
  • --runtime-config=api/all=true: enable all API versions, such as autoscaling/v2alpha1;
  • --authorization-mode=Node,RBAC and --anonymous-auth=false: enable the Node and RBAC authorization modes and reject unauthorized requests;
  • --enable-admission-plugins: enable some plugins that are disabled by default;
  • --allow-privileged: allow running containers with privileged permissions;
  • --apiserver-count=3: the number of apiserver instances;
  • --event-ttl: how long events are kept;
  • --kubelet-*: if specified, the kubelet APIs are accessed over https; RBAC rules must be defined for the user of the corresponding certificate (the kubernetes*.pem certificate above uses the user kubernetes), otherwise access to the kubelet API is rejected as unauthorized;
  • --proxy-client-*: the certificate the apiserver uses to access metrics-server;
  • --service-cluster-ip-range: the Service Cluster IP range;
  • --service-node-port-range: the NodePort port range;

If kube-proxy is not running on the kube-apiserver machines, the --enable-aggregator-routing=true parameter must also be added;

For details on the --requestheader-* parameters, refer to the upstream documentation.

Note:

  1. The CA certificate specified by --requestheader-client-ca-file must support both client auth and server auth;
  2. If --requestheader-allowed-names is not empty and the CN of the --proxy-client-cert-file certificate is not in allowed-names, later queries of node or pod metrics will fail with:
$ kubectl top nodes
Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "aggregator" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope

6) Create and distribute kube-apiserver systemd unit files for each node

Substitute the variables in the template to generate a systemd unit file for each node:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${NODE_IPS[i]}.service 
  done
$ ls kube-apiserver*.service
kube-apiserver-192.168.1.1.service  kube-apiserver-192.168.1.3.service
kube-apiserver-192.168.1.2.service

  • NODE_NAMES and NODE_IPS are bash arrays of equal length, holding the node names and their corresponding IPs;

Distribute the generated systemd unit files:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-apiserver-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-apiserver.service
  done

7) Start the kube-apiserver service

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
  done

8) Check the kube-apiserver status

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-apiserver |grep 'Active:'"
  done

Make sure the status is active (running); otherwise inspect the logs to find the cause:

$ journalctl -u kube-apiserver

9) Check the cluster status

$ kubectl cluster-info
Kubernetes master is running at https://192.168.1.1:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   3m53s

$ kubectl get componentstatuses
NAME                 AGE
controller-manager   <unknown>
scheduler            <unknown>
etcd-0               <unknown>
etcd-2               <unknown>
etcd-1               <unknown>

  • Kubernetes 1.16.6 has a bug that makes this always return <unknown>, but kubectl get cs -o yaml returns the correct result;

10) Check the ports kube-apiserver listens on

$ netstat -lnpt|grep kube
tcp        0      0 192.168.1.1:6443     0.0.0.0:*               LISTEN      101442/kube-apiserv

  • 6443: the secure port that accepts https requests; all requests are authenticated and authorized;
  • Since the insecure port is disabled, nothing listens on 8080;
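
The secure port can also be probed directly with the admin certificate created earlier (an optional check, not in the original steps); a healthy apiserver answers the /healthz endpoint with ok:

$ curl -s --cacert /opt/k8s/work/ca.pem \
  --cert /opt/k8s/work/admin.pem \
  --key /opt/k8s/work/admin-key.pem https://192.168.1.1:6443/healthz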

(2) Deploy the highly available kube-controller-manager cluster

This cluster has 3 nodes. After startup, a leader is chosen by election and the other nodes block. When the leader becomes unavailable, the blocked nodes elect a new leader, which keeps the service available.

To secure communication, this document first generates an x509 certificate and private key; kube-controller-manager uses the certificate in two cases:

  1. communicating with kube-apiserver over the secure port;
  2. serving prometheus-format metrics on the secure port (https, 10252);

Note: unless otherwise specified, all operations in this document are performed on the k8s-01 node.

1) Create the kube-controller-manager certificate and private key

Create the certificate signing request:

$ cd /opt/k8s/work
$ cat > kube-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.1.1",
      "192.168.1.2",
      "192.168.1.3"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "opsnull"
      }
    ]
}
EOF

  • The hosts list contains all kube-controller-manager node IPs;
  • Both CN and O are system:kube-controller-manager; the Kubernetes built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.

Generate the certificate and private key:

$ cd /opt/k8s/work
$ cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
$ ls kube-controller-manager*pem

Distribute the generated certificate and private key to all master nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-controller-manager*.pem root@${node_ip}:/etc/kubernetes/cert/
  done

2) Create and distribute the kubeconfig file

kube-controller-manager uses a kubeconfig file to access the apiserver; the file provides the apiserver address, the embedded CA certificate, the kube-controller-manager certificate, and so on:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server="https://##NODE_IP##:6443" \
  --kubeconfig=kube-controller-manager.kubeconfig

$ kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

$ kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

$ kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

  • kube-controller-manager is co-located with kube-apiserver, so it accesses kube-apiserver directly via the node IP;

Distribute the kubeconfig to all master nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kube-controller-manager.kubeconfig > kube-controller-manager-${node_ip}.kubeconfig
    scp kube-controller-manager-${node_ip}.kubeconfig root@${node_ip}:/etc/kubernetes/kube-controller-manager.kubeconfig
  done

3) Create the kube-controller-manager systemd unit template file

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ cat > kube-controller-manager.service.template <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
  --profiling \\
  --cluster-name=kubernetes \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --leader-elect \\
  --use-service-account-credentials\\
  --concurrent-service-syncs=2 \\
  --bind-address=##NODE_IP## \\
  --secure-port=10252 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
  --port=0 \\
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="aggregator" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --experimental-cluster-signing-duration=876000h \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --concurrent-deployment-syncs=10 \\
  --concurrent-gc-syncs=30 \\
  --node-cidr-mask-size=24 \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --pod-eviction-timeout=6m \\
  --terminated-pod-gc-threshold=10000 \\
  --root-ca-file=/etc/kubernetes/cert/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

  • --port=0: disable the insecure (http) port; the --address parameter then has no effect and --bind-address takes effect;
  • --secure-port=10252 and --bind-address=0.0.0.0: serve https /metrics requests on port 10252 on all interfaces;
  • --kubeconfig: the kubeconfig file kube-controller-manager uses to connect to and authenticate against kube-apiserver;
  • --authentication-kubeconfig and --authorization-kubeconfig: kube-controller-manager uses these to connect to the apiserver to authenticate and authorize client requests. kube-controller-manager no longer uses --tls-ca-file to verify the client certificates of requests to the https metrics endpoint. If these two kubeconfig parameters are not configured, clients connecting to the kube-controller-manager https port are rejected (with an insufficient-permissions message).
  • --cluster-signing-*-file: sign the certificates created by TLS Bootstrap;
  • --experimental-cluster-signing-duration: the validity period of TLS Bootstrap certificates;
  • --root-ca-file: the CA certificate placed into container ServiceAccounts, used to verify the kube-apiserver certificate;
  • --service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must be paired with the public key specified by kube-apiserver's --service-account-key-file;
  • --service-cluster-ip-range: the Service Cluster IP range; it must match the same parameter on kube-apiserver;
  • --leader-elect=true: cluster mode; enable leader election; the node elected as leader does the work while the others block;
  • --controllers=*,bootstrapsigner,tokencleaner: the list of enabled controllers; tokencleaner automatically cleans up expired Bootstrap tokens;
  • --horizontal-pod-autoscaler-*: custom metrics parameters; supports autoscaling/v2alpha1;
  • --tls-cert-file and --tls-private-key-file: the server certificate and key used when serving metrics over https;
  • --use-service-account-credentials=true: the controllers inside kube-controller-manager access kube-apiserver using service accounts;

4) Create and distribute kube-controller-manager systemd unit files for each node

Substitute the variables in the template to create a systemd unit file for each node:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-controller-manager.service.template > kube-controller-manager-${NODE_IPS[i]}.service 
  done
$ ls kube-controller-manager*.service

Distribute them to all master nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-controller-manager-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-controller-manager.service
  done

5) Start the kube-controller-manager service

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
  done

6) Check the service status

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-controller-manager|grep Active"
  done

  • Make sure the status is active (running); otherwise inspect the logs to find the cause:
$ journalctl -u kube-controller-manager

kube-controller-manager listens on port 10252 and accepts https requests:

$ netstat -lnpt | grep kube-cont
tcp        0      0 192.168.1.1:10252    0.0.0.0:*               LISTEN      108977/kube-control

7) Check the exported metrics

Note: run the following command on a kube-controller-manager node!

$ curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.1.1:10252/metrics |head
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

8) Check the current leader

$ kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-01_b7329fab-fa43-4646-9f10-47e84bc1b23a","leaseDurationSeconds":15,"acquireTime":"2020-05-17T12:31:18Z","renewTime":"2020-05-17T12:32:48Z","leaderTransitions":0}'
  creationTimestamp: "2020-05-17T12:31:18Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "383"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: decd4217-7287-4f11-92dd-ed67dd0644f7

As shown, the current leader is the k8s-01 node!

9) Test the high availability of the kube-controller-manager cluster

Stop the kube-controller-manager service on one or two nodes and watch the logs of the other nodes to see whether one of them acquires leadership.
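
A concrete way to run this test (a sketch assuming k8s-01 is the current leader, as seen above): stop the service there, wait for the lease to expire, then confirm that holderIdentity has moved to another node:

$ ssh root@k8s-01 "systemctl stop kube-controller-manager"
$ sleep 30
$ kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml | grep holderIdentity
$ ssh root@k8s-01 "systemctl start kube-controller-manager"   # restore the stopped instance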

(3) Deploy the highly available kube-scheduler cluster

This cluster has 3 nodes. After startup, a leader is chosen by election and the other nodes block. When the leader becomes unavailable, the remaining nodes elect a new leader, which keeps the service available.

To secure communication, this document first generates an x509 certificate and private key; kube-scheduler uses the certificate in two cases:

  1. communicating with kube-apiserver over the secure port;
  2. serving prometheus-format metrics on the secure port (https, 10259);

Note: unless otherwise specified, all operations in this document are performed on the k8s-01 node.

1) Create the kube-scheduler certificate and private key

Create the certificate signing request:

$ cd /opt/k8s/work
$ cat > kube-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.1.1",
      "192.168.1.2",
      "192.168.1.3"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "opsnull"
      }
    ]
}
EOF

  • The hosts list contains all kube-scheduler node IPs;
  • Both CN and O are system:kube-scheduler; the Kubernetes built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs;

Generate the certificate and private key:

$ cd /opt/k8s/work
$ cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
$ ls kube-scheduler*pem

Distribute the generated certificate and private key to all master nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-scheduler*.pem root@${node_ip}:/etc/kubernetes/cert/
  done

2) Create and distribute the kubeconfig file

kube-scheduler uses a kubeconfig file to access the apiserver; the file provides the apiserver address, the embedded CA certificate and the kube-scheduler certificate:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server="https://##NODE_IP##:6443" \
  --kubeconfig=kube-scheduler.kubeconfig

$ kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

$ kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

$ kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Distribute the kubeconfig to all master nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kube-scheduler.kubeconfig > kube-scheduler-${node_ip}.kubeconfig
    scp kube-scheduler-${node_ip}.kubeconfig root@${node_ip}:/etc/kubernetes/kube-scheduler.kubeconfig
  done

3) Create the kube-scheduler configuration file

$ cd /opt/k8s/work
$ cat >kube-scheduler.yaml.template <<EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: ##NODE_IP##:10251
leaderElection:
  leaderElect: true
metricsBindAddress: ##NODE_IP##:10251
EOF

  • --kubeconfig: the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
  • --leader-elect=true: cluster mode; enable leader election; the node elected as leader does the work while the others block;

Substitute the variables in the template:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-scheduler.yaml.template > kube-scheduler-${NODE_IPS[i]}.yaml
  done
$ ls kube-scheduler*.yaml

  • NODE_NAMES and NODE_IPS are bash arrays of equal length, holding the node names and their corresponding IPs;

Distribute the kube-scheduler configuration file to all master nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-scheduler-${node_ip}.yaml root@${node_ip}:/etc/kubernetes/kube-scheduler.yaml
  done

  • It is renamed to kube-scheduler.yaml;

4) Create the kube-scheduler systemd unit template file

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ cat > kube-scheduler.service.template <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \\
  --config=/etc/kubernetes/kube-scheduler.yaml \\
  --bind-address=##NODE_IP## \\
  --secure-port=10259 \\
  --port=0 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \\
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF

5) Create and distribute kube-scheduler systemd unit files for each node

Substitute the variables in the template to create a systemd unit file for each node:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-scheduler.service.template > kube-scheduler-${NODE_IPS[i]}.service 
  done
$ ls kube-scheduler*.service

Distribute the systemd unit files to all master nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-scheduler-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-scheduler.service
  done

6) Start the kube-scheduler service

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-scheduler"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
  done

7) Check the service status

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-scheduler|grep Active"
  done

Make sure the status is active (running); otherwise inspect the logs to find the cause:

$ journalctl -u kube-scheduler

8) Check the exported metrics

Note: run the following commands on a kube-scheduler node.

kube-scheduler listens on ports 10251 and 10259:

  • 10251: accepts http requests; insecure port, no authentication or authorization required;
  • 10259: accepts https requests; secure port, authentication and authorization required;

Both ports expose /metrics and /healthz.

$ netstat -lnpt |grep kube-sch
tcp        0      0 192.168.1.1:10251    0.0.0.0:*               LISTEN      114702/kube-schedul
tcp        0      0 192.168.1.1:10259    0.0.0.0:*               LISTEN      114702/kube-schedul


$ curl -s http://192.168.1.1:10251/metrics |head
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0


$ curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.1.1:10259/metrics |head
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

9) Check the current leader

$ kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"zhangjun-k8s-01_ce04632e-64e4-477e-b8f0-4e69020cd996","leaseDurationSeconds":15,"acquireTime":"2020-02-07T07:05:00Z","renewTime":"2020-02-07T07:05:28Z","leaderTransitions":0}'
  creationTimestamp: "2020-02-07T07:05:00Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "756"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 1b687724-a6e2-4404-9efb-a1f0e201fecc

As shown, the current leader is the k8s-01 node.

10) Test the high availability of the kube-scheduler cluster

Pick one or two master nodes, stop the kube-scheduler service, and check whether another node acquires leadership.

VI. Deploy the Worker Nodes

The following components run on the Kubernetes worker nodes:

  • containerd
  • kubelet
  • kube-proxy
  • calico
  • kube-nginx

Note: unless otherwise specified, all operations in this document are performed on the k8s-01 node.

Install dependencies

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "yum install -y epel-release" &
    ssh root@${node_ip} "yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat git" &
  done

(1) kube-apiserver high availability

This section uses the nginx layer-4 transparent proxy to give the worker node components highly available access to the kube-apiserver cluster.

Note: unless otherwise specified, all operations in this document are performed on the k8s-01 node.

1) nginx-proxy-based kube-apiserver high availability scheme

  • On the control-plane nodes, kube-controller-manager and kube-scheduler run as multiple instances and connect to the local kube-apiserver, so as long as one instance is healthy, availability is guaranteed;
  • Pods inside the cluster access kube-apiserver through the K8S service domain name kubernetes; kube-dns automatically resolves it to the IPs of multiple kube-apiserver nodes, so this path is also highly available;
  • An nginx process runs on every node, backed by all apiserver instances; nginx performs health checks and load balancing across them;
  • kubelet and kube-proxy access kube-apiserver through the local nginx (listening on 127.0.0.1), which makes kube-apiserver highly available;

2) Download and compile nginx

Download the source:

$ cd /opt/k8s/work
$ wget http://nginx.org/download/nginx-1.15.3.tar.gz
$ tar -xzvf nginx-1.15.3.tar.gz

Configure the build:

$ cd /opt/k8s/work/nginx-1.15.3
$ mkdir nginx-prefix
$ yum install -y gcc make
$ ./configure --with-stream --without-http --prefix=$(pwd)/nginx-prefix --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module

  • --with-stream: enable layer-4 transparent forwarding (TCP Proxy);
  • --without-xxx: disable all other features so the resulting dynamically linked binary has minimal dependencies;

Compile and install:

$ cd /opt/k8s/work/nginx-1.15.3
$ make && make install

3) Verify the compiled nginx

$ cd /opt/k8s/work/nginx-1.15.3
$ ./nginx-prefix/sbin/nginx -v

Output:

nginx version: nginx/1.15.3

4) Install and deploy nginx

Create the directory structure:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
  done

Copy the binary:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
    scp /opt/k8s/work/nginx-1.15.3/nginx-prefix/sbin/nginx  root@${node_ip}:/opt/k8s/kube-nginx/sbin/kube-nginx
    ssh root@${node_ip} "chmod a+x /opt/k8s/kube-nginx/sbin/*"
  done

  • The binary is renamed to kube-nginx;

Configure nginx and enable layer-4 transparent forwarding:

cd /opt/k8s/work
cat > kube-nginx.conf << \EOF
worker_processes 1;

events {
    worker_connections  1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 192.168.1.1:6443        max_fails=3 fail_timeout=30s;
        server 192.168.1.2:6443        max_fails=3 fail_timeout=30s;
        server 192.168.1.3:6443        max_fails=3 fail_timeout=30s;
    }

    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF

  • The server list in upstream backend contains the IPs of the kube-apiserver nodes in the cluster; adjust it to your environment;

Distribute the configuration file:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-nginx.conf  root@${node_ip}:/opt/k8s/kube-nginx/conf/kube-nginx.conf
  done

5) Configure the systemd unit file and start the service

Create the kube-nginx systemd unit file:

$ cd /opt/k8s/work
$ cat > kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t
ExecStart=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx
ExecReload=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Distribute the systemd unit file:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-nginx.service  root@${node_ip}:/etc/systemd/system/
  done

Start the kube-nginx service:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-nginx && systemctl restart kube-nginx"
  done

6) Check the kube-nginx service status

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-nginx |grep 'Active:'"
  done

Make sure the status is active (running); otherwise check the logs to find the cause:

$ journalctl -u kube-nginx
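
As an additional sanity check (assuming the kube-apiserver instances behind the proxy are already up), confirm on any node that kube-nginx is listening on 127.0.0.1:8443 and actually forwards connections; because no credentials are presented, the apiserver will answer with an HTTP error, which is enough to prove the TCP path works:

$ ss -lnpt | grep 8443
# expect a LISTEN entry on 127.0.0.1:8443 owned by kube-nginx
$ curl -sk https://127.0.0.1:8443/healthz
# an Unauthorized/Forbidden response here still confirms nginx reached an apiserver backend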

(2) Deploy the containerd component

containerd implements the Kubernetes Container Runtime Interface (CRI) and provides the core container runtime features such as image and container management. Compared with dockerd it is simpler, more robust and more portable.

Note: unless stated otherwise, all operations in this document are executed on the k8s-01 node.

1) Download and distribute the binaries

Download the binaries:

$ cd /opt/k8s/work
$ wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.17.0/crictl-v1.17.0-linux-amd64.tar.gz \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc10/runc.amd64 \
  https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz \
  https://github.com/containerd/containerd/releases/download/v1.3.3/containerd-1.3.3.linux-amd64.tar.gz 

Unpack:

$ cd /opt/k8s/work
$ mkdir containerd
$ tar -xvf containerd-1.3.3.linux-amd64.tar.gz -C containerd
$ tar -xvf crictl-v1.17.0-linux-amd64.tar.gz
$ mkdir cni-plugins
$ tar -xvf cni-plugins-linux-amd64-v0.8.5.tgz -C cni-plugins
$ mv runc.amd64 runc

Distribute the binaries to all worker nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp containerd/bin/*  crictl  cni-plugins/*  runc  root@${node_ip}:/opt/k8s/bin
    ssh root@${node_ip} "chmod a+x /opt/k8s/bin/* && mkdir -p /etc/cni/net.d"
  done

2) Create and distribute the containerd configuration file

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ cat << EOF | sudo tee containerd-config.toml
version = 2
root = "${CONTAINERD_DIR}/root"
state = "${CONTAINERD_DIR}/state"

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.cn-beijing.aliyuncs.com/images_k8s/pause-amd64:3.1"
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/k8s/bin"
      conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.runtime.v1.linux"]
    shim = "containerd-shim"
    runtime = "runc"
    runtime_root = ""
    no_shim = false
    shim_debug = false
EOF


$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/containerd/ ${CONTAINERD_DIR}/{root,state}"
    scp containerd-config.toml root@${node_ip}:/etc/containerd/config.toml
  done

3) Create the containerd systemd unit file

$ cd /opt/k8s/work
$ cat <<EOF | sudo tee containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStartPre=/sbin/modprobe overlay
ExecStart=/opt/k8s/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF

4) Distribute the systemd unit file and start the containerd service

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp containerd.service root@${node_ip}:/etc/systemd/system
    ssh root@${node_ip} "systemctl enable containerd && systemctl restart containerd"
  done

5) Create and distribute the crictl configuration file

crictl is a command-line tool for CRI-compatible container runtimes that offers docker-like commands. See the official documentation for details.

$ cd /opt/k8s/work
$ cat << EOF | sudo tee crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

Distribute it to all worker nodes:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp crictl.yaml root@${node_ip}:/etc/crictl.yaml
  done
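
A quick sanity check (a sketch, assuming containerd was started successfully in the previous step): crictl should now be able to talk to containerd through the socket configured above:

$ crictl version
# both the crictl client version and the containerd (RuntimeName/RuntimeVersion) should be printed
$ crictl ps
# the list is expected to be empty at this point, since no Pods have been scheduled yet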

(3) Deploy the kubelet component

kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run and logs.

On startup, kubelet automatically registers node information with kube-apiserver; its built-in cadvisor collects and monitors the node's resource usage.

For security, this deployment disables kubelet's insecure HTTP port and authenticates and authorizes every request, rejecting unauthorized access (e.g. requests from apiserver or heapster without proper credentials).

Note: unless stated otherwise, all operations in this document are executed on the k8s-01 node.

1) Create the kubelet bootstrap kubeconfig files

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"

    # create the bootstrap token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_name} \
      --kubeconfig ~/.kube/config)

    # set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  done

  • What is written into the kubeconfig is the token; after bootstrapping completes, kube-controller-manager creates the client and server certificates for kubelet;

View the tokens kubeadm created for each node:

$ kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
ok11ux.7qsk8vm5lz8nbmvm   23h       2020-05-18T21:34:33+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-02
ra4tgg.jz6gi5c0fm0ogbpj   23h       2020-05-18T21:34:34+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-03
wajxpt.a9oevk0pp82o3y3z   23h       2020-05-18T21:34:33+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-01

  • A token is valid for 1 day; once expired it can no longer be used to bootstrap a kubelet and will be cleaned up by kube-controller-manager's tokencleaner;
  • When kube-apiserver receives a kubelet bootstrap token, it sets the request's user to system:bootstrap:<Token ID> and the group to system:bootstrappers; a ClusterRoleBinding will be created for this group later (see the quick check below);
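
If you want to see where these tokens live: kubeadm stores each bootstrap token as a Secret named bootstrap-token-<Token ID> in the kube-system namespace. A quick look (illustrative only):

$ kubectl get secrets -n kube-system | grep bootstrap-token
# expect one bootstrap-token-xxxxxx Secret per token created above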

2) Distribute the bootstrap kubeconfig files to all worker nodes

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done

3) Create and distribute the kubelet parameter configuration file

Starting with v1.10, some kubelet parameters must be set in a configuration file; kubelet --help shows the hint:

DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag
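
A quick way to see how many of the kubelet command-line flags carry this deprecation notice (purely illustrative):

$ /opt/k8s/bin/kubelet --help 2>&1 | grep -c DEPRECATED
# prints the number of deprecated flags; most of them now belong in the config file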

Create the kubelet configuration template (see the comments in the source code for the available options):

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##NODE_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##NODE_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF

  • address: the address the kubelet secure port (https, 10250) listens on; it must not be 127.0.0.1, otherwise kube-apiserver, heapster, etc. cannot call the kubelet API;
  • readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unset;
  • authentication.anonymous.enabled: set to false so that anonymous access to port 10250 is not allowed;
  • authentication.x509.clientCAFile: specifies the CA certificate that signed the client certificates, enabling HTTPS certificate authentication;
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication;
  • requests that pass neither x509 certificate nor webhook authentication (from kube-apiserver or other clients) are rejected with Unauthorized;
  • authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user or group has permission to operate on a resource (RBAC);
  • featureGates.RotateKubeletClientCertificate and featureGates.RotateKubeletServerCertificate: rotate certificates automatically; the certificate lifetime is determined by kube-controller-manager's --experimental-cluster-signing-duration parameter;
  • must be run as the root account;

Create and distribute the kubelet configuration file for each node:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do 
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
    scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
  done

4) Create and distribute the kubelet systemd unit file

Create the kubelet systemd unit file template:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --network-plugin=cni \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF

  • If --hostname-override is set, kube-proxy must set it as well, otherwise the Node may not be found;
  • --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the user name and token in this file to send a TLS Bootstrapping request to kube-apiserver;
  • after K8S approves the kubelet CSR, the certificate and private key files are created in the --cert-dir directory and then referenced from the --kubeconfig file;
  • --pod-infra-container-image does not use Red Hat's pod-infrastructure:latest image, which cannot reap zombie processes of containers;

Create and distribute the kubelet systemd unit file for each node:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_name in ${NODE_NAMES[@]}
  do 
    echo ">>> ${node_name}"
    sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
    scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
  done

5) Grant kube-apiserver permission to access the kubelet API

When running kubectl exec, run, logs and similar commands, the apiserver forwards the request to the kubelet's https port. The RBAC rule below authorizes the user name (CN: kubernetes-master) of the certificate used by the apiserver (kubernetes.pem) to access the kubelet API:

$ kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes-master

6) Bootstrap Token Auth and granting permissions

On startup, kubelet checks whether the file referenced by --kubeconfig exists; if not, it uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

When kube-apiserver receives the CSR request it authenticates the token in it; once authenticated, it sets the request's user to system:bootstrap:<Token ID> and the group to system:bootstrappers. This process is called Bootstrap Token Auth.

By default, this user and group do not have permission to create CSRs, so kubelet fails to start with an error like:

$ journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 26 12:13:41 zhangjun-k8s-01 kubelet[128468]: I0526 12:13:41.798230  128468 certificate_manager.go:366] Rotating certificates
May 26 12:13:41 zhangjun-k8s-01 kubelet[128468]: E0526 12:13:41.801997  128468 certificate_manager.go:385] Failed while requesting a signed certificate from the master: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:82jfrm" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope

The fix is to create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:

$ kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

7) Automatically approve CSR requests and generate kubelet client certificates

After kubelet creates a CSR request, the next step is for it to be approved; there are two ways:

  1. kube-controller-manager approves it automatically;
  2. approve it manually with kubectl certificate approve.

After the CSR is approved, kubelet asks kube-controller-manager to create the client certificate; the csrapproving controller in kube-controller-manager uses the SubjectAccessReview API to check whether the kubelet request (whose group is system:bootstrappers) has the corresponding permissions.

Create three ClusterRoleBindings that grant the groups system:bootstrappers and system:nodes permission to approve client, renew client, and renew server certificates respectively (server CSRs are approved manually, see below):

$ cd /opt/k8s/work
$ cat > csr-crb.yaml <<EOF
 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: auto-approve-csrs-for-group
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
   apiGroup: rbac.authorization.k8s.io
---
 # To let a node of the group "system:nodes" renew its own credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-client-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
   apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
 # To let a node of the group "system:nodes" renew its own server credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-server-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: approve-node-server-renewal-csr
   apiGroup: rbac.authorization.k8s.io
EOF
$ kubectl apply -f csr-crb.yaml

  • auto-approve-csrs-for-group: automatically approves a node's first CSR; note that the Group of the first CSR is system:bootstrappers;
  • node-client-cert-renewal: automatically approves renewal of a node's expiring client certificates; the Group of the automatically generated certificates is system:nodes;
  • node-server-cert-renewal: automatically approves renewal of a node's expiring server certificates; the Group of the automatically generated certificates is system:nodes (see the verification commands below);
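
The objects can be verified right after applying the manifest (a simple check, nothing beyond what kubectl apply reported):

$ kubectl get clusterrole approve-node-server-renewal-csr
$ kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal
# all four objects should exist; kubectl describe shows the bound groups and roles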

8) Start the kubelet service

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    ssh root@${node_ip} "/usr/sbin/swapoff -a"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done

  • The working directory must be created before starting the service;
  • swap must be turned off, otherwise kubelet fails to start;

After startup, kubelet uses --bootstrap-kubeconfig to send a CSR request to kube-apiserver; once the CSR is approved, kube-controller-manager creates the TLS client certificate and private key for kubelet and the --kubeconfig file is written.

Note: kube-controller-manager must be configured with the --cluster-signing-cert-file and --cluster-signing-key-file parameters, otherwise no certificate and private key will be created for TLS Bootstrap.
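
If you are unsure whether your kube-controller-manager instances were started with these flags, a quick check (a sketch, assuming the masters are the three NODE_IPS from environment.sh):

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "ps -ef | grep [k]ube-controller-manager | grep -o -- '--cluster-signing-[^ ]*'"
  done
# each master should print its --cluster-signing-cert-file and --cluster-signing-key-file values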

9) Check the kubelet status

After a short wait, the CSRs of all three nodes are automatically approved:

$ kubectl get csr
NAME        AGE   REQUESTOR                     CONDITION
csr-5rwzm   43s   system:node:k8s-01   Pending
csr-65nms   55s   system:bootstrap:2sb8wy       Approved,Issued
csr-8t5hj   42s   system:node:k8s-02   Pending
csr-jkhhs   41s   system:node:k8s-03   Pending
csr-jv7dn   56s   system:bootstrap:ta7onm       Approved,Issued
csr-vb6p5   54s   system:bootstrap:xk27zp       Approved,Issued

  • The Pending CSRs are for the kubelet server certificates and must be approved manually, see below.

All nodes are registered (the NotReady status is expected and will clear once the network plugin is installed):

$ kubectl get node
NAME              STATUS     ROLES    AGE   VERSION
k8s-01   NotReady   <none>   10h   v1.16.6
k8s-02   NotReady   <none>   10h   v1.16.6
k8s-03   NotReady   <none>   10h   v1.16.6

kube-controller-manager generated a kubeconfig file and key pair for each node:

$ ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2246 Feb  7 15:38 /etc/kubernetes/kubelet.kubeconfig

$ ls -l /etc/kubernetes/cert/kubelet-client-*
-rw------- 1 root root 1281 Feb  7 15:38 /etc/kubernetes/cert/kubelet-client-2020-02-07-15-38-21.pem
lrwxrwxrwx 1 root root   59 Feb  7 15:38 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2020-02-07-15-38-21.pem

  • No kubelet server certificate was generated automatically;

10) Manually approve the server cert CSRs

For security reasons, the CSR approving controllers do not automatically approve kubelet server certificate signing requests; they must be approved manually:

$ kubectl get csr
NAME        AGE     REQUESTOR                     CONDITION
csr-5rwzm   3m22s   system:node:k8s-01   Pending
csr-65nms   3m34s   system:bootstrap:2sb8wy       Approved,Issued
csr-8t5hj   3m21s   system:node:k8s-02   Pending
csr-jkhhs   3m20s   system:node:k8s-03   Pending
csr-jv7dn   3m35s   system:bootstrap:ta7onm       Approved,Issued
csr-vb6p5   3m33s   system:bootstrap:xk27zp       Approved,Issued

$ # manually approve
$ kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve

$ # the server certificates are now generated
$  ls -l /etc/kubernetes/cert/kubelet-*
-rw------- 1 root root 1281 Feb  7 15:38 /etc/kubernetes/cert/kubelet-client-2020-02-07-15-38-21.pem
lrwxrwxrwx 1 root root   59 Feb  7 15:38 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2020-02-07-15-38-21.pem
-rw------- 1 root root 1330 Feb  7 15:42 /etc/kubernetes/cert/kubelet-server-2020-02-07-15-42-12.pem
lrwxrwxrwx 1 root root   59 Feb  7 15:42 /etc/kubernetes/cert/kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2020-02-07-15-42-12.pem

11) kubelet API authentication and authorization

kubelet is configured with the following authentication parameters:

  • authentication.anonymous.enabled: set to false, anonymous access to port 10250 is not allowed;
  • authentication.x509.clientCAFile: specifies the CA certificate that signed the client certificates, enabling HTTPS certificate authentication;
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication;

and with the following authorization parameter:

  • authorization.mode=Webhook: enables RBAC authorization;

When kubelet receives a request, it authenticates the certificate signature against clientCAFile, or checks whether the bearer token is valid. If both fail, the request is rejected with Unauthorized:

$ curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.1.1:10250/metrics
Unauthorized

$ curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.1.1:10250/metrics
Unauthorized

After authentication, kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user/group behind the certificate or token has permission to operate on the resource (RBAC);
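
You can approximate what that webhook check will decide by asking kube-apiserver the same RBAC question with kubectl auth can-i and impersonation. A sketch run from the admin context; it assumes the admin certificate was created with O=system:masters in the earlier kubectl deployment step, and the expected answers match the curl results in the next section:

$ kubectl auth can-i get nodes --subresource=metrics --as=system:kube-controller-manager
# expected: no  (hence the Forbidden response below)
$ kubectl auth can-i get nodes --subresource=metrics --as=admin --as-group=system:masters
# expected: yes (the admin certificate belongs to the system:masters group)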

12) Certificate authentication and authorization

# a certificate with insufficient permissions;
$ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.1.1:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)

$ # use the admin certificate created when deploying the kubectl command-line tool, which has the highest privileges;
$ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.1.1:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

The values of --cacert, --cert and --key must be file paths; when using a relative path such as ./admin.pem, the ./ must not be omitted, otherwise 401 Unauthorized is returned.

13) Bearer token authentication and authorization

Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it has permission to call the kubelet API:

$ kubectl create sa kubelet-api-test
$ kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
$ SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
$ TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
$ echo ${TOKEN}


$ curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.1.1:10250/metrics | head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

(4) Deploy the kube-proxy component

kube-proxy runs on all worker nodes. It watches the apiserver for changes to services and endpoints and creates routing rules to provide service IPs and load balancing.

This section describes deploying kube-proxy in ipvs mode.

Note: unless stated otherwise, all operations in this document are executed on the k8s-01 node, with files distributed and commands executed remotely.

1) Create the kube-proxy certificate

Create the certificate signing request:

$ cd /opt/k8s/work
$ cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ]
}
EOF

  • CN: sets the User of this certificate to system:kube-proxy;
  • The predefined RoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the kube-apiserver proxy-related APIs;
  • This certificate is only used by kube-proxy as a client certificate, so the hosts field is empty;

Generate the certificate and private key:

$ cd /opt/k8s/work
$ cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
$ ls kube-proxy*.pem
kube-proxy-key.pem  kube-proxy.pem
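
Optionally confirm that the CN really is system:kube-proxy (openssl is used here only for inspection; output formatting differs slightly between openssl versions):

$ openssl x509 -in kube-proxy.pem -noout -subject
# subject= ... CN = system:kube-proxy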

2) Create and distribute the kubeconfig file

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

$ kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

$ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Distribute the kubeconfig file:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kube-proxy.kubeconfig root@${node_name}:/etc/kubernetes/
  done

3) Create the kube-proxy configuration file

Starting with v1.10, some kube-proxy parameters can be set in a configuration file. You can generate such a file with the --write-config-to option, or refer to the comments in the source code.

Create the kube-proxy config file template:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ cat > kube-proxy-config.yaml.template <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
metricsBindAddress: ##NODE_IP##:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ##NODE_NAME##
mode: "ipvs"
portRange: ""
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
EOF

  • bindAddress: the listening address;
  • clientConnection.kubeconfig: the kubeconfig used to connect to the apiserver;
  • clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic originating inside or outside the cluster; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to Service IPs;
  • hostnameOverride: must match the value used by kubelet, otherwise kube-proxy cannot find the Node after starting and will not create any ipvs rules;
  • mode: use ipvs mode;

Create and distribute the kube-proxy configuration file for each node:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for (( i=0; i < 3; i++ ))
  do 
    echo ">>> ${NODE_NAMES[i]}"
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${NODE_NAMES[i]}.yaml.template
    scp kube-proxy-config-${NODE_NAMES[i]}.yaml.template root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
  done

4) Create and distribute the kube-proxy systemd unit file

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Distribute the kube-proxy systemd unit file:

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_name in ${NODE_NAMES[@]}
  do 
    echo ">>> ${node_name}"
    scp kube-proxy.service root@${node_name}:/etc/systemd/system/
  done

5) Start the kube-proxy service

$ cd /opt/k8s/work
$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@${node_ip} "modprobe ip_vs_rr"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  done

6) Check the startup result

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-proxy|grep Active"
  done

  • Make sure the status is active (running); otherwise check the logs to find the cause:
$ journalctl -u kube-proxy

7) Check the listening ports

$ netstat -lnpt|grep kube-prox
tcp        0      0 172.27.138.251:10256    0.0.0.0:*               LISTEN      30590/kube-proxy
tcp        0      0 172.27.138.251:10249    0.0.0.0:*               LISTEN      30590/kube-proxy

  • 10249: the HTTP prometheus metrics port;
  • 10256: the HTTP healthz port (a quick probe of both is shown below);
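
A quick probe of the two ports, run on the node itself with its own bind address substituted (192.168.1.1 is used here only as an example):

$ curl -s http://192.168.1.1:10256/healthz
# a small JSON body in the response indicates kube-proxy is healthy
$ curl -s http://192.168.1.1:10249/metrics | head -3
# the first few prometheus metric lines should be printed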

8) Check the ipvs routing rules

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
  done

Expected output:

>>> 192.168.1.1
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.1.1:6443             Masq    1      0          0         
  -> 192.168.1.2:6443             Masq    1      0          0         
  -> 192.168.1.3:6443             Masq    1      0          0         
>>> 192.168.1.2
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.1.1:6443             Masq    1      0          0         
  -> 192.168.1.2:6443             Masq    1      0          0         
  -> 192.168.1.3:6443             Masq    1      0          0         
>>> 192.168.1.3
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.1.1:6443             Masq    1      0          0         
  -> 192.168.1.2:6443             Masq    1      0          0         
  -> 192.168.1.3:6443             Masq    1      0          0        

(5) Deploy the calico network

Kubernetes requires that all nodes in the cluster (including the master nodes) can reach one another over the Pod network.

calico uses IPIP or BGP (IPIP by default) to build a Pod network through which all nodes can communicate.

Note: unless stated otherwise, all operations in this document are executed on the k8s-01 node.

1) Install the calico network plugin

$ cd /opt/k8s/work
$ curl https://docs.projectcalico.org/manifests/calico.yaml -O

Modify the configuration:

$ cp calico.yaml calico.yaml.orig
# edit the file yourself until the diff looks like this
$ diff calico.yaml.orig calico.yaml
630c630,632
<               value: "192.168.0.0/16"
---
>               value: "172.30.0.0/16"
>             - name: IP_AUTODETECTION_METHOD
>               value: "interface=eth.*"
699c701
<             path: /opt/cni/bin
---
>             path: /opt/k8s/bin

  • Change the Pod network CIDR to 172.30.0.0/16;
  • calico auto-detects the inter-node network interface; if a node has multiple interfaces, configure a regular expression matching the interface used for node-to-node traffic, such as eth.* above (adjust it to your servers' interface names);

Apply the calico manifest:

$ kubectl apply -f  calico.yaml

  • The calico plugin runs as a DaemonSet on every K8S node;

2) Check the calico status

$ kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE              NOMINATED NODE   READINESS GATES
calico-kube-controllers-77c4b7448-99lfq   1/1     Running   0          2m11s   172.30.184.128   zhangjun-k8s-03   <none>           <none>
calico-node-dxnjs                         1/1     Running   0          2m11s   172.27.137.229   zhangjun-k8s-02   <none>           <none>
calico-node-rknzz                         1/1     Running   0          2m11s   172.27.138.239   zhangjun-k8s-03   <none>           <none>
calico-node-rw84c                         1/1     Running   0          2m11s   172.27.138.251   zhangjun-k8s-01   <none>           <none>
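
To see what calico actually configured on a node (a quick look, assuming the default IPIP mode is in use), check the tunl0 tunnel interface and the routes installed by bird; the addresses and Pod blocks differ per node:

$ ip addr show tunl0
# tunl0 carries an address from the 172.30.0.0/16 Pod network
$ ip route | grep bird
# routes to the other nodes' Pod blocks go via tunl0 with "proto bird"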

Use the crictl command to view the images calico uses:

$ crictl  images
IMAGE                                                     TAG                 IMAGE ID            SIZE
docker.io/calico/cni                                      v3.12.0             cb6799752c46c       66.5MB
docker.io/calico/node                                     v3.12.0             fc05bc4225f39       89.7MB
docker.io/calico/pod2daemon-flexvol                       v3.12.0             98793d0a88c82       37.5MB
registry.cn-beijing.aliyuncs.com/images_k8s/pause-amd64   3.1                 21a595adc69ca       326kB

If the crictl output is empty or the command fails, the configuration file /etc/crictl.yaml is likely missing; it should contain:

$ cat /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false

7. Verify cluster functionality

This section verifies that the K8S cluster works correctly.

Note: unless stated otherwise, all operations in this document are executed on the k8s-01 node, with files distributed and commands executed remotely.

1) Check node status

$ kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
k8s-01   Ready    <none>   15m   v1.16.6
k8s-02   Ready    <none>   15m   v1.16.6
k8s-03   Ready    <none>   15m   v1.16.6

Everything is fine when all nodes are Ready and report version v1.16.6.

2) Create a test manifest

$ cd /opt/k8s/work
$ cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

3) Run the test

$ kubectl create -f nginx-ds.yml

4) Check Pod IP connectivity from each node

$ kubectl get pods  -o wide -l app=nginx-ds
NAME             READY   STATUS    RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
nginx-ds-j7v5g   1/1     Running   0          61s   172.30.244.1     zhangjun-k8s-01   <none>           <none>
nginx-ds-js8g8   1/1     Running   0          61s   172.30.82.129    zhangjun-k8s-02   <none>           <none>
nginx-ds-n2p4x   1/1     Running   0          61s   172.30.184.130   zhangjun-k8s-03   <none>           <none>

Ping the three Pod IPs above from every Node to check connectivity:

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "ping -c 1 172.30.244.1"
    ssh ${node_ip} "ping -c 1 172.30.82.129"
    ssh ${node_ip} "ping -c 1 172.30.184.130"
  done

5) Check Service IP and port reachability

$ kubectl get svc -l app=nginx-ds
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-ds   NodePort   10.254.116.22   <none>        80:30562/TCP   2m7s

This shows:

  • Service Cluster IP: 10.254.116.22
  • service port: 80
  • NodePort: 30562

curl the Service IP from every Node:

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "curl -s 10.254.116.22"
  done

The nginx welcome page is the expected output.
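
For a terser check than reading the whole page, grep for the title (this assumes the default nginx index page served by the image):

$ curl -s 10.254.116.22 | grep -i '<title>'
# expect something like: <title>Welcome to nginx!</title>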

6) Check NodePort reachability

Run on every Node:

$ source /opt/k8s/bin/environment.sh
$ for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "curl -s ${node_ip}:30562"
  done

The nginx welcome page is the expected output.