
Kubernetes 1.18.0 Binary High-Availability Cluster Setup

This article is based on 劉騰飛's video tutorial: http://video.jessetalk.cn/

Main Steps
  • Prepare the virtual machine environment, install CentOS, and complete the initial setup
  • Understand what the master components and node components do (chapter 6 of the video; ideally practice it yourself)
  • Understand TLS and authentication/authorization in K8S (chapter 7 of the video); this helps you understand what each certificate is for during deployment
  • Generate the certificates (keep the csr files, as you may need to regenerate certificates)
  • Deploy the etcd cluster
  • Deploy the master components
    • kube-apiserver
    • kube-controller-manager
    • kube-scheduler
  • Deploy the node components
    • kubelet
    • kube-proxy
  • Networking and add-ons
    • coredns
    • dashboard
  • Keepalived and HAProxy
Machine Preparation

  • node00: internal IP 172.21.0.17 (public IP 120.53.237.58), runs etcd, master, node, keepalived, haproxy
  • node01: internal IP 172.21.0.2 (public IP 81.70.28.31), runs etcd, master, node, keepalived, haproxy
  • node02: internal IP 172.21.0.8 (public IP 81.70.32.66), runs etcd, master, node, keepalived, haproxy
  • VIP: 172.21.0.210

Environment Preparation: Update CentOS

# Update CentOS
yum update
# Install the wget tool
yum install wget
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Install epel
yum install epel-release
Disable swap

swapoff -a

Edit /etc/fstab

  • Add a # at the start of the line to comment out the /dev/mapper/centos-swap swap entry (or use the one-liner below)
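
If you prefer to script the edit, a one-liner like this can comment out the swap entry (a sketch; adjust the pattern if your swap line differs):

sed -ri 's|^(/dev/mapper/centos-swap\s+swap.*)|#\1|' /etc/fstab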

check

swapon -s
Disable SELinux

vi /etc/selinux/config
# set SELINUX=disabled
SELINUX=disabled
# Reboot
reboot
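
The same change can also be scripted; a minimal sketch (setenforce 0 turns SELinux off immediately, so the reboot can be deferred):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0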

Run sestatus; the output should be disabled:

sestatus
SELinux status:                 disabled

Set the hostnames

#192.168.0.201
hostnamectl set-hostname node00
#192.168.0.202
hostnamectl set-hostname node01
#192.168.0.203
hostnamectl set-hostname node02
Time Synchronization

Install chrony on all nodes to keep time in sync.

# Install
yum install chrony
# Enable and start
systemctl start chronyd
systemctl enable chronyd
# Set the timezone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Enable NTP synchronization
timedatectl set-ntp yes
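
To confirm chrony is actually syncing, check its sources and the clock status:

chronyc sources -v
timedatectl status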
Hosts file

vi /etc/hosts
# Add the following entries
192.168.0.201 node00
192.168.0.202 node01
192.168.0.203 node02
Certificate Preparation

Certificates are arguably the most tedious and error-prone part of the whole deployment: almost every component uses them, and a single mistake affects the operation of the entire cluster. It is very important to understand what each certificate is for and how it is configured in each component.

The generated CA certificate and key files are as follows:

  • ca-key.pem
  • ca.pem
  • kubernetes-key.pem
  • kubernetes.pem
  • kube-controller-manager.pem
  • kube-controller-manager-key.pem
  • kube-scheduler.pem
  • kube-scheduler-key.pem
  • service-account.pem
  • service-account-key.pem
  • node00.pem
  • node00-key.pem
  • node01.pem
  • node01-key.pem
  • node02.pem
  • node02-key.pem
  • kube-proxy.pem
  • kube-proxy-key.pem
  • admin.pem
  • admin-key.pem

The components and the certificates they use:

  • etcd: ca.pem, kubernetes.pem, kubernetes-key.pem
  • kube-apiserver: ca.pem, kubernetes.pem, kubernetes-key.pem, service-account.pem
  • kube-controller-manager: ca.pem, ca-key.pem, service-account-key.pem, kube-controller-manager.pem, kube-controller-manager-key.pem
  • kube-scheduler: ca.pem, kube-scheduler.pem, kube-scheduler-key.pem
  • kubelet: ca.pem, nodexx.pem, nodexx-key.pem
  • kube-proxy: ca.pem, kube-proxy.pem, kube-proxy-key.pem

Install cfssl

Install cfssl and generate all certificates on node00 only, then copy them to the other master nodes.

mkdir -p /ssl
cd /ssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo


#export PATH=/usr/local/bin:$PATH  
Create the CA config file

mkdir /root/ssl
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
# Create the following ca-config.json file based on the format of config.json
# The expiry is set to 87600h (10 years)
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

Field descriptions

  • ca-config.json: multiple profiles can be defined, each with its own expiry, usage scenarios, and other parameters; a specific profile is chosen later when signing a certificate;
  • signing: the certificate can be used to sign other certificates; the generated ca.pem contains CA=TRUE;
  • server auth: a client may use this CA to verify certificates presented by servers;
  • client auth: a server may use this CA to verify certificates presented by clients;
Create the CA certificate signing request

Create the ca-csr.json file with the following content:

{
  "CN": "kubernetes",
  "hosts": [
      "127.0.0.1",
      "172.21.0.17",
      "172.21.0.2",
      "172.21.0.8",
      "172.21.0.210"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ],
    "ca": {
       "expiry": "87600h"
    }
}
  • Change the IP addresses in hosts to your own nodes' IPs
  • "CN": Common Name; kube-apiserver extracts this field from the certificate and uses it as the request's User Name; browsers use this field to check whether a website is legitimate;
  • "O": Organization; kube-apiserver extracts this field from the certificate and uses it as the requesting user's Group;

As mentioned in the chapter on RBAC authorization in K8S, this User Name and Group can be used for permission handling.
Generate the CA certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
Create the kubernetes certificate

Create the kubernetes certificate signing request file kubernetes-csr.json:

{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "172.21.0.17",
      "172.21.0.2",
      "172.21.0.8",
      "172.21.0.210",
      "10.254.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
  • If the hosts field is not empty, it must list the IPs or domain names authorized to use this certificate. Since this certificate will later be used by both the etcd cluster and the kubernetes master cluster, the list above includes the etcd cluster hosts, the kubernetes master hosts, and the kubernetes service IP (generally the first IP of the service-cluster-ip-range specified for kube-apiserver, e.g. 10.254.0.1).
  • This is a minimally installed kubernetes cluster with a private image registry and three nodes; the physical node IPs above may also be replaced with hostnames.
Generate the kubernetes certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
# View the generated certificates
ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem

Create the kubelet certificates

# node00 
cat > node00.json <<EOF
{
  "CN": "system:node:node00",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
     "node00",
     "node01",
     "node02",
     "172.21.0.17",
      "172.21.0.2",
      "172.21.0.8"
  ],
  "names": [
    {
      "C": "China",
      "L": "Shanghai",
      "O": "system:nodes",
      "OU": "Kubernetes",
      "ST": "Shanghai"
    }
  ]
}
EOF
# node01
cat > node01.json <<EOF
{
  "CN": "system:node:node01",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
     "node00",
     "node01",
     "node02",
      "172.21.0.17",
      "172.21.0.2",
      "172.21.0.8"
  ],
  "names": [
    {
      "C": "China",
      "L": "Shanghai",
      "O": "system:nodes",
      "OU": "Kubernetes",
      "ST": "Shanghai"
    }
  ]
}
EOF
# node02
cat > node02.json <<EOF
{
  "CN": "system:node:node02",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
     "node00",
     "node01",
     "node02",
      "172.21.0.17",
      "172.21.0.2",
      "172.21.0.8"
  ],
  "names": [
    {
      "C": "China",
      "L": "Shanghai",
      "O": "system:nodes",
      "OU": "Kubernetes",
      "ST": "Shanghai"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  node00.json | cfssljson -bare node00
  
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  node01.json | cfssljson -bare node01
  
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  node02.json | cfssljson -bare node02
Create the admin certificate

Create the admin certificate signing request file admin-csr.json:

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

  • kube-apiserver later uses RBAC to authorize requests from clients (such as kubelet, kube-proxy, and Pods);
  • kube-apiserver predefines some RoleBindings used by RBAC; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call every kube-apiserver API;
  • O sets this certificate's Group to system:masters. When a client uses this certificate to access kube-apiserver, authentication succeeds because the certificate is signed by the CA, and since the certificate's group is the pre-authorized system:masters, it is granted access to all APIs;

Note: this admin certificate is used later to generate the administrator's kubeconfig file. RBAC is now the generally recommended way to do role-based access control in kubernetes; kubernetes takes the certificate's CN field as the User and the O field as the Group.

After the kubernetes cluster is up, you can run kubectl get clusterrolebinding cluster-admin -o yaml and see that the subjects of the clusterrolebinding cluster-admin have kind Group and name system:masters, and that the roleRef object is the ClusterRole cluster-admin. In other words, any user or serviceAccount in the system:masters Group holds the cluster-admin role, which is why we have full cluster administration rights when running kubectl commands.

Generate the admin certificate and private key:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
# View the generated certificates
ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem
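
To double-check the CN and O fields that RBAC will see, you can inspect the certificate subject (a quick check with openssl; the exact output format varies by openssl version):

openssl x509 -in admin.pem -noout -subject
# subject= /C=CN/ST=BeiJing/L=BeiJing/O=system:masters/OU=System/CN=admin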

Create the kube-controller-manager certificate

cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes",
      "ST": "BeiJing"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Create the kube-proxy certificate

Create the kube-proxy certificate signing request file kube-proxy-csr.json:

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
  • CN sets this certificate's User to system:kube-proxy;
  • the kube-apiserver predefined RoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the Proxy-related kube-apiserver APIs;

Generate the kube-proxy client certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
# View the generated certificates
ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
Create the kube-scheduler certificate

cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes",
      "ST": "BeiJing"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Create the ServiceAccount certificate

cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "O": "Kubernetes",
      "OU": "Kubernetes",
      "ST": "BeiJing"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account
Verify a certificate with the cfssl-certinfo command

cfssl-certinfo -cert kubernetes.pem
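
If you only want to confirm the SANs baked into kubernetes.pem (the most common source of errors), an equivalent openssl check is:

openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'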
Distribute the certificates

mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
cd /etc/kubernetes/ssl/
ls
admin-key.pem  ca-key.pem  kube-proxy-key.pem  kubernetes-key.pem
admin.pem      ca.pem      kube-proxy.pem      kubernetes.pem

Copy them to node01 and node02 (make sure the /etc/kubernetes/ssl directory already exists on each target node):

scp *.pem root@node01:/etc/kubernetes/ssl
scp *.pem root@node02:/etc/kubernetes/ssl
Deploy the etcd Cluster: Download the etcd Files

Latest download location: https://kubernetes.io/docs/setup/release/notes/

(the client, server, and node packages)

# Create a temporary directory for the etcd files on all 3 nodes
mkdir -p /root/etcd
cd /root/etcd
# Download the archive on node00
wget https://github.com/coreos/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz
# After downloading, copy it to node01 and node02
scp etcd-v3.3.18-linux-amd64.tar.gz root@node01:/root/etcd
scp etcd-v3.3.18-linux-amd64.tar.gz root@node02:/root/etcd
# On node00, node01, and node02, run the following in /root/etcd
tar -xvf etcd-v3.3.18-linux-amd64.tar.gz
mv etcd-v3.3.18-linux-amd64/etcd* /usr/local/bin

Verify the etcd installation (make sure it succeeds on all three nodes):

etcd --version
etcd Version: 3.4.3
Git SHA: 3c8740a79
Go Version: go1.12.9
Go OS/Arch: linux/amd64

Create the etcd data directory (run on all three nodes)

mkdir -p /var/lib/etcd
Create the etcd systemd unit file

Create the file etcd.service under /usr/lib/systemd/system/ with the content below. Be sure to replace the IP addresses with those of your own etcd cluster hosts.

#node00

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
Restart=always
RestartSec=5s
LimitNOFILE=40000
TimeoutStartSec=0
ExecStart=/usr/local/bin/etcd \
  --name infra1 \
  --data-dir /var/lib/etcd \
  --initial-advertise-peer-urls https://172.21.0.17:2380 \
  --listen-peer-urls https://172.21.0.17:2380 \
  --listen-client-urls https://172.21.0.17:2379 \
  --advertise-client-urls https://172.21.0.17:2379 \
  --initial-cluster-token etcd-cluster \
  --initial-cluster infra1=https://172.21.0.17:2380,infra2=https://172.21.0.2:2380,infra3=https://172.21.0.8:2380 \
  --initial-cluster-state new \
  --client-cert-auth \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-client-cert-auth \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem
[Install]
WantedBy=multi-user.target

#node01

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
Restart=always
RestartSec=5s
LimitNOFILE=40000
TimeoutStartSec=0
ExecStart=/usr/local/bin/etcd \
  --name infra2 \
  --data-dir /var/lib/etcd \
  --initial-advertise-peer-urls https://172.21.0.2:2380 \
  --listen-peer-urls https://172.21.0.2:2380 \
  --listen-client-urls https://172.21.0.2:2379 \
  --advertise-client-urls https://172.21.0.2:2379 \
  --initial-cluster-token etcd-cluster \
  --initial-cluster infra1=https://172.21.0.17:2380,infra2=https://172.21.0.2:2380,infra3=https://172.21.0.8:2380 \
  --initial-cluster-state new \
  --client-cert-auth \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-client-cert-auth \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem
[Install]
WantedBy=multi-user.target

#node02

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
Restart=always
RestartSec=5s
LimitNOFILE=40000
TimeoutStartSec=0
ExecStart=/usr/local/bin/etcd \
  --name infra3 \
  --data-dir /var/lib/etcd \
  --initial-advertise-peer-urls https://172.21.0.8:2380 \
  --listen-peer-urls https://172.21.0.8:2380 \
  --listen-client-urls https://172.21.0.8:2379 \
  --advertise-client-urls https://172.21.0.8:2379 \
  --initial-cluster-token etcd-cluster \
  --initial-cluster infra1=https://172.21.0.17:2380,infra2=https://172.21.0.2:2380,infra3=https://172.21.0.8:2380 \
  --initial-cluster-state new \
  --client-cert-auth \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-client-cert-auth \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem
[Install]
WantedBy=multi-user.target
  • infra1 is this etcd member's name; use a different name on each node
  • etcd's working directory and data directory are both /var/lib/etcd; create this directory before starting the service, otherwise startup fails with "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory";
  • to secure communication, specify etcd's key pair (cert-file and key-file), the peer-communication key pair and CA certificate (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the client CA certificate (trusted-ca-file);
  • the hosts field of the kubernetes-csr.json used to create kubernetes.pem must contain the IPs of all etcd nodes, otherwise certificate validation fails;
  • when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;

Important parameters

  • name: the name of this member.
  • data-dir: the node's data directory, holding the node ID, cluster ID, initial cluster configuration, and snapshot files; WAL files are also stored here unless -wal-dir is specified. A default directory is used if unset.
  • initial-advertise-peer-urls: used by the other members to exchange information with this member, so the address must be reachable from the other members. With static configuration, this value must also appear in the --initial-cluster parameter. The member ID is derived from --initial-cluster-token and --initial-advertise-peer-urls.
  • listen-peer-urls: used on this member's side to listen for traffic from other members. An all-zeros IP means listen on all interfaces.
  • listen-client-urls: used on this member's side to listen for etcd client traffic. An all-zeros IP means listen on all interfaces.
  • advertise-client-urls: used by etcd clients to exchange information with this member; the address must be reachable from the client side.
  • client-cert-auth: enable client certificate authentication.
  • trusted-ca-file: CA file for client certificate authentication.
  • cert-file: certificate for client-facing TLS.
  • key-file: private key for client-facing TLS.
  • peer-client-cert-auth: enable certificate authentication between members.
  • peer-trusted-ca-file: CA file for member-to-member certificate authentication.
  • peer-cert-file: certificate for member-to-member authentication.
  • peer-key-file: private key for member-to-member authentication.
  • initial-cluster-token: distinguishes different clusters; if several clusters run locally, give each a different token.
  • initial-cluster: used on this member's side; describes all nodes in the cluster, and this member uses it to contact the others. The member ID is derived from --initial-cluster-token and --initial-advertise-peer-urls.
  • initial-cluster-state: indicates whether this is a new cluster; the values are new and existing. With existing, the member tries to contact the other members at startup. When first creating the cluster, use new (in testing, the last node also started fine with existing, but the other nodes must not use existing). When a failed member rejoins a running cluster, use existing (in testing, new also happened to work).

Enable and start the etcd service

mv etcd.service /usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
Verify the etcd service

ETCDCTL_API=3 etcdctl --cert=/etc/kubernetes/ssl/kubernetes.pem --key /etc/kubernetes/ssl/kubernetes-key.pem --insecure-skip-tls-verify=true --endpoints=https://192.168.0.201:2379,https://192.168.0.202:2379,https://192.168.0.203:2379 endpoint health
https://192.168.0.201:2379 is healthy: successfully committed proposal: took = 13.87734ms
https://192.168.0.202:2379 is healthy: successfully committed proposal: took = 16.08662ms
https://192.168.0.203:2379 is healthy: successfully committed proposal: took = 15.656404ms
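
Beyond endpoint health, listing the members confirms that all three nodes joined the cluster; a sketch using the same certificates and endpoints as above:

ETCDCTL_API=3 etcdctl --cert=/etc/kubernetes/ssl/kubernetes.pem --key=/etc/kubernetes/ssl/kubernetes-key.pem --insecure-skip-tls-verify=true --endpoints=https://192.168.0.201:2379 member list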
Deploy the Master Nodes

All components must run on all 3 master nodes.

# Create a common directory for the files
mkdir /kube
cd /kube
# Download the kube-apiserver binary
wget https://storage.googleapis.com/kubernetes-release/release/v1.17.1/bin/linux/amd64/kube-apiserver
# Download the kube-scheduler binary
wget https://storage.googleapis.com/kubernetes-release/release/v1.17.1/bin/linux/amd64/kube-scheduler
# Download the kube-controller-manager binary
wget https://storage.googleapis.com/kubernetes-release/release/v1.17.1/bin/linux/amd64/kube-controller-manager
Create the TLS Bootstrapping Token - (not needed for this setup)

Token auth file

The token can be any string containing 128 bits of randomness, generated with a secure random number generator.

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
7dc36cb645fbb422aeb328320673bbe0

BOOTSTRAP_TOKEN is exported below, so the heredoc expands ${BOOTSTRAP_TOKEN} with the generated token.

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

BOOTSTRAP_TOKEN is written into the token.csv file used by kube-apiserver and the bootstrap.kubeconfig file used by kubelet. If you regenerate BOOTSTRAP_TOKEN later, you must:

  1. Update token.csv and distribute it to the /etc/kubernetes/ directory on all machines (masters and nodes); distributing it to the node machines is not strictly required;
  2. Regenerate the bootstrap.kubeconfig file and distribute it to /etc/kubernetes/ on all node machines;
  3. Restart the kube-apiserver and kubelet processes;
  4. Re-approve the kubelet CSR requests;

cp token.csv /etc/kubernetes/
scp token.csv root@node01:/etc/kubernetes
scp token.csv root@node02:/etc/kubernetes

----------------------------- end of the optional section above

kube-apiserver

Prerequisites

  • the three node certificates (used by kubelet, and also by kube-apiserver when it accesses kubelet): node00.pem, node00-key.pem, node01.pem, node01-key.pem, node02.pem, node02-key.pem
  • service-account.pem

Move the kube-apiserver binary from /kube to /usr/local/bin:

mv ~/kube/kube-apiserver /usr/local/bin
cd /usr/local/bin
chmod 755 kube-apiserver 

Content of the service file /usr/lib/systemd/system/kube-apiserver.service:

#node00

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
    --advertise-address=172.21.0.17 \
    --allow-privileged=true \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=/var/log/audit.log \
    --authorization-mode=Node,RBAC \
    --bind-address=0.0.0.0 \
    --client-ca-file=/etc/kubernetes/ssl/ca.pem \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
    --enable-swagger-ui=true \
    --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
    --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
    --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
    --etcd-servers=https://172.21.0.17:2379,https://172.21.0.2:2379,https://172.21.0.8:2379 \
    --event-ttl=1h \
    --insecure-bind-address=127.0.0.1 \
    --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
    --kubelet-client-certificate=/etc/kubernetes/ssl/node00.pem \
    --kubelet-client-key=/etc/kubernetes/ssl/node00-key.pem \
    --kubelet-https=true \
    --service-account-key-file=/etc/kubernetes/ssl/service-account.pem \
    --service-cluster-ip-range=10.254.0.0/16 \
    --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    --v=2
Restart=always
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

#node01

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
    --advertise-address=172.21.0.2 \
    --allow-privileged=true \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=/var/log/audit.log \
    --authorization-mode=Node,RBAC \
    --bind-address=0.0.0.0 \
    --client-ca-file=/etc/kubernetes/ssl/ca.pem \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
    --enable-swagger-ui=true \
    --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
    --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
    --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
    --etcd-servers=https://172.21.0.17:2379,https://172.21.0.2:2379,https://172.21.0.8:2379 \
    --event-ttl=1h \
    --insecure-bind-address=127.0.0.1 \
    --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
    --kubelet-client-certificate=/etc/kubernetes/ssl/node01.pem \
    --kubelet-client-key=/etc/kubernetes/ssl/node01-key.pem \
    --kubelet-https=true \
    --service-account-key-file=/etc/kubernetes/ssl/service-account.pem \
    --service-cluster-ip-range=10.254.0.0/16 \
    --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    --v=2
Restart=always
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

#node02

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
    --advertise-address=172.21.0.8 \
    --allow-privileged=true \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=/var/log/audit.log \
    --authorization-mode=Node,RBAC \
    --bind-address=0.0.0.0 \
    --client-ca-file=/etc/kubernetes/ssl/ca.pem \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
    --enable-swagger-ui=true \
    --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
    --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
    --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
    --etcd-servers=https://172.21.0.17:2379,https://172.21.0.2:2379,https://172.21.0.8:2379 \
    --event-ttl=1h \
    --insecure-bind-address=127.0.0.1 \
    --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
    --kubelet-client-certificate=/etc/kubernetes/ssl/node02.pem \
    --kubelet-client-key=/etc/kubernetes/ssl/node02-key.pem \
    --kubelet-https=true \
    --service-account-key-file=/etc/kubernetes/ssl/service-account.pem \
    --service-cluster-ip-range=10.254.0.0/16 \
    --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    --v=2
Restart=always
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Start

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

Important parameters

https://blog.csdn.net/zhonglinzhang/article/details/90697495 (in Chinese)

  • advertise-address: the IP address the apiserver advertises to cluster members; it must be reachable by the rest of the cluster. If empty, --bind-address is used; if --bind-address is also unspecified, the host's default interface is used.
  • authorization-mode: ordered list of plugins that perform authorization on the secure port. Default: AlwaysAllow. Comma-separated list of: AlwaysAllow, AlwaysDeny, ABAC, Webhook, RBAC, Node.
  • allow-privileged: true allows privileged containers. Default: false.
  • audit-log-maxage / audit-log-maxbackup / audit-log-maxsize / audit-log-path: audit log retention in days, number of rotated files to keep, maximum size in MB before rotation, and the audit log file path.
  • bind-address: the IP address to listen on for the secure port. Must be reachable by the rest of the cluster and by CLI/web clients.
  • tls-cert-file: file containing the default x509 certificate for HTTPS, with any CA certificates concatenated after the server certificate. If HTTPS serving is enabled and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory given by --cert-dir.
  • tls-private-key-file: file containing the default x509 private key matching --tls-cert-file.
  • insecure-bind-address: the address to bind the insecure port to (default 127.0.0.1); this flag will be removed in a future release.
  • client-ca-file: enables client certificate authentication. The referenced file must contain one or more certificate authorities used to validate client certificates presented to this component. When a client certificate is validated, its Common Name is used as the user name of the request.
  • enable-admission-plugins: admission control plugins to enable, in addition to the defaults.
  • enable-swagger-ui: enables the swagger UI.
  • etcd-cafile: SSL certificate authority file securing etcd communication.
  • etcd-certfile: SSL certificate file securing etcd communication.
  • etcd-keyfile: SSL key file securing etcd communication.
  • etcd-servers: comma-separated list of etcd servers (scheme://ip:port).
  • event-ttl: how long to retain events. Default: 1h0m0s.
  • kubelet-certificate-authority: CA certificate bundle used to verify the kubelet's serving certificate.
  • kubelet-client-certificate / kubelet-client-key: client certificate and key the apiserver presents when connecting to kubelets.
  • kubelet-https: use HTTPS for kubelet connections. Default: true.
  • service-account-key-file: file containing a PEM-encoded x509 RSA or ECDSA private or public key, used to verify service account tokens. The file can contain multiple keys, and the flag can be given multiple times with different files. If unspecified, --tls-private-key-file is used. Required when --service-account-signing-key is provided.
  • service-cluster-ip-range: CIDR range from which service cluster IPs are allocated. Must not overlap any IP ranges assigned to pods or nodes (default 10.0.0.0/24).
  • v: log verbosity level.

Install kubectl

// run on all three nodes

cd ~/kube
wget https://storage.googleapis.com/kubernetes-release/release/v1.17.1/bin/linux/amd64/kubectl
mv kubectl /usr/local/bin
chmod 755 /usr/local/bin/kubectl
Create the kubectl kubeconfig file

Run the following in the /etc/kubernetes/ directory:

kubectl config set-cluster kubernetes-training \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.config
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --client-key=/etc/kubernetes/ssl/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.config
kubectl config set-context default \
  --cluster=kubernetes-training \
  --user=admin \
  --kubeconfig=admin.config
kubectl config use-context default --kubeconfig=admin.config
  • The admin.pem certificate's O field is system:masters; the kube-apiserver predefined RoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call the kube-apiserver APIs;
  • Run cp admin.config ~/.kube/config

Note: the ~/.kube/config file grants the highest privileges on this cluster; keep it safe.

Check with kubectl get ns:

kubectl get ns 
NAME              STATUS   AGE
default           Active   4h31m
kube-node-lease   Active   4h32m
kube-public       Active   4h32m
kube-system       Active   4h32m
kube-controller-manager

Prerequisites

  • download the binary
  • prepare the kube-controller-manager certificates
  • prepare kube-controller-manager.config, the kubeconfig file for accessing the api-server
  • on all three nodes

Move the kube-controller-manager binary from /kube to /usr/local/bin:

mv ~/kube/kube-controller-manager /usr/local/bin
cd /usr/local/bin
chmod 755 kube-controller-manager

Content of the service file /usr/lib/systemd/system/kube-controller-manager.service:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=0.0.0.0 \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.config \
  --leader-elect=true \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/service-account-key.pem \
  --service-cluster-ip-range=10.254.0.0/16 \
  --use-service-account-credentials=true \
  --v=2
Restart=always
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

kubeconfig

Run in /etc/kubernetes/:

kubectl config set-cluster kubernetes-training \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.config
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --client-key=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.config
kubectl config set-context default \
  --cluster=kubernetes-training \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.config
kubectl config use-context default --kubeconfig=kube-controller-manager.config

scp kube-controller-manager.config root@node01:/etc/kubernetes/
scp kube-controller-manager.config root@node02:/etc/kubernetes/

Start

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
kubectl get componentstatus
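
With all three master components running, kubectl get componentstatus should report something like the following (the exact formatting varies by version):

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}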

Important parameters

https://www.jianshu.com/p/bdb153daba21 (in Chinese)

  • address: the IP address to serve on (0.0.0.0 for all interfaces).
  • allocate-node-cidrs: whether pod CIDRs should be allocated and assigned to nodes.
  • cluster-cidr: the CIDR range for pods in the cluster.
  • cluster-name: the instance prefix for the cluster.
  • cluster-signing-cert-file: a PEM-encoded file with the X509 CA certificate used to issue certificates within the cluster.
  • cluster-signing-key-file: a PEM-encoded file with the RSA or ECDSA private key used to sign certificates within the cluster.
  • kubeconfig: path to the kubeconfig file with authorization and apiserver location information.
  • leader-elect: start a leader election client and gain leadership before executing the main loop; enable this when running replicated components for high availability.
  • root-ca-file: if set, this root certificate authority is included in the service account token secrets.
  • service-account-private-key-file: a PEM-encoded RSA or ECDSA key file used to sign service account tokens.
  • service-cluster-ip-range: the CIDR range for services in the cluster. Requires --allocate-node-cidrs to be true.
  • use-service-account-credentials: when true, each controller uses a separate service account credential.
  • v: log verbosity level.

kube-scheduler

Prerequisites

  • download the binary
  • prepare the certificates kube-scheduler.pem and kube-scheduler-key.pem
  • prepare the kubeconfig, kube-scheduler.config

Move the kube-scheduler binary from /kube to /usr/local/bin:

mv ~/kube/kube-scheduler /usr/local/bin
cd /usr/local/bin
chmod 755 kube-scheduler

kubeconfig

Run in /etc/kubernetes, on all three nodes:

kubectl config set-cluster kubernetes-training \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.config
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/ssl/kube-scheduler.pem \
  --client-key=/etc/kubernetes/ssl/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.config
kubectl config set-context default \
  --cluster=kubernetes-training \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.config
kubectl config use-context default --kubeconfig=kube-scheduler.config

vi /etc/kubernetes/config/kube-scheduler.yaml

apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-scheduler.config"
leaderElection:
  leaderElect: true

vi /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --config=/etc/kubernetes/config/kube-scheduler.yaml \
  --v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

Start

sudo systemctl daemon-reload
sudo systemctl enable kube-scheduler
sudo systemctl start kube-scheduler
Deploy the Nodes: Install Docker

sudo yum install -y socat conntrack ipset
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable docker
sudo systemctl start docker
Install kubelet and kube-proxy
cd ~/kube
wget --timestamping \
https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz \
  https://storage.googleapis.com/kubernetes-release/release/v1.17.1/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.17.1/bin/linux/amd64/kubelet

Install the binaries

On all three nodes:

cd ~/kube
chmod +x kube-proxy kubelet
sudo mv kube-proxy kubelet /usr/local/bin/
mkdir -p /opt/cni/bin
tar -xvf cni-plugins-linux-amd64-v0.8.5.tgz --directory /opt/cni/bin/
scp cni-plugins-linux-amd64-v0.8.5.tgz root@node01:/root/kube
cd ~/kube
mkdir -p /opt/cni/bin
tar -xvf cni-plugins-linux-amd64-v0.8.5.tgz --directory /opt/cni/bin
scp cni-plugins-linux-amd64-v0.8.5.tgz root@node02:/root/kube
cd ~/kube
mkdir -p /opt/cni/bin
tar -xvf cni-plugins-linux-amd64-v0.8.5.tgz --directory /opt/cni/bin


---------------------------------------------------------
Run the following in the /etc/kubernetes directory:
# node00 
kubectl config set-cluster kubernetes-training \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kubelet.config
kubectl config set-credentials system:node:node00 \
  --client-certificate=/etc/kubernetes/ssl/node00.pem \
  --client-key=/etc/kubernetes/ssl/node00-key.pem \
  --embed-certs=true \
  --kubeconfig=kubelet.config
kubectl config set-context default \
  --cluster=kubernetes-training \
  --user=system:node:node00 \
  --kubeconfig=kubelet.config
kubectl config use-context default --kubeconfig=kubelet.config
# node01
kubectl config set-cluster kubernetes-training \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kubelet.config
kubectl config set-credentials system:node:node01 \
  --client-certificate=/etc/kubernetes/ssl/node01.pem \
  --client-key=/etc/kubernetes/ssl/node01-key.pem \
  --embed-certs=true \
  --kubeconfig=kubelet.config
kubectl config set-context default \
  --cluster=kubernetes-training \
  --user=system:node:node01 \
  --kubeconfig=kubelet.config
kubectl config use-context default --kubeconfig=kubelet.config
# node02
kubectl config set-cluster kubernetes-training \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kubelet.config
kubectl config set-credentials system:node:node02 \
  --client-certificate=/etc/kubernetes/ssl/node02.pem \
  --client-key=/etc/kubernetes/ssl/node02-key.pem \
  --embed-certs=true \
  --kubeconfig=kubelet.config
kubectl config set-context default \
  --cluster=kubernetes-training \
  --user=system:node:node02 \
  --kubeconfig=kubelet.config
kubectl config use-context default --kubeconfig=kubelet.config

/etc/kubernetes/config/kubelet.yaml

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/ssl/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.254.0.10"
runtimeRequestTimeout: "15m"
tlsCertFile: "/etc/kubernetes/ssl/node00.pem"        # use the node01/node02 certificates on the other nodes
tlsPrivateKeyFile: "/etc/kubernetes/ssl/node00-key.pem"

vi /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet \
  --config=/etc/kubernetes/config/kubelet.yaml \
  --image-pull-progress-deadline=2m \
  --kubeconfig=/etc/kubernetes/kubelet.config \
  --pod-infra-container-image=cargo.caicloud.io/caicloud/pause-amd64:3.1 \
  --network-plugin=cni \
  --register-node=true \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin \
  --v=2
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target

Important parameters

  • config: path to the kubelet's KubeletConfiguration file.
  • image-pull-progress-deadline: maximum time an image pull may make no progress before it is cancelled.
  • kubeconfig: path to the kubeconfig file used to connect to the apiserver.
  • pod-infra-container-image: the image for the pause (infra) container that holds each pod's network namespace.
  • network-plugin: the network plugin to use; cni here.
  • register-node: register this node with the apiserver.
  • cni-conf-dir: the directory searched for CNI config files.
  • cni-bin-dir: the directory searched for CNI plugin binaries.
  • v: log verbosity level.

Configure kube-proxy

Run in /etc/kubernetes/:

kubectl config set-cluster kubernetes-training \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-proxy.config
kubectl config set-credentials system:kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.config
kubectl config set-context default \
  --cluster=kubernetes-training \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.config
kubectl config use-context default --kubeconfig=kube-proxy.config

vi /etc/kubernetes/config/kube-proxy-config.yaml

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-proxy.config"
mode: "iptables"
clusterCIDR: "10.244.0.0/16"

vi /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/config/kube-proxy-config.yaml
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
Start kubelet and kube-proxy

sudo systemctl daemon-reload
sudo systemctl enable kubelet kube-proxy
sudo systemctl start kubelet kube-proxy
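
Once kubelet registers with the apiserver, the nodes should show up; a quick sanity check (output is illustrative, and NotReady is expected until the network plugin is installed below):

kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
node00   NotReady   <none>   1m    v1.17.1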

kubelet authorization

Access to the Kubelet API is required for retrieving metrics and logs and for executing commands in containers.

Here the kubelet's --authorization-mode is set to Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization.

Create the system:kube-apiserver-to-kubelet ClusterRole to allow access to the Kubelet API and to perform the most common tasks for managing Pods:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF

The Kubernetes API Server authenticates to the Kubelet with the client certificate defined by the --kubelet-client-certificate flag.

Bind the system:kube-apiserver-to-kubelet ClusterRole to the system:nodes Group (the group carried by the node certificates used here):

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:nodes
EOF
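
Before moving on, you can confirm both objects were created:

kubectl get clusterrole system:kube-apiserver-to-kubelet
kubectl get clusterrolebinding system:kube-apiserver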

Install the flannel network plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the URL is blocked and cannot be reached, save the manifest from the document below to a local file kube-flannel.yml and apply it locally.

https://shimo.im/docs/VWdqDhDg3wWJWqcQ/ ("kube-flannel.yml"; copy the link and open it with the Shimo Docs app or mini program)
Install the kubedns add-on (coredns)

kubectl apply -f https://raw.githubusercontent.com/caicloud/kube-ladder/master/tutorials/resources/coredns.yaml

Create a busybox deployment:

kubectl run busybox --image=busybox:1.28.3 --command -- sleep 3600

List the Pods of the busybox deployment:

kubectl get pods -l run=busybox

The output is:

NAME                      READY   STATUS    RESTARTS   AGE
busybox-d967695b6-29hfh   1/1     Running   0          61s

Get the full name of the busybox Pod:

POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")

Run a DNS lookup inside the busybox Pod:

kubectl exec -ti $POD_NAME -- nslookup kubernetes

The output is:

Server:    10.254.0.10
Address 1: 10.254.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local
HAProxy and KeepAlived

Install haproxy and keepalived on the 3 master nodes.

yum install haproxy
yum install keepalived
KeepAlived

keepalived is built on the VRRP protocol (Virtual Router Redundancy Protocol).

VRRP can be thought of as a protocol for router high availability: several routers providing the same function form a group with one master and multiple backups. The master owns a VIP that serves traffic (the other machines on the LAN use this VIP as their default route) and sends multicast heartbeats. When the backups stop receiving VRRP packets they assume the master is down and elect a new master among themselves according to VRRP priority, keeping the router highly available.

cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
EOF

Configure keepalived on all 3 nodes:

/etc/keepalived/keepalived.conf

vrrp_script haproxy-check {
    script "killall -0 haproxy"
    interval 2
    weight -2
    fall 10
    rise 2
}
 
vrrp_instance haproxy-vip {
    state MASTER
    priority 250
    interface ens33
    virtual_router_id 47
    advert_int 3
 
    unicast_src_ip 192.168.0.201
    unicast_peer {
        192.168.0.202
        192.168.0.203 
    }
 
    virtual_ipaddress {
        192.168.0.210
    }
 
    track_script {
        haproxy-check
    }
}

  • ens33 in "interface ens33" is the network interface name; change it to your own machine's (you can check it with the nmtui tool)
  • unicast_src_ip is the current node's IP, and unicast_peer lists the other two nodes' IPs
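
After keepalived starts, the VIP should be bound on whichever node currently holds the MASTER role; a quick check (replace ens33 with your interface name):

ip addr show ens33 | grep 192.168.0.210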

HAProxy

Once started, HAProxy does three things:

handles incoming client connections

periodically checks the state of the servers (health checks)

exchanges information with other haproxy instances

Handling incoming client connections is by far the most complex job, because the configuration allows so many possibilities, but overall there are 9 steps:

  1. the frontend configuration entity owns the listening sockets; HAProxy accepts client connections on them
  2. the connection is processed according to the frontend's rules: connections may be rejected, some headers may be modified, or the connection may be intercepted to run an internal applet, such as the stats page or the CLI
  3. the backend is the configuration entity that defines the backend servers and the load-balancing rules; after the processing above, the frontend forwards the connection to a backend
  4. the connection is processed according to the backend's rules
  5. the connection is scheduled to a server according to the load-balancing rules
  6. the response data is processed according to the backend's rules
  7. the response data is processed according to the frontend's rules
  8. a log report is emitted to record the request
  9. in HTTP mode, go back to step two and wait for a new request, or close the connection

frontend and backend are sometimes considered half-proxies, because each cares about only one half of an end-to-end connection: the frontend only about the client, the backend only about the server.

HAProxy also supports a full proxy, which works by exactly pairing frontends with backends.

When HAProxy works in HTTP mode, the configuration is split into frontend and backend parts, since any frontend may forward connections to any backend.

When HAProxy works in TCP mode, it is effectively in full proxy mode; splitting the configuration into frontend and backend brings no extra benefit there, though in full proxy mode the configuration file is more readable.

Configure haproxy on all 3 nodes:

/etc/haproxy/haproxy.cfg

cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_nonlocal_bind = 1
EOF

frontend k8s-api
  bind *:8443 # 443 would conflict with ingress
  mode tcp
  option tcplog
  default_backend k8s-api
backend k8s-api
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-api-1 192.168.0.201:6443 check
  server k8s-api-2 192.168.0.202:6443 check
  server k8s-api-3 192.168.0.203:6443 check

Restart keepalived and haproxy

systemctl enable keepalived haproxy
systemctl restart keepalived haproxy
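
To confirm the VIP path works end to end, query the apiserver through haproxy on the VIP (-k skips TLS verification, since the serving certificate may not list the VIP):

curl -k https://192.168.0.210:8443/version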

Change the kube-apiserver address used by the following components from https://127.0.0.1:6443 to the VIP https://192.168.0.210:8443 (the haproxy frontend port):

  • kubectl
  • kube-controller-manager
  • kube-scheduler
  • kubelet
  • kube-proxy

systemctl restart kube-controller-manager kube-scheduler kubelet kube-proxy