
K8S v1.20+ Binary Installation (Part 1)

1. Introduction to Kubernetes

1. Evolution of Application Deployment

Application deployment has gone through three main eras:

  • Traditional deployment: in the early days of the internet, applications were deployed directly on physical machines

    Pros: simple, no other technologies required

    Cons: resource boundaries cannot be defined for applications, computing resources are hard to allocate sensibly, and programs easily interfere with one another

  • Virtualized deployment: multiple virtual machines run on one physical machine, each an independent, isolated environment

    Pros: program environments do not interfere with each other, providing a degree of security

    Cons: each VM adds a full operating system, wasting some resources

  • Containerized deployment: similar to virtualization, but the operating system is shared

    Pros:

    each container gets its own filesystem, CPU, memory, process space, and so on

    the resources an application needs are packaged with the container and decoupled from the underlying infrastructure

    containerized applications can be deployed across cloud providers and across Linux distributions

Containerized deployment brings a lot of convenience, but it also raises some problems, for example:

  • If a container crashes, how can another container be started immediately to take its place?
  • When concurrent traffic grows, how can the number of containers be scaled out horizontally?

These container-management problems are collectively called container orchestration, and a number of orchestration tools emerged to solve them:

  • Swarm: Docker's own container orchestration tool
  • Mesos: an Apache tool for unified resource management, used together with Marathon
  • Kubernetes: Google's open-source container orchestration tool

2. Overview of Kubernetes

Kubernetes is a leading distributed-architecture solution built on container technology. It is an open-source version of Borg, the system Google kept strictly secret for over a decade; its first version was released in September 2014, and the first official release followed in July 2015.

In essence, Kubernetes is a cluster of servers that runs specific programs on each node to manage the containers on that node. Its goal is to automate resource management, and it provides the following main features:

  • Self-healing: if a container crashes, a new container is started in about a second to replace it
  • Elastic scaling: the number of running containers in the cluster can be adjusted automatically as needed
  • Service discovery: a service can automatically find the services it depends on
  • Load balancing: if a service runs multiple containers, requests are load-balanced across them automatically
  • Version rollback: if a newly released version turns out to be faulty, it can be rolled back to the previous version immediately
  • Storage orchestration: storage volumes can be created automatically based on a container's needs

3. Kubernetes Components

A Kubernetes cluster consists of control-plane nodes (master) and worker nodes (node), and different components are installed on each kind of node.

master: the cluster's control plane, responsible for cluster decision-making (management)

ApiServer : the single entry point for resource operations; receives user commands and provides authentication, authorization, API registration and discovery, and other mechanisms

Scheduler : responsible for cluster resource scheduling; assigns Pods to nodes according to the configured scheduling policy

ControllerManager : responsible for maintaining cluster state, e.g. deployment rollout, failure detection, automatic scaling, rolling updates

Etcd : responsible for storing information about the cluster's various resource objects

node: the cluster's data plane, responsible for providing the runtime environment for containers (doing the work)

Kubelet : responsible for maintaining container lifecycles, i.e. creating, updating, and destroying containers by controlling docker

KubeProxy : responsible for service discovery and load balancing inside the cluster

Docker : responsible for the various container operations on the node

The deployment of an nginx service illustrates how the Kubernetes components call one another:

  1. First, note that once the Kubernetes environment starts, both master and node save their own information into the etcd database

  2. A request to install an nginx service is first sent to the apiServer component on the master node

  3. The apiServer component calls the scheduler component to decide which node the service should be installed on

    At this point it reads the information about each node from etcd, selects one according to a scheduling algorithm, and reports the result back to the apiServer

  4. The apiServer calls the controller-manager to have the chosen Node install the nginx service

  5. The kubelet on that node receives the instruction and notifies docker, which then starts an nginx pod

    A pod is the smallest operational unit in Kubernetes; containers must run inside pods

  6. The nginx service is now running; to access nginx, kube-proxy provides a proxy to the pod

This way, external users can access the nginx service in the cluster

4. Kubernetes Concepts

Master: a cluster control node; every cluster needs at least one master node, which is responsible for managing and controlling the cluster

Node: a workload node; the master assigns containers to these worker nodes, and docker on each node is responsible for running the containers

Pod: the smallest unit of control in Kubernetes; containers all run inside pods, and one pod can hold one or more containers

Controller: controllers implement pod management, e.g. starting pods, stopping pods, scaling the number of pods, and so on

Service: the unified entry point for a pod-backed service; behind it, it maintains multiple pods of the same class

Label: labels classify pods; pods of the same class carry the same label

NameSpace: namespaces isolate the runtime environments of pods

2. Kubernetes System Initialization

1. Prerequisites

There are currently two main ways to deploy a production Kubernetes cluster:

kubeadm

Kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.

Official documentation: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Binary packages

Download the release binary packages from GitHub and deploy each component by hand to assemble a Kubernetes cluster.

Kubeadm lowers the deployment barrier but hides many details, which makes problems hard to troubleshoot. If you want more control, deploying from binary packages is recommended: although manual deployment is more work, you learn a lot about how everything operates along the way, and it also helps with later maintenance.

2. Installation Requirements

Before starting, the machines for the Kubernetes cluster must meet the following requirements:

  • One or more machines running CentOS 7.x x86_64
  • Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB disk or more
  • Full network connectivity between all machines in the cluster
  • Internet access, needed to pull images
  • Swap disabled

3. Prepare the Environment

Hostname   IP Address      Components
master01   192.168.3.188   apiserver, scheduler, controller-manager, etcd
master02   192.168.3.189   apiserver, scheduler, controller-manager, etcd
node01     192.168.3.199   kubelet, kube-proxy
node02     192.168.3.200   kubelet, kube-proxy
k8s-lb     192.168.3.246   keepalived virtual IP

4. Configure Host Resolution

On all servers:

cat <<EOF >> /etc/hosts
192.168.3.188    master01
192.168.3.189    master02
192.168.3.199    node01
192.168.3.200    node02
192.168.3.246    k8s-lb
EOF

5. Install Base Packages

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum install -y  ntpdate curl  lrzsz wget bash-completion.noarch bash-completion-extras.noarch dos2unix telnet  tree vim

yum install ipvsadm ipset sysstat conntrack libseccomp -y

6. Configure the Firewall

# Either disable the firewall entirely, or open the required ports
systemctl disable --now firewalld 
# master nodes
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=10259/tcp --permanent
firewall-cmd --zone=public --add-port=10257/tcp --permanent
firewall-cmd --reload

# worker nodes
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
firewall-cmd --reload
# See the official port list for details:
https://kubernetes.io/docs/reference/ports-and-protocols/

7. Disable SELinux and Swap

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
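A quick sanity check that both changes took effect (getenforce reports Permissive until the next reboot, Disabled afterwards):

swapon -s                  # should print nothing
free -m | grep -i swap     # Swap total should be 0
getenforce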

8. Synchronize System Time

# Sync the time now; use your own NTP server here if you have one
ntpdate time2.aliyun.com
# Add a cron job
crontab -e
*/5 * * * * ntpdate time2.aliyun.com

9. Tune Linux Limits

ulimit -SHn 65535

vim /etc/security/limits.conf
# Append the following at the end (the hard limit must be >= the soft limit)
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

10. Upgrade the Kernel

# Upgrading to 4.x or later is sufficient
[root@master1 ~]# uname -r
3.10.0-1160.el7.x86_64
# To enable the ELRepo repository on CentOS 7.x, run:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# List the available kernel packages
# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
# Install the latest mainline kernel
 yum --enablerepo=elrepo-kernel install kernel-ml  -y
# View the kernel boot-entry order
#awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
# Make the newly installed kernel the default boot entry
vi /etc/default/grub
GRUB_DEFAULT=0
# Regenerate the GRUB configuration
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# Verify
[root@master1 ~]# uname -r
5.16.1-1.el7.elrepo.x86_64

11. Adjust Kernel Parameters

# Load the IPVS modules
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl enable --now systemd-modules-load.service
# Adjust kernel parameters
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
# Note: tcp_tw_recycle was removed in kernel 4.12+, so this key no longer exists on the upgraded kernel
net.ipv4.tcp_tw_recycle=0
# Forbid swap usage; allow it only when the system OOMs
vm.swappiness=0
# Do not check whether physical memory is sufficient
vm.overcommit_memory=1
# Do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

# Reboot
reboot
# After the reboot, verify the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack

3. Prepare the Packages

# Download the Kubernetes 1.23.x binary package
GitHub download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md
# Download the etcd/etcdctl binary package
GitHub download: https://github.com/etcd-io/etcd/releases
# Download the cfssl binaries
GitHub download: https://github.com/cloudflare/cfssl/releases
# Download the CNI plugins
GitHub download: https://github.com/containernetworking/plugins/releases
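For example, the actual downloads might look like this (a sketch; the version numbers here are assumptions, substitute the releases you want; cfssl is downloaded in a later section):

wget https://dl.k8s.io/v1.23.4/kubernetes-server-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz
wget https://github.com/containernetworking/plugins/releases/download/v1.0.1/cni-plugins-linux-amd64-v1.0.1.tgz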

4. Install Docker

On the node(s):

curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# List all available versions
#yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce

## Create the /etc/docker directory
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://u7vs31xg.mirror.aliyuncs.com"],
   "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Restart and enable the docker service
systemctl restart docker && systemctl enable docker
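Because the kubelet configuration later assumes the systemd cgroup driver, it is worth confirming that Docker picked it up:

docker info | grep -i "cgroup driver"    # expect: Cgroup Driver: systemd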

5. Generate Certificates for the etcd Cluster

1. Install the cfssl Tools

cfssl is an open-source certificate management tool that generates certificates from JSON files; it is more convenient to use than openssl.

# Download and install the binaries
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl*
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
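Confirm the tools are installed and on the PATH:

cfssl version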

Create the CA

mkdir -p /data/work
cd /data/work
# Generate the CA certificate signing request
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Qingdao",
      "L": "Qingdao",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
          "expiry": "87600h"
  }
}
EOF

# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Create the CA signing configuration (its kubernetes profile is used for the etcd and kubernetes certificates below)
cat > ca-config.json << EOF
{
  "signing": {
      "default": {
          "expiry": "87600h"
        },
      "profiles": {
          "kubernetes": {
              "usages": [
                  "signing",
                  "key encipherment",
                  "server auth",
                  "client auth"
              ],
              "expiry": "87600h"
          }
      }
  }
}
EOF


2. Generate the etcd Certificate

Generate the etcd CSR file

# Generate the etcd certificate signing request

cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.3.188",
    "192.168.3.189"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Qingdao",
    "L": "Qingdao",
    "O": "k8s",
    "OU": "system"
  }]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd

6. Install the etcd Cluster

Unpack the installation package

tar xf   etcd-v3.5.2-linux-amd64.tar.gz  
\cp etcd-v3.5.2-linux-amd64/etcd* /usr/local/bin/

Edit the etcd configuration file


cat > etcd.conf << EOF
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.3.188:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.3.188:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.3.188:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.3.188:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.3.188:2380,etcd2=https://192.168.3.189:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Notes:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) communication listen address
ETCD_LISTEN_CLIENT_URLS: client access listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
ETCD_ADVERTISE_CLIENT_URLS: client advertise address
ETCD_INITIAL_CLUSTER: addresses of the cluster nodes
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing when joining an existing one

Create the etcd systemd unit file

cat >/usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-client-cert-auth \
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Install the certificates and configuration

mkdir -p /etc/etcd/ssl
cp ca*.pem /etc/etcd/ssl/
cp etcd*.pem /etc/etcd/ssl/
cp etcd.conf /etc/etcd/

scp -r /etc/etcd master02:/etc
scp /usr/lib/systemd/system/etcd.service master02:/usr/lib/systemd/system/
# Note: on master02, change ETCD_NAME to etcd2 and update the IP addresses
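For reference, the matching /etc/etcd/etcd.conf on master02 would be:

#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.3.189:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.3.189:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.3.189:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.3.189:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.3.188:2380,etcd2=https://192.168.3.189:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"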

Start

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
# Start etcd on both nodes at the same time; with this two-node initial cluster a single node cannot come up on its own.

Check the status


[root@master01 work]# ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.3.188:2379,https://192.168.3.189:2379 endpoint health
# Common commands
./etcdctl --endpoints=ip,ip,ip   endpoint status
./etcdctl --endpoints=ip,ip,ip   endpoint health
./etcdctl --endpoints=ip,ip,ip  endpoint hashkv
./etcdctl --endpoints=ip,ip,ip  check perf
./etcdctl --endpoints=ip,ip,ip  check datascale
./etcdctl --endpoints=ip,ip,ip  member list
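The commands above are shown with bare --endpoints for brevity; against this TLS-enabled cluster they need the same --cacert/--cert/--key flags as the health check. A small convenience alias (an optional sketch):

alias etcdctl="ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  --endpoints=https://192.168.3.188:2379,https://192.168.3.189:2379"
etcdctl member list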

7. Deploy kube-apiserver

tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
\cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

scp  kube-apiserver kube-controller-manager kube-scheduler kubectl master02:/usr/local/bin/
scp  kubelet kube-proxy  node01:/usr/local/bin/
scp  kubelet kube-proxy  node02:/usr/local/bin/


# Create the working directories
[root@master1 work]# mkdir -p /etc/kubernetes/         # kubernetes component config files
[root@master1 work]# mkdir -p /etc/kubernetes/ssl      # kubernetes component certificates
[root@master1 work]# mkdir /var/log/kubernetes         # kubernetes component logs

1. Create the kube-apiserver Certificate

# Generate the certificate signing request
cat > kube-apiserver-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.3.188",
    "192.168.3.189",
    "192.168.3.246",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
   ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Qingdao",
      "L": "Qingdao",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

# Generate the bootstrap token
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

The IPs in the hosts field above must include every Master/LB/VIP IP; not a single one may be missing! To make later scaling easier, you can also list a few reserved IPs.
Because this certificate will be used by the kubernetes master cluster, it must include the IPs of all master nodes, plus the first IP of the service network (generally the first IP of the service-cluster-ip-range passed to kube-apiserver, here 10.255.0.1).
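You can verify that all the SANs made it into the signed certificate with openssl:

openssl x509 -in kube-apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"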

2. Create the Configuration File

vim kube-apiserver.conf

KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.3.188 \
  --secure-port=6443 \
  --advertise-address=192.168.3.188 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.3.188:2379,https://192.168.3.189:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
--logtostderr: log to stderr instead of files
--v: log verbosity level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: cluster advertise address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP address range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to access kubelets
--tls-xxx-file: apiserver https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

3. Create the systemd Unit File


cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF


4. Sync Files to the Other Nodes

[root@master1 work]# \cp ca*.pem /etc/kubernetes/ssl/
[root@master1 work]# \cp kube-apiserver*.pem /etc/kubernetes/ssl/
[root@master1 work]# \cp token.csv /etc/kubernetes/
[root@master1 work]# \cp kube-apiserver.conf /etc/kubernetes/
[root@master1 work]# \cp kube-apiserver.service /usr/lib/systemd/system/

scp -rp  kube-apiserver.service  master02:/usr/lib/systemd/system/
scp -rp /etc/kubernetes/ master02:/etc

Note: on master02, change the IP addresses in the config file to that host's actual local IP.

Background: the TLS Bootstrapping mechanism

TLS Bootstrapping: once TLS authentication is enabled on the Master apiserver, the kubelet and kube-proxy on the Nodes must use valid CA-signed certificates to communicate with kube-apiserver. When there are many Nodes, issuing these client certificates by hand is a lot of work and makes scaling the cluster more complex. To simplify the process, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended on Nodes; currently it is mainly used for the kubelet, while kube-proxy still gets a certificate we issue centrally.

Generating a bootstrap token by hand looks like this:

# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
dfbbade94a5f76a24802f5bc3cdd1b6a

# vim /etc/kubernetes/token.csv
dfbbade94a5f76a24802f5bc3cdd1b6a,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

5. Start and Enable at Boot

# Run on the master01 node
systemctl daemon-reload
systemctl restart kube-apiserver 
systemctl enable kube-apiserver
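A quick check that the apiserver is up and listening on the secure port (an unauthenticated curl returns 401 here because --anonymous-auth=false):

systemctl status kube-apiserver --no-pager
ss -tlnp | grep 6443
curl -k https://192.168.3.188:6443/healthz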

8. Configure HA with HAProxy and Keepalived

yum install keepalived haproxy -y

1. Configure HAProxy on the masters (the configuration is identical on all master nodes)

vim /etc/haproxy/haproxy.cfg 

global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01    192.168.3.188:6443  check
  server k8s-master02    192.168.3.189:6443  check

2. Configure Keepalived

vim /etc/keepalived/keepalived.conf

master01

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.3.188
    virtual_router_id 51
    priority 101
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.3.246
    }
    track_script {
      chk_apiserver 
} }

master02

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
 
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.3.189
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.3.246
    }
    track_script {
      chk_apiserver 
} }

3. Health-Check Script (all master nodes)

cat > /etc/keepalived/check_apiserver.sh << 'EOF'
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF
# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh

4. Start haproxy and keepalived (all master nodes)

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
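Verify that the VIP is bound on the active master and HAProxy is listening on 8443:

ip addr show ens33 | grep 192.168.3.246
ss -tlnp | grep 8443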

9. Deploy kubectl

1. Create the CSR File


[root@master1 work]# cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Qingdao",
      "L": "Qingdao",
      "O": "system:masters",             
      "OU": "system"
    }
  ]
}
EOF

Notes:
kube-apiserver will later use RBAC to authorize requests from clients (such as kubelet, kube-proxy, and Pods);
kube-apiserver predefines some RoleBindings used by RBAC, e.g. cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call all kube-apiserver APIs;
O sets this certificate's Group to system:masters; when a client uses the certificate to access kube-apiserver, authentication succeeds because it is CA-signed, and because the certificate's group is the pre-authorized system:masters, it is granted access to all APIs;
Note:
This admin certificate is used later to generate the administrator's kubeconfig file. Nowadays RBAC is the recommended way to control roles and permissions in Kubernetes, and Kubernetes takes the certificate's CN field as the User and the O field as the Group;
"O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding will fail.

2. Generate the Certificate

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@master1 work]# cp admin*.pem /etc/kubernetes/ssl/

3. Create the kubeconfig File

The kubeconfig is kubectl's configuration file; it contains everything needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate.

Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.246:8443 --kubeconfig=kube.config
Set the client authentication parameters
[root@master1 work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
Set the context parameters
[root@master1 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Set the default context
[root@master1 work]# kubectl config use-context kubernetes --kubeconfig=kube.config
[root@master1 work]# mkdir ~/.kube
[root@master1 work]# cp kube.config ~/.kube/config
Grant the kubernetes user permission to access the kubelet API
[root@master1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

4. Check Cluster Component Status

Once the steps above are complete, kubectl can communicate with kube-apiserver:

[root@master1 work]# kubectl cluster-info
[root@master1 work]# kubectl get componentstatuses
[root@master1 work]# kubectl get all --all-namespaces

Sync the kubectl config file to the other nodes

[root@master1 work]# scp -rp /root/.kube/config master02:/root/.kube/

10. Deploy kube-controller-manager

1. Generate the kube-controller-manager Certificate

[root@master1 work]# vim kube-controller-manager-csr.json
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.3.188",
      "192.168.3.189"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Qingdao",
        "L": "Qingdao",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}

Notes:
The hosts list contains the IPs of all kube-controller-manager nodes;

CN is system:kube-controller-manager and O is system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to work

2. Generate the Certificate

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@master1 work]# ll kube-controller-manager*.pem

Create the kube-controller-manager kubeconfig

Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.246:8443 --kubeconfig=kube-controller-manager.kubeconfig
Set the client authentication parameters
[root@master1 work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
Set the context parameters
[root@master1 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Set the default context
[root@master1 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

3. Create the Configuration File

[root@master1 work]# vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS=" \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"

--kubeconfig: kubeconfig for connecting to the apiserver
--leader-elect: automatic leader election when multiple instances of this component run (HA)
--cluster-signing-cert-file/--cluster-signing-key-file: the CA that automatically signs certificates for kubelets; must match the apiserver's

4. Create the systemd Unit File

[root@master1 work]# vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Sync the files to each node

[root@master1 work]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-controller-manager.kubeconfig /etc/kubernetes/
[root@master1 work]# cp kube-controller-manager.conf /etc/kubernetes/
[root@master1 work]# cp kube-controller-manager.service /usr/lib/systemd/system/
[root@master1 work]# scp -rp kube-controller-manager*.pem master02:/etc/kubernetes/ssl/
[root@master1 work]# scp -rp kube-controller-manager.kubeconfig kube-controller-manager.conf master02:/etc/kubernetes/
[root@master1 work]# scp -rp kube-controller-manager.service master02:/usr/lib/systemd/system/

5. Start the Service

systemctl daemon-reload 
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

11. Deploy kube-scheduler

1. Create the CSR File

[root@master1 work]# vim kube-scheduler-csr.json
{
    "CN": "system:kube-scheduler",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.3.188",
      "192.168.3.189"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Qingdao",
        "L": "Qingdao",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}

Notes:
The hosts list contains the IPs of all kube-scheduler nodes;
CN is system:kube-scheduler and O is system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs to work.

2. Generate the Certificate

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

[root@master1 work]# ll kube-scheduler*.pem

3. Create the kube-scheduler kubeconfig

Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.246:8443 --kubeconfig=kube-scheduler.kubeconfig
Set the client authentication parameters
[root@master1 work]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
Set the context parameters
[root@master1 work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Set the default context
[root@master1 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

4. Create the Configuration File

[root@master1 work]# vim kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"

5. Create the systemd Unit File

[root@master1 work]# vim kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

6. Sync the Files to Each Node

cp kube-scheduler*.pem /etc/kubernetes/ssl/
cp kube-scheduler.kubeconfig /etc/kubernetes/
cp kube-scheduler.conf /etc/kubernetes/
cp kube-scheduler.service /usr/lib/systemd/system/


scp kube-scheduler*.pem master02:/etc/kubernetes/ssl/
scp kube-scheduler.kubeconfig kube-scheduler.conf master02:/etc/kubernetes/
scp kube-scheduler.service master02:/usr/lib/systemd/system/

7. Start the Service

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

12. Authorize Nodes to Request Certificates

# Required before adding nodes; without it the kubelet on the nodes cannot start. This creates a user that is allowed to request certificates.
[root@master1 work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.246:8443 --kubeconfig=kubelet-bootstrap.kubeconfig
Set the client authentication parameters
[root@master1 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
Set the context parameters
[root@master1 work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Set the default context
[root@master1 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Create the role binding
[root@master1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

13. Deploy kubelet on the Nodes

1. Create the Configuration File

Run on the master node

[root@master1 work]# vim kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.3.199",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
# If docker's cgroup driver is systemd, cgroupDriver must be set to systemd. This setting is critical; otherwise the node will fail to join the cluster.

Create the systemd unit file

[root@master1 work]# vim kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--config=/etc/kubernetes/kubelet.json \
--network-plugin=cni \
--pod-infra-container-image=k8s.gcr.io/pause:3.2 \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

Notes:
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image of the container that manages the Pod network

The k8s.gcr.io/pause:3.2 image cannot be pulled directly; fetch it via the Aliyun mirror registry instead:
[root@node01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[root@node01 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
[root@node01 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[root@node01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
[root@node01 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
[root@node01 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
[root@node01 ~]# mkdir -p /etc/kubernetes/ssl

2. Sync the Files to Each Node

[root@master1 work]# for i in node01 node02;do scp -rp kubelet-bootstrap.kubeconfig kubelet.json $i:/etc/kubernetes/;done
[root@master1 work]# for i in node01 node02;do scp -rp ca.pem $i:/etc/kubernetes/ssl/;done
[root@master1 work]# for i in node01 node02;do scp -rp kubelet.service $i:/usr/lib/systemd/system/;done

Note: change the address field in kubelet.json to each node's own IP address. Then start the service, on each worker node:

[root@node1 ~]# mkdir /var/lib/kubelet
[root@node1 ~]# mkdir /var/log/kubernetes
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

Once the kubelet service is confirmed running, approve the bootstrap requests on the master. Running the command below shows the CSR requests sent by the 2 worker nodes:

3. Approve the kubelet Certificate Requests and Join the Cluster

kubectl get csr
kubectl certificate approve <name-of-the-pending-csr>
[root@master1 work]# kubectl certificate approve node-csr-O73Wkk6YcpWMOb0Tmyt_AN2zxn1U5qqc6wlWufIL9Zo
[root@master1 work]# kubectl certificate approve node-csr-hWq-wet8Iqvql6vG2-lz5PeMT1L00XI8__g4tUrPrAs   
[root@master1 work]# kubectl get csr
[root@master1 work]# kubectl get nodes

kubectl delete csr node-csr-ulBg1w4mZCuReB8q1q2Une2BWXtuyl_vUXqu5En   # delete a CSR

If a node cannot join the cluster, delete its generated ssl certificate files and retry.

14. Deploy kube-proxy

1. Create the CSR File

# Create the certificate signing request
cat > kube-proxy-csr.json << EOF
{
	"CN": "system:kube-proxy",
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [{
		"C": "CN",
		"ST": "Qingdao",
		"L": "Qingdao",
		"O": "k8s",
		"OU": "system"
	}]
}
EOF

# Generate the certificate
[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@master1 work]# ls kube-proxy*.pem

2. Generate the kubeconfig File

[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.246:8443 --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

3. Generate kube-proxy.yaml

# Run on node01
cat > kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.3.199
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.0.0.0/16      
healthzBindAddress: 192.168.3.199:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.3.199:10249
mode: "ipvs"
EOF

# The clusterCIDR here must match the network component's CIDR, otherwise deploying the network component will fail

4. Create the systemd Unit File

cat > kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5. Copy to the Other Nodes

[root@master1 work]# for i in node01 node02;do scp kube-proxy.kubeconfig kube-proxy.yaml $i:/etc/kubernetes/;done
[root@master1 work]# for i in node01 node02;do scp kube-proxy.service $i:/usr/lib/systemd/system/;done

Remember to update the IP addresses (bindAddress, healthzBindAddress, metricsBindAddress) in kube-proxy.yaml on each node.

6. Start

# Run on node01 and node02
mkdir -p /var/lib/kube-proxy
systemctl daemon-reload
systemctl restart kube-proxy
systemctl enable kube-proxy
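To confirm kube-proxy is actually running in IPVS mode, the virtual-server table should be populated (ipvsadm was installed during system initialization):

systemctl status kube-proxy --no-pager
ipvsadm -Ln        # should list the cluster Service IPs once Services exist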

15. Install Calico

[root@master1 work]# wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
[root@master1 work]# kubectl apply -f calico.yaml 
[root@master1 work]# kubectl get pods -A
[root@master1 work]# kubectl get nodes

Note: if the pods fail to start, it may be because there is not enough memory.

16. Deploy CoreDNS

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/coredns/coredns.yaml.base
cp  coredns.yaml.base coredns.yaml
Edit the yaml file, replacing the template placeholders with:
kubernetes cluster.local in-addr.arpa ip6.arpa
forward . /etc/resolv.conf
clusterIP: 10.255.0.2 (the clusterDNS address from the kubelet configuration file)
[root@master1 work]# cat coredns.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local  in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.8.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.255.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
[root@master1 work]# kubectl apply -f coredns.yaml 
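Once the CoreDNS pods are Running, cluster DNS can be tested with a throwaway pod (the busybox image tag is an assumption; 1.28 is commonly used because nslookup is broken in some later tags):

kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default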

17. Dashboard

Download

wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

# Change the kubernetes-dashboard Service type
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort  # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30009  # added
  selector:
    k8s-app: kubernetes-dashboard


#kubectl create -f recommended.yaml

Create an admin user

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
  
  
#kubectl apply -f admin.yaml -n kube-system

Access URL:

https://192.168.3.199:30009/

Get the login token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')