
Deploying a Kubernetes Cluster with kubeadm

kubeadm is the cluster-bootstrapping tool that ships with the Kubernetes project. It performs the minimal set of steps needed to build and start a working cluster, and it manages the full cluster lifecycle: deployment, upgrade, downgrade, and teardown. A kubeadm-deployed cluster runs most of its components as pods; for example, kube-proxy, kube-controller-manager, kube-scheduler, kube-apiserver, and flannel all run as pods.
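
Once the cluster built in this walkthrough is running, this can be confirmed on the master: the control-plane components show up as static pods and kube-proxy/flannel as DaemonSet pods in the kube-system namespace.

[root@k8s-master ~]# kubectl get pods -n kube-system -o wide    # lists the kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kube-proxy, coredns and flannel pods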

kubeadm only concerns itself with initializing and starting the cluster. Everything else, such as installing the Kubernetes Dashboard, a monitoring system, a logging system, and other add-ons, is outside its scope and must be deployed by the administrator.

kubeadm provides subcommands such as kubeadm init and kubeadm join. kubeadm init quickly initializes a cluster, its core job being to deploy the Master node components, while kubeadm join quickly adds a node to an existing cluster; together they are the "fast path" best practice for creating a Kubernetes cluster. In addition, kubeadm token manages the authentication tokens used when joining a cluster after it has been built, and kubeadm reset removes the files generated during cluster construction, returning the host to its initial state.
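
As a small illustration, the bootstrap token printed by kubeadm init expires after 24 hours by default; a new token, together with a ready-to-use join command, can be generated on the master at any time:

[root@k8s-master ~]# kubeadm token list                            # show existing bootstrap tokens
[root@k8s-master ~]# kubeadm token create --print-join-command    # create a new token and print the matching kubeadm join command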

kubeadm project repository

kubeadm official documentation

Deploying the Kubernetes cluster with kubeadm
Architecture diagram

Environment planning

OS                  IP             CPU/Mem    Hostname     Role
CentOS 7.4 x86_64   192.168.1.31   2C / 2GB   k8s-master   Master
CentOS 7.4 x86_64   192.168.1.32   2C / 2GB   k8s-node1    Node
CentOS 7.4 x86_64   192.168.1.33   2C / 2GB   k8s-node2    Node

Software   Version
Docker     18.09.7
kubeadm    1.15.2
kubelet    1.15.2
kubectl    1.15.2

Initialize the environment
Note: the initialization steps below must be executed on both the master node and all worker nodes.

1) Disable the firewall

# systemctl stop firewalld
# systemctl disable firewalld

2) Disable SELinux

# sed -i 's/enforcing/disabled/' /etc/selinux/config
# setenforce 0

3) Disable swap if possible. (The servers used here are low-spec, so swap is left enabled in this walkthrough; the swap-related preflight errors are simply ignored later during deployment.)

# swapoff -a        # temporary (until reboot)
# vim /etc/fstab    # permanent: comment out the swap entry
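
If you do want to disable swap permanently without editing the file by hand, a one-line sketch (assuming a standard fstab in which the active swap entry contains the word "swap") is:

# sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab    # comment out the active swap entry
# swapoff -a                                                            # and turn swap off for the running system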

4) Synchronize the time

# ntpdate 0.rhel.pool.ntp.org
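
ntpdate only performs a one-shot synchronization; to keep the clocks aligned you could, for example, schedule it with cron (a sketch; chronyd or ntpd would work just as well, and the ntpdate path may differ on your system):

# crontab -e    # add a line such as the following
*/30 * * * * /usr/sbin/ntpdate 0.rhel.pool.ntp.org >/dev/null 2>&1
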
5) Add /etc/hosts entries

# vim /etc/hosts
192.168.1.31 k8s-master
192.168.1.32 k8s-node1
192.168.1.33 k8s-node2

Install Docker
Run on the master node and all worker nodes.

1) Configure the Docker yum repository (the Aliyun mirror is used here)

# yum -y install yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2) Install Docker

# yum -y install docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io

3) Change the Docker cgroup driver to systemd

According to the CRI installation documentation, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver keeps nodes more stable under resource pressure, so the cgroup driver is changed to systemd on every node.
# mkdir /etc/docker              # the directory does not exist before Docker has been started
# vim /etc/docker/daemon.json    # create the file if it does not exist
{
"exec-opts": ["native.cgroupdriver=systemd"]
}

4) Start Docker

# systemctl restart docker    # start Docker
# systemctl enable docker     # start on boot

# docker info |grep Cgroup
Cgroup Driver: systemd

Install kubeadm
Run on the master node and all worker nodes.

1) Configure the Kubernetes yum repository (the Aliyun mirror is used here)

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# yum makecache

2) Install kubelet, kubectl, and kubeadm

# yum -y install kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2

# rpm -aq kubelet kubectl kubeadm
kubectl-1.15.2-0.x86_64
kubelet-1.15.2-0.x86_64
kubeadm-1.15.2-0.x86_64

3) Enable kubelet to start on boot. Do not start it right after installation: it cannot run properly yet because no cluster has been created (kubeadm init/join will configure and start it later).

# systemctl enable kubelet

Initialize the Master
Note: run on the master node.

As the kubeadm --help output shows, kubeadm init initializes a master node, and kubeadm join then adds a worker node to the cluster.

[root@k8s-master ~]# kubeadm --help
Usage:
  kubeadm [command]

Available Commands:
  alpha       Kubeadm experimental sub-commands
  completion  Output shell completion code for the specified shell (bash or zsh)
  config      Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster
  help        Help about any command
  init        Run this command in order to set up the Kubernetes control plane
  join        Run this on any machine you wish to join an existing cluster
  reset       Run this to revert any changes made to this host by 'kubeadm init' or 'kubeadm join'
  token       Manage bootstrap tokens
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     Print the version of kubeadm

Flags:
  -h, --help                     help for kubeadm
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm [command] --help" for more information about a command.

1) Configure the kubelet to tolerate swap

[root@k8s-master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

2) Initialize the master

--kubernetes-version               # Kubernetes version to deploy
--image-repository                 # kubeadm pulls images from k8s.gcr.io by default, which is unreachable from mainland China, so point it at the Aliyun mirror registry instead
--pod-network-cidr                 # pod network CIDR
--service-cidr                     # service network CIDR
--ignore-preflight-errors=Swap     # ignore the swap preflight error

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.15.2 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.31:6443 --token a4pjca.ubxvfcsry1je626j \
--discovery-token-ca-cert-hash sha256:784922b9100d1ecbba01800e7493f4cba7ae5c414df68234c5da7bca4ef0c581

3) Create the kubeconfig as instructed by the success message above

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
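
Alternatively, when working as root, kubectl can simply be pointed at the admin kubeconfig for the current shell session:

[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf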

[root@k8s-master ~]# docker image ls    # after initialization, the required images have been pulled
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-scheduler v1.15.2 88fa9cb27bd2 2 weeks ago 81.1MB
registry.aliyuncs.com/google_containers/kube-proxy v1.15.2 167bbf6c9338 2 weeks ago 82.4MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.15.2 34a53be6c9a7 2 weeks ago 207MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.15.2 9f5df470155d 2 weeks ago 159MB
registry.aliyuncs.com/google_containers/coredns 1.3.1 eb516548c180 7 months ago 40.3MB
registry.aliyuncs.com/google_containers/etcd 3.3.10 2c4adeb21b4f 8 months ago 258MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 20 months ago 742kB

4) Deploy the flannel network add-on (see the flannel project repository)

Method 1
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# kubectl get pods -n kube-system |grep flannel    # verify that the flannel plugin was deployed successfully (Running means success)

# flannel pulls its image from quay.io by default, which frequently fails from mainland China, so Method 2 below is used instead

Method 2
[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# sed -i 's#quay.io#quay-mirror.qiniu.com#g' kube-flannel.yml    # replace the image registry with a mirror
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml

Join the worker nodes

To add nodes to the cluster, run on each node the kubeadm join command printed by kubeadm init, appending the same flag used earlier to ignore the swap error.

1) Configure the kubelet to tolerate swap

[root@k8s-node1 ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

[root@k8s-node2 ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

2) Join node1

[root@k8s-node1 ~]# kubeadm join 192.168.1.31:6443 --token a4pjca.ubxvfcsry1je626j --discovery-token-ca-cert-hash sha256:784922b9100d1ecbba01800e7493f4cba7ae5c414df68234c5da7bca4ef0c581 --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

3) Join node2

[root@k8s-node2 ~]# kubeadm join 192.168.1.31:6443 --token a4pjca.ubxvfcsry1je626j --discovery-token-ca-cert-hash sha256:784922b9100d1ecbba01800e7493f4cba7ae5c414df68234c5da7bca4ef0c581 --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster status
1) On the master node, run the following to check the node status

[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 9m40s v1.15.2
k8s-node1 NotReady <none> 28s v1.15.2
k8s-node2 NotReady <none> 13s v1.15.2

The key column is STATUS: a node is healthy once it reports Ready. Newly joined nodes show NotReady until the flannel and kube-proxy pods have started on them, after which they switch to Ready.
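
If a node stays NotReady, a quick troubleshooting sketch (run on the master) is to check whether its flannel pod has started and what the node's conditions report:

[root@k8s-master ~]# kubectl get pods -n kube-system -o wide |grep flannel    # one flannel pod should be Running per node
[root@k8s-master ~]# kubectl describe node k8s-node1 |grep -A5 Conditions     # shows why a node is not Ready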

2) Check the client and server version information

[root@k8s-master ~]# kubectl version --short=true
Client Version: v1.15.2
Server Version: v1.15.2

3) View cluster information

[root@k8s-master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.1.31:6443
KubeDNS is running at https://192.168.1.31:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
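
As an additional, optional health check, the status of the scheduler, controller-manager, and etcd can be queried as well:

[root@k8s-master ~]# kubectl get componentstatuses    # short form: kubectl get cs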

4) View the images pulled on each node

Master node:
[root@k8s-master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-apiserver v1.15.2 34a53be6c9a7 2 weeks ago 207MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.15.2 9f5df470155d 2 weeks ago 159MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.15.2 88fa9cb27bd2 2 weeks ago 81.1MB
registry.aliyuncs.com/google_containers/kube-proxy v1.15.2 167bbf6c9338 2 weeks ago 82.4MB
quay-mirror.qiniu.com/coreos/flannel v0.11.0-amd64 ff281650a721 6 months ago 52.6MB
registry.aliyuncs.com/google_containers/coredns 1.3.1 eb516548c180 7 months ago 40.3MB
registry.aliyuncs.com/google_containers/etcd 3.3.10 2c4adeb21b4f 8 months ago 258MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 20 months ago 742kB

node1:
[root@k8s-node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.15.2 167bbf6c9338 2 weeks ago 82.4MB
quay-mirror.qiniu.com/coreos/flannel v0.11.0-amd64 ff281650a721 6 months ago 52.6MB
registry.aliyuncs.com/google_containers/coredns 1.3.1 eb516548c180 7 months ago 40.3MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 20 months ago 742kB

node2:
[root@k8s-node2 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.15.2 167bbf6c9338 2 weeks ago 82.4MB
quay-mirror.qiniu.com/coreos/flannel v0.11.0-amd64 ff281650a721 6 months ago 52.6MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 20 months ago 742kB

Remove a node
When a node fails and has to be removed from the cluster, proceed as follows.

1) Run on the master node

# kubectl drain <NODE-NAME> --delete-local-data --force --ignore-daemonsets
# kubectl delete node <NODE-NAME>

2) Run on the node being removed

# kubeadm reset
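
Note that kubeadm reset does not remove the CNI configuration or the iptables rules created by kube-proxy/flannel. If the node is to be rejoined later, a cleanup sketch (assuming the default flannel setup used above) is:

# rm -rf /etc/cni/net.d $HOME/.kube/config    # leftover CNI configuration and kubeconfig
# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X    # flush rules left behind

A fresh join command for the node can then be generated on the master with kubeadm token create --print-join-command, as shown earlier.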