
Installing a Kubernetes Cluster, Offline Edition


Why install with kubeadm

kubeadm is a tool released by the official Kubernetes community for quickly deploying a Kubernetes cluster. With it, a cluster can be brought up with just two commands.

Many people online say that a binary installation teaches you the configuration details, but you can inspect those details with a kubeadm installation as well.

kubeadm also generates the certificates automatically, which is a real convenience for beginners.

Network environment

We fully simulate a production environment that cannot reach the public internet.

The base yum repositories are available, but repositories such as docker-ce and kubernetes are not.

Domains such as k8s.gcr.io and quay.io are also unreachable.

Prepare the environment

Unless stated otherwise, the installation and configuration steps must be performed on all master and node machines.

Machine network and configuration

Clone three virtual machines.

Hostname     IP               Node type      Minimum configuration
k8s-master   192.168.18.134   master node    CPU 2 cores, Memory 2 GB
k8s-node1    192.168.18.135   worker node    CPU 2 cores, Memory 2 GB
k8s-node2    192.168.18.136   worker node    CPU 3 cores, Memory 2 GB

The master node needs at least 2 CPUs, otherwise kubeadm reports an error like the following:

error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
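
A quick way to confirm the CPU count on each VM before running kubeadm (either command works):

# Print the number of available CPUs
nproc
grep -c ^processor /proc/cpuinfo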

Disable the firewall

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Disable SELinux

Replace SELINUX=enforcing with SELINUX=disabled:


[root@localhost ~]# sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
[root@localhost ~]# setenforce 0

Check the SELinux status:

[root@localhost ~]# getenforce
Permissive

Disable swap

[root@localhost ~]# swapoff -a
[root@localhost ~]# cp /etc/fstab /etc/fstab_bak
[root@localhost ~]# cat /etc/fstab_bak | grep -v swap > /etc/fstab

grep -v swap selects the lines that do not contain swap.
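
If you prefer to keep the original entry around for reference, an alternative with the same effect is to comment the swap line out instead of filtering it away:

# Comment out any line containing "swap" in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab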

Check the swap status again; swap is now all zeros.


[root@localhost ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           2117         253        1351           9         512        1704
Swap:             0           0           0

Set the hostnames

Set the hostname on the master node:

hostnamectl set-hostname k8s-master

Set the hostname on node1:

hostnamectl set-hostname k8s-node1

Set the hostname on node2:

hostnamectl set-hostname k8s-node2

Verify the hostname on the master:

[root@k8s-master ~]# hostname
k8s-master

Configure /etc/hosts

>> appends the records to the end of the file.

cat >> /etc/hosts <<EOF
192.168.18.134   k8s-master
192.168.18.135   k8s-node1
192.168.18.136   k8s-node2
EOF
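
Optionally, confirm that the names resolve from every node:

for h in k8s-master k8s-node1 k8s-node2; do ping -c 1 $h; done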

Modify sysctl.conf

We do not change anything yet; installing Docker adjusts these settings automatically, so this step can be skipped for now.

If the settings have not been applied, docker info shows the following warnings:

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
cat /proc/sys/net/bridge/bridge-nf-call-iptables
0
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
0

They can be fixed as follows.

# Edit /etc/sysctl.conf
# If the keys already exist, update them in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
# If they are missing, append them
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
sysctl -p

In other words, append the following to the end of /etc/sysctl.conf:

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Then apply the configuration with sysctl -p.
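
One caveat: the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded. If sysctl -p reports them as missing keys, load the module first, for example:

# Load the bridge netfilter module and make it persistent across reboots
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf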

Install Docker

Download Docker

Since the production environment has no internet access, the Docker rpm packages have to be prepared in advance.

We download the required packages on another machine that does have internet access.

Add the Docker yum repository

On the internet-connected machine, download Docker.

Configure the docker-ce repository:

cd /etc/yum.repos.d/
wget https://download.docker.com/linux/centos/docker-ce.repo

Alternatively, use yum-config-manager.

Official repository (slower):

$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

Aliyun mirror:

$ sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Tsinghua University mirror:

$ sudo yum-config-manager \
    --add-repo \
    https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo

List all available Docker versions

[root@k8s-master ~]# yum list docker-ce --showduplicates
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Installed Packages
docker-ce.x86_64                               18.06.3.ce-3.el7                                       @/docker-ce-18.06.3.ce-3.el7.x86_64
Available Packages
docker-ce.x86_64                               17.03.0.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.03.1.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.03.2.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.03.3.ce-1.el7                                       docker-ce-stable
docker-ce.x86_64                               17.06.0.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.06.1.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.06.2.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.09.0.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.09.1.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.12.0.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.12.1.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               18.03.0.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               18.03.1.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               18.06.0.ce-3.el7                                       docker-ce-stable
docker-ce.x86_64                               18.06.1.ce-3.el7                                       docker-ce-stable
docker-ce.x86_64                               18.06.2.ce-3.el7                                       docker-ce-stable
docker-ce.x86_64                               18.06.3.ce-3.el7                                       docker-ce-stable
docker-ce.x86_64                               3:18.09.0-3.el7                                        docker-ce-stable
...

We choose to install docker-ce-18.06.3.ce-3.el7.

Download it:

yum install --downloadonly --downloaddir ~/k8s/docker docker-ce-18.06.3.ce-3.el7

Docker and its dependencies are downloaded into the ~/k8s/docker directory.

Note that only docker-ce comes from the docker-ce-stable repository.

Dependencies Resolved

================================================================================================================================================================
 Package                                    Arch                       Version                                       Repository                            Size
================================================================================================================================================================
Installing:
 docker-ce                                  x86_64                     18.06.3.ce-3.el7                              docker-ce-stable                      41 M
Installing for dependencies:
 audit-libs-python                          x86_64                     2.8.5-4.el7                                   base                                  76 k
 checkpolicy                                x86_64                     2.5-8.el7                                     base                                 295 k
 container-selinux                          noarch                     2:2.119.2-1.911c772.el7_8                     extras                                40 k
 libcgroup                                  x86_64                     0.41-21.el7                                   base                                  66 k
 libsemanage-python                         x86_64                     2.5-14.el7                                    base                                 113 k
 policycoreutils-python                     x86_64                     2.5-34.el7                                    base                                 457 k
 python-IPy                                 noarch                     0.75-6.el7                                    base                                  32 k
 setools-libs                               x86_64                     3.3.8-4.el7                                   base                                 620 k

So we only need to copy docker-ce-18.06.3.ce-3.el7.x86_64.rpm to the master and node machines.

On the master and node machines, create the ~/k8s/docker directory to hold the Docker rpm package:

mkdir -p ~/k8s/docker

Copy to the k8s cluster

Copy it with scp:

scp docker-ce-18.06.3.ce-3.el7.x86_64.rpm root@192.168.18.135:~/k8s/docker/
scp docker-ce-18.06.3.ce-3.el7.x86_64.rpm root@192.168.18.136:~/k8s/docker/

Of course, instead of copying, you can also follow the steps above and run the download command on the node machines directly.

Install Docker

Install the local rpm with yum:

yum install k8s/docker/docker-ce-18.06.3.ce-3.el7.x86_64.rpm

Enable Docker at boot:

systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

We can check which files the package actually installs:

rpm -ql docker-ce

Or:

rpm -qpl k8s/docker/docker-ce-18.06.3.ce-3.el7.x86_64.rpm

Start Docker

systemctl start docker

Check the Docker service information:

docker info
...
Cgroup Driver: cgroupfs
...

We will need to change this value in a moment.

Install the k8s components

kubeadm depends on kubelet and kubectl, so downloading the kubeadm rpm pulls its dependencies in automatically. However, the dependency versions may not be the ones we want, so they may need to be downloaded separately; for example, downloading kubeadm-1.15.6 may pull in kubelet 1.16.x as a dependency.

Download the k8s components

We need to install kubeadm, kubelet, and kubectl, and their versions must match. As with Docker above, download the components on the internet-connected machine.

Add the Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF

List the available kubeadm versions

yum list kubeadm --showduplicates
...
kubeadm.x86_64                                                         1.15.6-0     
...     

There are many versions; we pick 1.15.6-0.

Download:

yum install --downloadonly --downloaddir ~/k8s/kubernetes kubeadm-1.15.6

yum resolves the following dependencies:

====================================================================================================================================================
 Package                                    Arch                       Version                                 Repository                     Size
====================================================================================================================================================
Installing:
 kubeadm                                    x86_64                     1.15.6-0                                kubernetes                     8.9 M
Installing for dependencies:
 conntrack-tools                            x86_64                     1.4.4-5.el7_7.2                         updates                        187 k
 cri-tools                                  x86_64                     1.13.0-0                                kubernetes                     5.1 M
 kubectl                                    x86_64                     1.16.3-0                                kubernetes                      10 M
 kubelet                                    x86_64                     1.16.3-0                                kubernetes                      22 M
 kubernetes-cni                             x86_64                     0.7.5-0                                 kubernetes                      10 M
 libnetfilter_cthelper                      x86_64                     1.0.0-10.el7_7.1                        updates                         18 k
 libnetfilter_cttimeout                     x86_64                     1.0.0-6.el7_7.1                         updates                         18 k
 libnetfilter_queue                         x86_64                     1.0.2-2.el7_2                           base                            23 k
 socat                                      x86_64                     1.7.3.2-2.el7                           base                           290 k

We only need to copy kubeadm and its four dependencies from the kubernetes repository (cri-tools, kubectl, kubelet, kubernetes-cni) to the master and node machines.

Download kubelet-1.15.6:

yum install --downloadonly --downloaddir ~/k8s/kubernetes kubelet-1.15.6

Download kubectl-1.15.6:

yum install --downloadonly --downloaddir ~/k8s/kubernetes kubectl-1.15.6

Copy to the k8s cluster

On the master and node machines, create the ~/k8s/kubernetes directory to hold the k8s component rpm packages:

mkdir -p ~/k8s/kubernetes

A minimal copy-script sketch is shown below; if that feels like too much trouble, you can simply run the download commands above on the node machines as well.
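
A minimal sketch of such a copy loop, assuming the node IPs from the table above and that ~/k8s/kubernetes already exists on the nodes:

# Copy all downloaded rpms to both worker nodes
for node in 192.168.18.135 192.168.18.136; do
    scp ~/k8s/kubernetes/*.rpm root@${node}:~/k8s/kubernetes/
done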

Install the k8s components

yum install ~/k8s/kubernetes/*.rpm
--> Finished Dependency Resolution
Error: Multilib version problems found. This often means that the root
      cause is something else and multilib version checking is just
      pointing out that there is a problem. Eg.:

        1. You have an upgrade for kubectl which is missing some
           dependency that another package requires. Yum is trying to
           solve this by installing an older version of kubectl of the
           different architecture. If you exclude the bad architecture
           yum will tell you what the root cause is (which package
           requires what). You can try redoing the upgrade with
           --exclude kubectl.otherarch ... this should give you an error
           message showing the root cause of the problem.

        2. You have multiple architectures of kubectl installed, but
           yum can only see an upgrade for one of those architectures.
           If you don't want/need both architectures anymore then you
           can remove the one with the missing update and everything
           will work.

        3. You have duplicate versions of kubectl installed already.
           You can use "yum check" to get yum show these errors.

      ...you can also use --setopt=protected_multilib=false to remove
      this checking, however this is almost never the correct thing to
      do as something else is very likely to go wrong (often causing
      much more problems).

      Protected multilib versions: kubectl-1.23.5-0.x86_64 != kubectl-1.15.6-0.x86_64
Error: Protected multilib versions: kubelet-1.15.6-0.x86_64 != kubelet-1.23.5-0.x86_64

The installation failed. We explicitly asked for 1.15, so where did 1.23 come from? Most likely, when yum resolved kubeadm's kubelet and kubectl dependencies it also downloaded the newest versions available in the repository, alongside the 1.15.6 packages we requested explicitly.

Let's look inside ~/k8s/kubernetes:

[root@localhost docker]# cd ~/k8s/kubernetes
[root@localhost kubernetes]# ll
total 98512
-rw-r--r--. 1 root root  7401938 3月  18 06:26 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm
-rw-r--r--. 1 root root  9920490 1月   4 2021 5181c2b7eee876b8ce205f0eca87db2b3d00ffd46d541882620cb05b738d7a80-kubectl-1.15.6-0.x86_64.rpm
-rw-r--r--. 1 root root  9294306 1月   4 2021 62cd53776f5e5d531971b8ba4aac5c9524ca95d2bb87e83996cf3f54873211e5-kubeadm-1.15.6-0.x86_64.rpm
-rw-r--r--. 1 root root  9921646 3月  18 06:33 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm
-rw-r--r--. 1 root root   191000 4月   4 2020 conntrack-tools-1.4.4-7.el7.x86_64.rpm
-rw-r--r--. 1 root root 21546750 3月  18 06:38 d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm
-rw-r--r--. 1 root root 19487362 1月   4 2021 db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm
-rw-r--r--. 1 root root 22728902 1月   4 2021 e9e7cc53edd19d0ceb654d1bde95ec79f89d26de91d33af425ffe8464582b36e-kubelet-1.15.6-0.x86_64.rpm
-rw-r--r--. 1 root root    18400 4月   4 2020 libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
-rw-r--r--. 1 root root    18212 4月   4 2020 libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
-rw-r--r--. 1 root root    23584 8月  11 2017 libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
-rw-r--r--. 1 root root   296632 8月  11 2017 socat-1.7.3.2-2.el7.x86_64.rpm

There are indeed two 1.23 packages. The fix is simply to delete them:

rm -f 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm
rm -f d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm

After deleting them, run the installation command above again.

With that, kubeadm, kubectl, and kubelet are installed.

Enable kubelet at boot. There is no need to start kubelet now; even if we tried, it would fail. Only after running kubeadm, which generates the required configuration files, will kubelet be able to start successfully.

systemctl enable kubelet

Pull the images

Running kubeadm requires a set of images that we need to prepare in advance.

Check which images are required


[root@localhost kubernetes]# kubeadm config images list
I0410 16:34:41.007521   20037 version.go:248] remote version is much newer: v1.23.5; falling back to: stable-1.15
k8s.gcr.io/kube-apiserver:v1.15.12
k8s.gcr.io/kube-controller-manager:v1.15.12
k8s.gcr.io/kube-scheduler:v1.15.12
k8s.gcr.io/kube-proxy:v1.15.12
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
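
The first log line shows kubeadm falling back to stable-1.15 because it cannot check the remote version, which is why the list shows v1.15.12 images. Since we will initialize the cluster with v1.15.6, you can pin the version explicitly so the list matches the images you actually need:

kubeadm config images list --kubernetes-version v1.15.6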

In the production environment, k8s.gcr.io is certainly unreachable, and it cannot be reached from machines in mainland China with internet access either. So we first need to pull the images from an accessible mirror.

The workaround is quite simple: search for the images with docker search.

[root@localhost kubernetes]# docker search kube-apiserver
NAME                                    DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
aiotceo/kube-apiserver                  k8s.gcr.io/kube-apiserver                       20
mirrorgooglecontainers/kube-apiserver                                                   19
kubesphere/kube-apiserver                                                               7
kubeimage/kube-apiserver-amd64          k8s.gcr.io/kube-apiserver-amd64                 5
empiregeneral/kube-apiserver-amd64      kube-apiserver-amd64                            4                                       [OK]
graytshirt/kube-apiserver               Alpine with the kube-apiserver binary           2
k8simage/kube-apiserver                                                                 1
docker/desktop-kubernetes-apiserver     Mirror of selected tags from k8s.gcr.io/kube…   1
cjk2atmb/kube-apiserver                                                                 0
kope/kube-apiserver-healthcheck                                                         0
forging2012/kube-apiserver                                                              0
ramencloud/kube-apiserver               k8s.gcr.io/kube-apiserver                       0
lbbi/kube-apiserver                     k8s.gcr.io                                      0
v5cn/kube-apiserver                                                                     0
cangyin/kube-apiserver                                                                  0
mesosphere/kube-apiserver-amd64                                                         0
boy530/kube-apiserver                                                                   0
ggangelo/kube-apiserver                                                                 0
opsdockerimage/kube-apiserver                                                           0
mesosphere/kube-apiserver                                                               0
lchdzh/kube-apiserver                   kubernetes原版基礎映象,Registry為k8s.gcr.io            0
willdockerhub/kube-apiserver                                                            0
woshitiancai/kube-apiserver                                                             0
k8smx/kube-apiserver                                                                    0
rancher/kube-apiserver                                                                  0

There are plenty of candidates; generally pick one with more STARS. I went with aiotceo/kube-apiserver.

Pull the following images on all three machines:

docker pull aiotceo/kube-apiserver:v1.15.6
docker pull aiotceo/kube-controller-manager:v1.15.6
docker pull aiotceo/kube-scheduler:v1.15.6
docker pull aiotceo/kube-proxy:v1.15.6
docker pull aiotceo/pause:3.1
docker pull aiotceo/etcd:3.3.10
docker pull aiotceo/coredns:1.3.1
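
If the cluster machines cannot reach Docker Hub at all, a fully offline alternative is to pull the images once on the connected machine, export them with docker save, copy the archive to each node, and import it with docker load. A sketch based on the image list above:

# On the connected machine: bundle all required images into one archive
docker save \
    aiotceo/kube-apiserver:v1.15.6 \
    aiotceo/kube-controller-manager:v1.15.6 \
    aiotceo/kube-scheduler:v1.15.6 \
    aiotceo/kube-proxy:v1.15.6 \
    aiotceo/pause:3.1 \
    aiotceo/etcd:3.3.10 \
    aiotceo/coredns:1.3.1 \
    -o k8s-v1.15.6-images.tar

# On every cluster node, after copying the archive over:
docker load -i k8s-v1.15.6-images.tar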

Check the pulled images:

[root@localhost kubernetes]# docker images
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
aiotceo/kube-proxy                v1.15.6             d756327a2327        2 years ago         82.4MB
aiotceo/kube-apiserver            v1.15.6             9f612b9e9bbf        2 years ago         207MB
aiotceo/kube-controller-manager   v1.15.6             83ab61bd43ad        2 years ago         159MB
aiotceo/kube-scheduler            v1.15.6             502e54938456        2 years ago         81.1MB
aiotceo/coredns                   1.3.1               eb516548c180        3 years ago         40.3MB
aiotceo/etcd                      3.3.10              2c4adeb21b4f        3 years ago         258MB
aiotceo/pause                     3.1                 da86e6ba6ca1        4 years ago         742kB

Tag the images

So that kubeadm can find the images under k8s.gcr.io, re-tag the images we just downloaded:

docker images | grep aiotceo | sed 's/aiotceo/k8s.gcr.io/' | awk '{print "docker tag " $3 " " $1 ":" $2}' | sh

Remove the old tags; that said, keeping them would not take up much extra space either, since the re-tagged images share the same layers.

docker images | grep aiotceo | awk '{print "docker rmi " $1 ":" $2}' | sh

Check the images:

REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.15.6             d756327a2327        2 years ago         82.4MB
k8s.gcr.io/kube-apiserver            v1.15.6             9f612b9e9bbf        2 years ago         207MB
k8s.gcr.io/kube-controller-manager   v1.15.6             83ab61bd43ad        2 years ago         159MB
k8s.gcr.io/kube-scheduler            v1.15.6             502e54938456        2 years ago         81.1MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        3 years ago         40.3MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        3 years ago         258MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        4 years ago         742kB

The images are ready.

Deploy the k8s cluster

Initialize the master node

Run kubeadm init on the master node.

If you use the flannel network plugin, the --pod-network-cidr=10.244.0.0/16 parameter must be set; this CIDR is fixed because it has to match flannel's default configuration.

If you do not use flannel, the parameter is not required. Because of network issues, I did not end up using flannel.

kubeadm init --kubernetes-version=v1.15.6 \
    --apiserver-advertise-address=192.168.18.134 \
    --pod-network-cidr=10.244.0.0/16

Fix the WARNING

The output above contains this line:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

Remember that earlier, when looking at docker info, I mentioned the cgroup driver would need to be changed? Now is the time.

Edit or create /etc/docker/daemon.json and add the following:

{
	"exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker:

systemctl restart docker

Check the result; if Cgroup Driver now shows systemd, the change took effect.

docker info
...
Cgroup Driver: systemd
...

Reset

Since kubeadm init has already been run once, reset the node before initializing it again:

kubeadm reset

[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1207 22:12:18.285935   27649 reset.go:98] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://172.16.64.233:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 172.16.64.233:6443: connect: connection refused
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1207 22:12:19.569005   27649 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Initialize the master node again

The --apiserver-advertise-address and --pod-network-cidr parameters could both be omitted this time.

kubeadm init --kubernetes-version=v1.15.6 \
--apiserver-advertise-address=192.168.18.134 \
--pod-network-cidr=10.244.0.0/16


[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.18.134]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.18.134 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.18.134 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 33.002499 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 8y4nd8.ww9f2npklyebtjqp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.18.134:6443 --token 8y4nd8.ww9f2npklyebtjqp \
    --discovery-token-ca-cert-hash sha256:c5f01fe144020785cb82b53bcda3b64c2fb8d955af3ca863b8c31d9980c32023

The output is the same as during the first initialization, just without the earlier WARNING.

As instructed, run the following commands. We are logged in as root, so sudo is not strictly necessary.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the node information; the node status is NotReady, which is expected until a network plugin is installed:

kubectl get no
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   2m22s   v1.15.6

Join the worker nodes to the cluster

On node1 (and likewise on node2), run the command printed above:

kubeadm join 192.168.18.134:6443 --token 8y4nd8.ww9f2npklyebtjqp \
    --discovery-token-ca-cert-hash sha256:c5f01fe144020785cb82b53bcda3b64c2fb8d955af3ca863b8c31d9980c32023

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

On the master node (control plane), check the node information:

kubectl get no
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   7m    v1.15.6
k8s-node1    NotReady   <none>   65s   v1.15.6
k8s-node2    NotReady   <none>   65s   v1.15.6

The new nodes have joined, although everything is still in the NotReady state.

Joining a node after the token expires

If you join a node some time later, kubeadm will report that the token has expired. A new token and the CA certificate hash can be obtained like this:

kubeadm token create
kubeadm token list
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
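
kubeadm can also print the complete join command together with a fresh token in one step:

kubeadm token create --print-join-command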

Install a network plugin

Install the flannel network plugin (if your network allows it).

Find the installation instructions

See the flannel project page at https://github.com/coreos/flannel for the installation instructions.

For Kubernetes v1.7+ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Download the yml file

On the internet-connected machine, download the kube-flannel.yml file:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Distribute the downloaded yml file to the three machines in the k8s cluster.

Download the image

cat kube-flannel.yml | grep image
        image: quay.io/coreos/flannel:v0.11.0-amd64
        ...

Remember the mirror-and-retag trick from earlier? If not, scroll back up and review it.

docker pull quay.azk8s.cn/coreos/flannel:v0.11.0-amd64
docker tag ff281650a721 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay.azk8s.cn/coreos/flannel:v0.11.0-amd64

Install flannel

Alternatively, the Calico network plugin can be used instead.

Run on the master node:

kubectl apply -f kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

If the network does not cooperate, Weave Net is an alternative:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

"kubeadm config print init-defaults"這個命令可以告訴我們kubeadm.yaml版本資訊。

Check the node information

[root@k8s-master ~]# kubectl get no
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   5h46m   v1.15.6
k8s-node1    Ready    <none>   5h41m   v1.15.6
k8s-node2    Ready    <none>   5h38m   v1.15.6

Now all of the nodes are Ready.

Check the processes

Master node

[root@k8s-master ~]# ps -ef | grep kube
root       1674      1  1 14:17 ?        00:02:55 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1
root       2410   2393  1 14:17 ?        00:02:24 etcd --advertise-client-urls=https://192.168.18.134:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://192.168.18.134:2380 --initial-cluster=k8s-master=https://192.168.18.134:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.18.134:2379 --listen-peer-urls=https://192.168.18.134:2380 --name=k8s-master --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
root       2539   2520  3 14:18 ?        00:04:58 kube-apiserver --advertise-address=192.168.18.134 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root       2822   2802  0 14:18 ?        00:00:05 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8s-master
root       3382   2994  0 14:18 ?        00:00:01 /home/weave/kube-utils -run-reclaim-daemon -node-name=k8s-master -peer-name=da:f9:bb:91:b9:c4 -log-level=debug
root      19885  19841  2 14:55 ?        00:02:25 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --node-cidr-mask-size=24 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --use-service-account-credentials=true
root      19894  19866  0 14:55 ?        00:00:10 kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
root      71218  19968  0 16:55 pts/1    00:00:00 grep --color=auto kube

Worker node

[root@k8s-node1 ~]# ps -ef | grep kube
root       5013      1  1 14:24 ?        00:02:08 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1
root       5225   5206  0 14:24 ?        00:00:07 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8s-node1
root       5765   5517  0 14:24 ?        00:00:01 /home/weave/kube-utils -run-reclaim-daemon -node-name=k8s-node1 -peer-name=a2:4e:07:10:2c:21 -log-level=debug
root      15767   8087  0 16:56 pts/1    00:00:00 grep --color=auto kube

Test the k8s cluster

Deploy an nginx instance.

Create a Deployment

On the master node (control plane), create a Deployment named nginx-deployment:

kubectl create deploy nginx-deployment --image=nginx
deployment.apps/nginx-deployment created

Check the Deployment status:

[root@k8s-master ~]# kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           119m

Check the pod status:

[root@k8s-master ~]# kubectl get po
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6f77f65499-tnztr   1/1     Running   0          120m

If STATUS is not Running, the image pull is probably just slow; configuring a Docker registry mirror helps.

Configure a Docker registry mirror

A production environment would normally have an internal registry mirror; here I simulate that by pointing Docker at a public mirror (the NetEase mirror in the config below).

/etc/docker/daemon.json now looks like this:

{
	"exec-opts": ["native.cgroupdriver=systemd"],
	"registry-mirrors": ["http://hub-mirror.c.163.com"]
}

Restart Docker:

systemctl restart docker

Now the image pulls without trouble.
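
If the pod is still stuck (for example in ImagePullBackOff) after the mirror change, deleting it lets the Deployment recreate it and retry the pull; kubectl create deploy labels its pods with app=<deployment name>, so one way is:

kubectl delete pod -l app=nginx-deployment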

Test the pod

Check the Deployment and pod again; the pod has reached the READY state.

NAME                                READY   STATUS    RESTARTS   AGE    IP          NODE        NOMINATED NODE   READINESS GATES
nginx-deployment-6f77f65499-tnztr   1/1     Running   0          122m   10.46.0.1   k8s-node1   <none>           <none>

The pod's IP is 10.46.0.1.

nginx can be reached successfully from any of the three nodes in the cluster:

[root@k8s-node1 ~]# curl 10.46.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Create a Service

Expose the Deployment:

kubectl expose deploy nginx-deployment --port=80 --type=NodePort
service/nginx-deployment exposed

Check the status:

[root@k8s-master ~]# kubectl get svc
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP        5h54m
nginx-deployment   NodePort    10.111.68.248   <none>        80:31923/TCP   122m

Access nginx from the three nodes:

[root@k8s-node1 ~]# curl 10.111.68.248
...
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
...

Access nginx from outside the cluster:

curl 192.168.18.134:31923

...
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
...
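
When you are done testing, the Service and Deployment can be cleaned up:

kubectl delete svc nginx-deployment
kubectl delete deploy nginx-deployment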