
KUBERNETES-1-1: Initialization and Cluster Setup

1. Configure local name resolution for master, node1 and node2, stop the firewalld service, and disable it from starting at boot. The later steps involve a lot of network communication between the nodes, and keeping a firewall correctly configured for all of it would be complicated. This step also uses vim, wget and the epel-release package; installing them is straightforward and not covered in detail here.

[root@master ~]# yum install -y wget epel-release vim

[root@master ~]# vim /etc/hosts
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.20.0.128 master.example.com master
172.20.0.129 node1.example.com node1
172.20.0.130 node2.example.com node2
[root@master ~]# systemctl status firewalld
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
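The same name resolution and firewall settings are needed on node1 and node2. Assuming root SSH access between the machines is already available, the hosts file can simply be copied over (and firewalld stopped and disabled on each node in the same way as above):

[root@master ~]# scp /etc/hosts node1:/etc/hosts
[root@master ~]# scp /etc/hosts node2:/etc/hosts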
 

2. Configure the yum repositories for docker-ce and kubernetes. Once the repo files are in place, check that the packages are available with yum repolist, then copy the repo files to node1 and node2 with scp.

[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master yum.repos.d]# vim kubernetes.repo
[root@master yum.repos.d]# cat kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

[root@master yum.repos.d]# yum repolist
[root@master yum.repos.d]# scp docker-ce.repo kubernetes.repo node1:/etc/yum.repos.d/
[root@master yum.repos.d]# scp docker-ce.repo kubernetes.repo node2:/etc/yum.repos.d/

 

3. Install docker-ce, kubelet, kubeadm and kubectl with yum. The packages are signed, so rpm-package-key.gpg is needed for verification. Note that kubeadm only supports specific Docker versions, so pay close attention to version compatibility between the packages.

[root@master ~]# wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@master ~]# rpm --import rpm-package-key.gpg

[root@master ~]# yum install docker-ce kubelet kubeadm kubectl
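If the repositories already carry newer releases than you want, the versions can be pinned on the yum command line. The commands below are only a sketch: the 1.11.1 package versions match the release used in this article, and the --showduplicates listing is how you find a docker-ce build that this kubernetes release supports:

[root@master ~]# yum list docker-ce --showduplicates | sort -r
[root@master ~]# yum install -y kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1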

 

4. Edit the docker.service unit file and add Environment entries (an HTTPS proxy for pulling images and a NO_PROXY list so local traffic bypasses it). Reload the systemd daemon, start the docker service, and check the docker info output.

[root@master ~]# vim /usr/lib/systemd/system/docker.service

[root@master ~]# grep ExecStart -B2 /usr/lib/systemd/system/docker.service
Environment="HTTPS_PROXY=http://206.189.28.51:3128"
Environment="NO_PROXY=127.0.0.0/8,172.20.0.0/16"
ExecStart=/usr/bin/dockerd -H unix://

[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl start docker
[root@master ~]# docker info
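To confirm that the Environment entries were actually picked up after the daemon reload, the unit's environment can be printed directly (standard systemd; the property name is Environment):

[root@master ~]# systemctl show docker --property=Environment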

 

5. A lot of bridged pod traffic will pass through iptables later, so first confirm that bridge netfilter is enabled (both values should be 1).

[root@master ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@master ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
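If either value is 0, it can be enabled persistently through sysctl. This is a minimal sketch; the file name k8s.conf is arbitrary, and on some kernels the br_netfilter module has to be loaded first:

[root@master ~]# modprobe br_netfilter
[root@master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
[root@master ~]# sysctl --system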
 

6. Check the files installed by the kubelet package and the parameter in its service configuration file (empty for now; extra arguments can be passed in later). Enable kubelet and docker at boot, but there is no need to start kubelet yet; starting it before the cluster is initialized may produce errors.

[root@master ~]# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/etc/systemd/system/kubelet.service
/usr/bin/kubelet
[root@master ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=
[root@master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@master ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service
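To see how the KUBELET_EXTRA_ARGS value from /etc/sysconfig/kubelet is wired into the service, the full unit and its drop-ins can be printed; systemctl cat is standard systemd, although the exact drop-in contents vary with the package version:

[root@master ~]# systemctl cat kubelet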


7. Write an executable script that pulls the component images from a domestic mirror registry and re-tags them with the k8s.gcr.io names that kubeadm expects, then initialize kubernetes with kubeadm init. Since kubelet by default refuses to run with swap enabled, swap is turned off (and commented out of /etc/fstab) before the init.

[root@master ~]# vim kubernetes.sh

[root@master ~]# cat kubernetes.sh
#!/bin/bash
# Pull each control-plane and add-on image from the Aliyun mirror registry,
# then re-tag it with the k8s.gcr.io name that kubeadm expects.
images=(kube-proxy-amd64:v1.11.1 kube-scheduler-amd64:v1.11.1 kube-controller-manager-amd64:v1.11.1 kube-apiserver-amd64:v1.11.1
etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9
k8s-dns-dnsmasq-nanny-amd64:1.14.9)
for imageName in "${images[@]}" ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName
  #docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
done
# kubeadm also looks for the pause image under the plain name k8s.gcr.io/pause:3.1;
# da86e6ba6ca1 is the image ID of the pause-amd64:3.1 image pulled above.
docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1

[root@master ~]# chmod +x kubernetes.sh
[root@master ~]# ./kubernetes.sh
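After the script finishes, it is worth confirming that every image now carries a k8s.gcr.io tag that kubeadm will look for; a simple filter over docker images is enough:

[root@master ~]# docker images | grep k8s.gcr.io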

[root@master ~]# swapoff -a
[root@master ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
[root@master ~]# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.20.0.128
 

Note: there is another method circulating online, shown below, but I could not get it to work after several attempts; given the network situation in China, it is not recommended.

Edit the kubelet parameter file and add the "--fail-swap-on=false" argument so that kubelet does not treat enabled swap as a fatal condition, then have kubeadm ignore the corresponding preflight error. Initialize the kubernetes deployment on the master node with kubeadm init. The commands are as follows:

[root@master ~]# vim /etc/sysconfig/kubelet
[root@master ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

[root@master ~]# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

The end of the kubeadm init output contains the join command for the worker nodes, for example:

  kubeadm join 172.20.0.128:6443 --token r7pywg.zedbc48t5uzhgryu --discovery-token-ca-cert-hash sha256:36cd5484b32012e85b5d17268e21480e7ff9617a11df70c3927dd6faf6cbb616
 

8. Create the hidden .kube directory and copy the admin configuration and credentials into it so that kubectl can manage the cluster.

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
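With the config in place, a quick sanity check confirms that kubectl can reach the API server:

[root@master ~]# kubectl cluster-info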
 

9. Check the kubernetes status and deploy flannel (the coreos/flannel repository on GitHub has detailed instructions). The flannel image has to be pulled locally, and you must confirm that the kube-flannel-ds-amd64-xftt2 pod in the kube-system namespace is Running. Only then is the master node fully set up.

[root@master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
[root@master ~]# kubectl get nodes
NAME                 STATUS     ROLES     AGE       VERSION
master.example.com   NotReady   master    1h        v1.11.1
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

[root@master ~]# kubectl get nodes
NAME                 STATUS     ROLES     AGE       VERSION
master.example.com   NotReady   master    1h        v1.11.1
[root@master ~]# docker image pull quay.io/coreos/flannel:v0.10.0-amd64
[root@master ~]# kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
master.example.com   Ready     master    1h        v1.11.1
[root@master ~]# kubectl get pods -n kube-system
NAME                                         READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-5rshr                     1/1       Running   0          1h
coredns-78fcdf6894-sgrjm                     1/1       Running   0          1h
etcd-master.example.com                      1/1       Running   0          1h
kube-apiserver-master.example.com            1/1       Running   0          1h
kube-controller-manager-master.example.com   1/1       Running   0          1h
kube-flannel-ds-amd64-xftt2                  1/1       Running   0          21m
kube-proxy-l259p                             1/1       Running   0          1h
kube-scheduler-master.example.com            1/1       Running   0          1h
[root@master ~]# kubectl get ns
NAME          STATUS    AGE
default       Active    1h
kube-public   Active    1h
kube-system   Active    1h
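If the node stays NotReady, checking the flannel DaemonSet in kube-system shows whether its pod was scheduled and whether the image could be pulled; the app=flannel label below matches the upstream kube-flannel.yml manifest:

[root@master ~]# kubectl -n kube-system get daemonset
[root@master ~]# kubectl -n kube-system describe pod -l app=flannel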

 

10. Join node1 to the cluster. The join command is printed at the very end of kubeadm init, so save it; if it is lost, the token can be looked up with kubeadm token list. (Note: before running this step, prepare node1 the same way as the master node: install the packages, start docker and enable kubelet, download the images and re-tag them, and pull the flannel image; a sketch of this preparation follows below. If any troubleshooting is needed along the way, run kubeadm reset before running kubeadm join again.)
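The following is a minimal sketch of that preparation on node1. It assumes the repo files were already copied over in step 2, that root SSH access to the master is available, and it reuses the same kubernetes.sh image script shown above:

[root@node1 ~]# yum install -y docker-ce kubelet kubeadm kubectl
[root@node1 ~]# systemctl start docker
[root@node1 ~]# systemctl enable docker kubelet
[root@node1 ~]# scp master:/root/kubernetes.sh .
[root@node1 ~]# bash kubernetes.sh
[root@node1 ~]# docker image pull quay.io/coreos/flannel:v0.10.0-amd64
[root@node1 ~]# swapoff -a && sed -i 's/.*swap.*/#&/' /etc/fstab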

[root@node1 ~]# kubeadm join 172.20.0.128:6443 --token r7pywg.zedbc48t5uzhgryu --discovery-token-ca-cert-hash sha256:36cd5484b32012e85b5d17268e21480e7ff9617a11df70c3927dd6faf6cbb616

Note: the following error appeared here; the fix was to run kubeadm reset and then re-run the join command above.
[preflight] Some fatal errors occurred:
    [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
    [ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

 

11. On the master node, confirm that node1 has joined the cluster; the lab task is complete. (To add node2, repeat the same steps on it.)

[root@master ~]# kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
master.example.com   Ready     master    37m       v1.11.1
node1.example.com    Ready     <none>    21s       v1.11.1