Kubernetes Installation Manual (Ubuntu, Non-HA)
阿新 • Published: 2021-08-13
Pre-installation Preparation
1. Configure hosts resolution
Nodes to run on: all nodes (k8s-master, k8s-slave)
- Set the hostname
# On the master node
$ hostnamectl set-hostname k8s-master        # set the master node's hostname
# On the slave1 node
$ hostnamectl set-hostname k8s-worker-node1
# On the slave2 node
$ hostnamectl set-hostname k8s-worker-node2
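This step is titled "hosts resolution", so besides setting the hostnames, every node also needs name-to-IP mappings so the nodes can find each other by name. A minimal sketch, assuming example addresses 192.168.136.138/139/140 (replace with your real node IPs); the entries are staged in a temp file here so the snippet can be dry-run safely, but on real nodes they go into /etc/hosts:

```shell
# Stage the name-to-IP mappings; the IPs below are assumptions for this
# example environment -- replace them with your nodes' real addresses.
hosts_snippet=$(mktemp)
cat <<'EOF' >> "$hosts_snippet"
192.168.136.138 k8s-master
192.168.136.139 k8s-worker-node1
192.168.136.140 k8s-worker-node2
EOF
# Review the staged entries; on a real node, append them with:
#   cat "$hosts_snippet" >> /etc/hosts
cat "$hosts_snippet"
```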
2. Adjust system configuration
Nodes to run on: all master and slave nodes (k8s-master, k8s-slave)
- Configure iptables
$ iptables -P FORWARD ACCEPT
$ /etc/init.d/ufw stop
$ ufw disable
- Disable swap
swapoff -a
# Prevent the swap partition from being mounted automatically at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
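The sed expression above comments out every fstab line that mounts swap. A quick dry-run against a sample fstab line shows exactly what it does:

```shell
# Dry-run of the fstab edit: same sed expression, applied to sample text
sample='/swapfile none swap sw 0 0'
commented=$(echo "$sample" | sed '/ swap / s/^\(.*\)$/#\1/g')
echo "$commented"   # → #/swapfile none swap sw 0 0
```

Lines without the ` swap ` pattern pass through unchanged, so the edit only touches swap mounts.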
- Adjust kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
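It is worth confirming the required keys actually landed in the file before moving on. A small sketch of that check, run here against a local copy of the config so it can be dry-run without root (on a live node you would also query `sysctl net.ipv4.ip_forward` directly):

```shell
# Check that every key the cluster needs is present in the config file;
# a temp copy stands in for /etc/sysctl.d/k8s.conf so this is safe to dry-run.
conf=$(mktemp)
cat <<'EOF' > "$conf"
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
for key in net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.max_map_count; do
    grep -q "^$key" "$conf" && echo "$key: ok" || echo "$key: MISSING"
done
```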
- Configure the apt sources
$ apt-get update && apt-get install -y apt-transport-https ca-certificates software-properties-common
$ curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
$ curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
$ add-apt-repository "deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
$ add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main"
$ apt-get update
# If the previous step fails with a NO_PUBLICKEY error, see https://www.cnblogs.com/jiangzuo/p/13667011.html
3. Install docker
Nodes to run on: all nodes
$ apt-get install docker-ce=5:20.10.8~3-0~ubuntu-bionic
## Start docker and enable it at boot
$ systemctl enable docker && systemctl start docker
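One pitfall worth heading off here: starting with kubeadm 1.21, the generated kubelet configuration defaults to the systemd cgroup driver, while docker defaults to cgroupfs, and a mismatch prevents kubelet from starting after init. A sketch of the usual fix, staged in a temp file so it can be reviewed safely (on a real node the file goes to /etc/docker/daemon.json, followed by a docker restart):

```shell
# Stage a daemon.json that switches docker to the systemd cgroup driver,
# matching the driver kubeadm 1.21 configures for kubelet.
cfg=$(mktemp)
cat <<'EOF' > "$cfg"
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat "$cfg"
# On a real node:
#   cp "$cfg" /etc/docker/daemon.json
#   systemctl restart docker
#   docker info | grep -i 'cgroup driver'   # should report systemd
```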
Deploying Kubernetes
1. Install kubeadm, kubelet, and kubectl
Nodes to run on: all master and slave nodes (k8s-master, k8s-slave)
$ apt-get install kubelet=1.21.1-00 kubectl=1.21.1-00 kubeadm=1.21.1-00
## Check the kubeadm version
$ kubeadm version
## Enable kubelet at boot
$ systemctl enable kubelet
2. Initialize the configuration file
Nodes to run on: only the master node (k8s-master)
$ kubeadm config print init-defaults > kubeadm.yaml
Then edit the generated kubeadm.yaml as indicated by the comments below:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.136.138 # change to the master node's IP
bindPort: 6443
nodeRegistration:
criSocket: /var/run/dockershim.sock
  name: node # delete this line
taints: null
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # change the image repo here
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # add this line
serviceSubnet: 10.96.0.0/12
scheduler: {}
3. Pull the images in advance
Nodes to run on: only the master node (k8s-master)
# Pull the images to the local machine in advance
$ kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.4.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
failed to pull image "registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0": output: Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
The error says the coredns image cannot be found; it can be worked around as follows:
$ docker pull coredns/coredns:1.8.0
$ docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
4. Initialize the master node
Nodes to run on: only the master node (k8s-master)
$ kubeadm init --config kubeadm.yaml
If initialization succeeds, it ends with a message like:
...
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.136.138:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3a7987c9f5007ebac7980e6614281ee0e064c760c8db012471f9f662289cc9ce
Next, follow the instructions above to configure authentication for the kubectl client:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
⚠️ Note: at this point, kubectl get nodes will show the nodes as NotReady, because the network plugin has not been installed yet.
If an error occurs during initialization, fix the problem according to the error message, run kubeadm reset, and then run the init step again.
5. Add the slave nodes to the cluster
Nodes to run on: all slave nodes (k8s-slave)
On each slave node, run the following command. It is printed in the success message of kubeadm init; replace it with the actual command printed by your own init run.
kubeadm join 192.168.136.138:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3a7987c9f5007ebac7980e6614281ee0e064c760c8db012471f9f662289cc9ce
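If the join token has expired (its default TTL is 24h), a fresh join command can be printed on the master with `kubeadm token create --print-join-command`. The --discovery-token-ca-cert-hash value itself is just the SHA-256 of the cluster CA's DER-encoded public key, so it can also be recomputed by hand. A sketch against a throwaway self-signed CA generated on the spot; on a real master you would point openssl at /etc/kubernetes/pki/ca.crt instead:

```shell
# Recompute a kubeadm discovery hash: sha256 over the DER-encoded public key
# of the CA certificate. A throwaway CA stands in for the real
# /etc/kubernetes/pki/ca.crt so the snippet is self-contained.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | sha256sum | awk '{print "sha256:" $1}')
echo "$hash"
```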
6. Install the calico plugin
Nodes to run on: only the master node (k8s-master)
- Install the operator
$ kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
- Wait for the operator pod to start
$ kubectl -n tigera-operator get po
NAME                               READY   STATUS    RESTARTS   AGE
tigera-operator-698876cbb5-kfpb2   1/1     Running   0          38m
If the image pull is slow, you can docker pull the image manually on the node.
- Edit the calico configuration
$ vim custom-resources.yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
name: default
spec:
# Configures Calico networking.
calicoNetwork:
# Note: The ipPools section cannot be modified post-install.
ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16 # change to match the pod CIDR
encapsulation: VXLANCrossSubnet
natOutgoing: Enabled
nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://docs.projectcalico.org/v3.20/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
name: default
spec: {}
- Create the calico configuration
$ kubectl apply -f custom-resources.yaml
- Wait for the operator to create the calico pods
# The operator automatically creates the calico-apiserver and calico-system namespaces
# along with the necessary pods; just wait for the pods to start.
$ kubectl get ns
NAME               STATUS   AGE
calico-apiserver   Active   13m
calico-system      Active   19m
$ kubectl -n calico-apiserver get po
NAME                                READY   STATUS    RESTARTS   AGE
calico-apiserver-554fbf9554-b6kzv   1/1     Running   0          13m
$ kubectl -n calico-system get po
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-868b656ff4-hn6qv   1/1     Running   0          20m
calico-node-qqrp9                          1/1     Running   0          20m
calico-node-r45z2                          1/1     Running   0          20m
calico-typha-5b64cf4b48-vws5j              1/1     Running   0          20m
calico-typha-5b64cf4b48-w6wqf              1/1     Running   0          20m
7. Verify the cluster
Nodes to run on: the master node (k8s-master)
$ kubectl get nodes # check that all cluster nodes are Ready
Create a test nginx service:
$ kubectl run test-nginx --image=nginx:alpine
Check that the pod was created successfully, then access the pod IP to verify that it works:
$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-nginx-5bd8859b98-5nnnw 1/1 Running 0 9s 10.244.1.2 k8s-slave1 <none> <none>
$ curl 10.244.1.2
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
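The "all nodes Ready" check above is easy to script. A sketch that parses the node list, dry-run here against a captured sample so it is self-contained; on the cluster you would feed it `kubectl get nodes --no-headers` instead (the node names below are just the ones used in this guide):

```shell
# Flag any node whose STATUS column is not exactly "Ready"; sample output
# stands in for `kubectl get nodes --no-headers`.
sample='k8s-master         Ready      control-plane,master   10m   v1.21.1
k8s-worker-node1   Ready      <none>                 8m    v1.21.1
k8s-worker-node2   NotReady   <none>                 2m    v1.21.1'
not_ready=$(echo "$sample" | awk '$2 != "Ready" {print $1}')
if [ -z "$not_ready" ]; then
    echo "all nodes Ready"
else
    echo "not ready: $not_ready"   # → not ready: k8s-worker-node2
fi
```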
8. Clean up the environment
If you hit problems during installation that you cannot recover from, you can reset everything with the commands below:
# Run on every node in the cluster
kubeadm reset
# The next four lines clean up flannel artifacts; with calico (as used in this
# guide) the cni0/flannel.1 interfaces may not exist, in which case they can be skipped
ifconfig cni0 down && ip link delete cni0
ifconfig flannel.1 down && ip link delete flannel.1
rm -rf /run/flannel/subnet.env
rm -rf /var/lib/cni/
mv /etc/kubernetes/ /tmp
mv /var/lib/etcd /tmp
mv ~/.kube /tmp
iptables -F
iptables -t nat -F
ipvsadm -C
ip link del kube-ipvs0
ip link del dummy0