Kubernetes Cluster Setup

0. Overview

This guide uses kubeadm to set up a small Kubernetes instance, for learning purposes only. The environment and software are summarized below:
| ~ | Version | Notes |
|---|---|---|
| OS | Ubuntu 18.04 | 192.168.132.152 my.servermaster.local / 192.168.132.154 my.worker01.local |
| Docker | 18.06.1~ce~3-0~ubuntu | Highest version supported by the latest k8s (1.12.3); must be pinned |
| Kubernetes | 1.12.3 | Target software version |
The system and software above are roughly the latest as of 2018. Note that Docker in particular must be installed at a version that k8s supports.
1. Installation Steps

- Disable swap (kubelet requires swap to be off)

```shell
swapoff -a
```
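`swapoff -a` only disables swap until the next reboot; to keep it off, the swap entries in `/etc/fstab` should also be commented out. A minimal sketch, demonstrated on a throwaway copy so the real file is untouched (on a real node, run the `sed` against `/etc/fstab` itself):

```shell
# build a demo fstab with one swap entry (hypothetical contents)
printf '%s\n' 'UUID=abcd / ext4 defaults 0 1' '/swapfile none swap sw 0 0' > /tmp/fstab.demo

# comment out every line that mounts swap; non-swap lines are left alone
sed -i '/ swap / s/^/#/' /tmp/fstab.demo

cat /tmp/fstab.demo
# → UUID=abcd / ext4 defaults 0 1
# → #/swapfile none swap sw 0 0
```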
- Install a container runtime; the default is Docker, so installing Docker is enough

```shell
apt-get install docker-ce=18.06.1~ce~3-0~ubuntu
```

- Install kubeadm. The commands below match the official instructions, with the package source switched to the Aliyun mirror

```shell
apt-get update && apt-get install -y apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
```
2. Creating a Cluster with kubeadm

2.1 Prepare the Images

Because k8s.gcr.io is not reachable from mainland China, the required images must be downloaded in advance. Here they are pulled from the Aliyun image registry and then re-tagged as k8s.gcr.io.
```shell
# a. list the images that need to be downloaded
kubeadm config images list --kubernetes-version=v1.12.3
k8s.gcr.io/kube-apiserver:v1.12.3
k8s.gcr.io/kube-controller-manager:v1.12.3
k8s.gcr.io/kube-scheduler:v1.12.3
k8s.gcr.io/kube-proxy:v1.12.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

# b. create a script that pulls each image, re-tags it, and removes the old tag
vim ./load_images.sh
```

```shell
#!/bin/bash

### config the image map
declare -A images
images["k8s.gcr.io/kube-apiserver:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3"
images["k8s.gcr.io/kube-controller-manager:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3"
images["k8s.gcr.io/kube-scheduler:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3"
images["k8s.gcr.io/kube-proxy:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.3"
images["k8s.gcr.io/pause:3.1"]="registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1"
images["k8s.gcr.io/etcd:3.2.24"]="registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24"
images["k8s.gcr.io/coredns:1.2.2"]="registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2"

### pull each mirror image, re-tag it as k8s.gcr.io, drop the mirror tag
for key in "${!images[@]}"
do
    docker pull ${images[$key]}
    docker tag ${images[$key]} $key
    docker rmi ${images[$key]}
done
```

```shell
# c. run the script to prepare the images
sudo chmod +x load_images.sh
./load_images.sh

# check the result
docker images
```
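The map in load_images.sh is mechanical: each mirror image keeps the same name and tag, just under the `registry.cn-hangzhou.aliyuncs.com/google_containers/` prefix. Assuming that convention holds for every image in the list, the mirror name could be derived instead of written out by hand; a small sketch:

```shell
# derive the Aliyun mirror name from a k8s.gcr.io image name
# (assumption: the mirror keeps the same image name and tag)
mirror_of() {
    echo "registry.cn-hangzhou.aliyuncs.com/google_containers/${1#k8s.gcr.io/}"
}

mirror_of "k8s.gcr.io/kube-proxy:v1.12.3"
# → registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.3
```

With this, the pull/tag/rmi loop could iterate directly over the output of `kubeadm config images list` rather than a hand-maintained associative array.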
2.2 Initialize the Cluster (master)

Initialization requires at least two parameters:
- kubernetes-version: keeps kubeadm from reaching out to the internet to look up the version
- pod-network-cidr: required by the flannel network plugin configuration
```shell
### run the init command
sudo kubeadm init --kubernetes-version=v1.12.3 --pod-network-cidr=10.244.0.0/16

### the tail of the output looks like this
... ...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
```
2.3 Configure kubectl for a Non-admin Account (per the success message)

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Check the node status with the non-root account:

```shell
kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
servermaster   NotReady   master   28m   v1.12.3
```
One master node appears, but its status is NotReady. A decision is needed here:

If you want a single-machine cluster, run the following (it removes the taint that keeps pods off the master):

```shell
kubectl taint nodes --all node-role.kubernetes.io/master-
```

If you want to keep building a multi-node cluster, continue with the steps below; the master's status can be ignored for now.
2.4 Apply the Network Plugin

Review the contents of kube-flannel.yml and copy it to a local file, in case the terminal cannot fetch it remotely:

```shell
kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
```
2.5 Add a Worker Node

To add a worker, repeat [1. Installation Steps] on another server. The worker does not need step 2.1~2.3 or anything after; the basic installation is enough. Once it is done, log in to the new worker node and run the join command printed at the end of the init step:

```shell
kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
... ...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
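The bootstrap token embedded in the join command expires after a while (24 hours by default). If a worker is added later than that, a fresh join command can be generated on the master; a sketch of the operation (it needs a live control plane, so it is shown here as a fragment rather than something runnable standalone):

```shell
# run on the master: creates a new bootstrap token and prints a
# ready-to-use `kubeadm join ...` command for new workers
kubeadm token create --print-join-command
```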
2.6 Check the Cluster (1 master, 1 worker)

```shell
kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
servermaster   Ready    master   94m   v1.12.3
worker01       Ready    <none>   54m   v1.12.3
```
2.7 Create the Dashboard

Copy the contents of kubernetes-dashboard.yaml to a local file, then:

```shell
kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
```

To verify the dashboard is installed, open the worker node's address and port over https in a browser: https://my.worker01.local:30000/#!/login
3. Problems Encountered

- The master is set up and the worker has joined, but `kubectl get nodes` still shows NotReady.
  - Cause: hard to state precisely; it is still an open k8s issue, but the issue discussion makes it fairly clear this is a CNI (Container Network Interface) problem, which the flannel overlay network fixes.
  - Fix: install the flannel plugin (kubectl apply -f kube-flannel.yml).
- A configuration mistake means the cluster has to be rebuilt from scratch.
  - Fix: kubeadm reset
- The dashboard cannot be reached.
  - Cause: Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0"
  - Fix, either of:
    - In kubernetes-dashboard-ce.yaml, change k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0 to registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
    - Download the image in advance and tag it; note it must be present on the worker node. Run `kubectl describe pod kubernetes-dashboard-85477d54d7-wzt7 -n kube-system` to see the details.
4. References

Installing kubeadm:
Creating the cluster: