Deploying a Kubernetes v1.18.5 Cluster with kubeadm
Adapted from: https://www.cnblogs.com/hellxz/p/use-kubeadm-init-kubernetes-cluster.html, fully verified in practice; thanks to @hellxz.
Overview
These notes cover setting up a Kubernetes v1.18.5 cluster on three CentOS virtual machines used as test hosts. kubeadm, kubelet, and kubectl are all installed via yum, and flannel is used as the network add-on.
Environment Preparation
Unless otherwise noted, all deployment commands are run as the root user.
Hardware
IP | hostname | mem | disk | role |
---|---|---|---|---|
10.1.1.204 | k8s-master | 4GB | 36GB | k8s control-plane node |
10.1.1.151 | k8s-node1 | 4GB | 36GB | k8s worker node 1 |
10.1.1.186 | k8s-node2 | 4GB | 36GB | k8s worker node 2 |
Software
software | version |
---|---|
CentOS | CentOS Linux release 7.6.1810 (Core) |
Kubernetes | 1.18.5 |
Docker | 19.03.12 |
Environment sanity checks
purpose | commands |
---|---|
Ensure all cluster nodes can reach each other | ping -c 3 <ip> |
Ensure MAC addresses are unique | ip link or ifconfig -a |
Ensure hostnames are unique within the cluster | check with hostnamectl status, change with hostnamectl set-hostname <hostname> |
Ensure the system product uuid is unique | dmidecode -s system-uuid or sudo cat /sys/class/dmi/id/product_uuid |
Reference commands for changing a MAC address:
ifconfig eth0 down
ifconfig eth0 hw ether 00:0c:29:84:fd:a4
ifconfig eth0 up
If the product_uuid values are not unique, consider reinstalling CentOS.
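Once the per-host values (MAC addresses or product_uuid) have been collected into one file, the uniqueness checks in the table above can be scripted. A minimal sketch; the hostnames match this cluster, but the UUID values are invented sample data:

```shell
# Sample data standing in for values collected from each node;
# the UUIDs here are invented for illustration.
cat > /tmp/node-uuids.txt <<'EOF'
k8s-master 423C1D56-0001
k8s-node1  423C1D56-0002
k8s-node2  423C1D56-0003
EOF

# Any value printed by uniq -d appears on more than one host.
dups=$(awk '{print $2}' /tmp/node-uuids.txt | sort | uniq -d)
if [ -z "$dups" ]; then
  echo "all values unique"
else
  echo "duplicated values: $dups"
fi
```

The same pattern works for MAC addresses or hostnames: put one `host value` pair per line and let `uniq -d` flag collisions.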
Ensure the required ports are open
k8s-master
Protocol | Direction | Port Range | Purpose |
---|---|---|---|
TCP | Inbound | 6443* | Kube-apiserver |
TCP | Inbound | 2379-2380 | Etcd API |
TCP | Inbound | 10250 | Kubelet API |
TCP | Inbound | 10251 | Kube-scheduler |
TCP | Inbound | 10252 | Kube-controller-manager |
k8s-node*
Node port checks:
Protocol | Direction | Port Range | Purpose |
---|---|---|---|
TCP | Inbound | 10250 | Kubelet API |
TCP | Inbound | 30000-32767 | NodePort Service |
Configure mutual host trust
Configure hosts resolution:
cat >> /etc/hosts <<EOF
10.1.1.204 k8s-master
10.1.1.151 k8s-node1
10.1.1.186 k8s-node2
EOF
On k8s-master
Generate an SSH key and distribute it to each node:
# Generate the SSH key; just press Enter through all the prompts
ssh-keygen -t rsa
# Copy the new key into each node's trusted list; enter each host's password when prompted
ssh-copy-id root@k8s-master
ssh-copy-id root@k8s-node1
ssh-copy-id root@k8s-node2
Disable swap
Swap borrows disk blocks as extra memory only when RAM runs out, and disk I/O is vastly slower than RAM, so disable swap for performance. Run on every node:
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
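The sed command comments out every fstab line containing "swap". A dry run on a scratch copy (the fstab content below is invented sample data) shows the effect before touching the real file:

```shell
# Scratch fstab with one sample swap entry
cat > /tmp/fstab.test <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Same substitution as above: prefix matching lines with '#'
sed -i 's/.*swap.*/#&/' /tmp/fstab.test
cat /tmp/fstab.test
```

Only the swap line gains a leading `#`; the root filesystem line is untouched.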
Disable SELinux
Disable SELinux, otherwise kubelet may fail with Permission denied when mounting directories. It can be set to permissive or disabled; permissive still emits warn messages. Run on every node:
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Set the time zone and synchronize time
timedatectl set-timezone Asia/Shanghai
systemctl enable --now chronyd
Check the sync status:
timedatectl status
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog && systemctl restart crond
Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
Tune kernel parameters
cp /etc/sysctl.conf{,.bak}
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
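Appending with echo adds duplicate lines if the block above is re-run. A sketch of an idempotent alternative that writes the keys once to a sysctl drop-in file; a scratch path under /tmp is used here instead of the real target, and /etc/sysctl.d/99-k8s.conf is an assumed file name:

```shell
conf=/tmp/99-k8s.conf   # on a real node: /etc/sysctl.d/99-k8s.conf

# Overwrite (not append), so re-running cannot duplicate keys
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv6.conf.all.disable_ipv6 = 1
EOF

# No output from uniq -d means no duplicated lines
sort "$conf" | uniq -d
```

On a real node, `sysctl --system` would then load the drop-in file.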
Deploy Docker
Docker must be installed on every node.
Add the Docker yum repository
# Install required dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the aliyun docker-ce yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Rebuild the yum cache
yum makecache fast
Install Docker
# List the available docker versions
yum list docker-ce.x86_64 --showduplicates | sort -r
* updates: mirrors.tuna.tsinghua.edu.cn
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
* extras: mirrors.tuna.tsinghua.edu.cn
docker-ce.x86_64 3:19.03.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.12-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.11-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.10-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
* base: mirrors.tuna.tsinghua.edu.cn
Available Packages
# Install the specified Docker version
yum install -y docker-ce-19.03.12-3.el7
This example installs version 19.03.12; note that the version string passed to yum does not include the epoch prefix (the digit and colon, i.e. 3:).
Ensure the network modules load at boot
lsmod | grep overlay
lsmod | grep br_netfilter
If the commands above print nothing or report that the file does not exist, run:
cat > /etc/modules-load.d/docker.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
Make bridged traffic visible to iptables
Run on every node:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Verify the settings took effect; both commands should return 1:
sysctl -n net.bridge.bridge-nf-call-iptables
sysctl -n net.bridge.bridge-nf-call-ip6tables
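The two checks can be wrapped in a loop that flags any key not set to 1. So that the sketch runs anywhere, the real sysctl is stubbed out with a shell function here; remove the stub on an actual node:

```shell
sysctl() { echo 1; }   # stub that always returns 1; remove this line on a real node

for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
  if [ "$(sysctl -n "$key")" = "1" ]; then
    echo "$key OK"
  else
    echo "$key NOT set to 1"
  fi
done > /tmp/bridge-check.txt
cat /tmp/bridge-check.txt
```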
Configure Docker
mkdir /etc/docker
# Set the cgroup driver to systemd (recommended by upstream k8s), limit container log size, and set the storage driver; the Docker data root at the end may be changed
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://7uuu3esz.mirror.aliyuncs.com"],
  "data-root": "/data/docker"
}
EOF
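A malformed daemon.json stops Docker from starting, so validating the file before restarting is cheap insurance. A sketch using python3 (assumed to be installed), run here against a scratch copy of the config:

```shell
mkdir -p /tmp/docker-test
cat > /tmp/docker-test/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "data-root": "/data/docker"
}
EOF

# json.tool exits non-zero on invalid JSON
python3 -m json.tool /tmp/docker-test/daemon.json > /dev/null && echo "daemon.json OK"
```

On a real node, point the same check at /etc/docker/daemon.json before `systemctl restart docker`.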
# Enable at boot and start immediately
systemctl enable --now docker
Verify Docker works
# Inspect docker info and check it matches the configuration
docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.12
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-957.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.683GiB
 Name: k8s-master
 ID: ELO6:HASF:6EIU:NJP3:SEMF:KJIH:G7IB:ZEYI:DTJU:V6E4:VU4D:3DHF
 Docker Root Dir: /data/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://7uuu3esz.mirror.aliyuncs.com/
 Live Restore Enabled: false
# hello-world test
docker run --rm hello-world
# remove the test image
docker rmi hello-world
Add users to the docker group
Users in the docker group can run docker commands without sudo.
# add a user to the docker group
usermod -aG docker <USERNAME>
# apply the docker group to the current session
newgrp docker
Deploy the Kubernetes Cluster
Unless otherwise noted, run the following steps on every node:
Add the kubernetes repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Rebuild the yum cache; enter y to accept the GPG keys
yum makecache fast
Install kubeadm, kubelet, kubectl
- All nodes need kubeadm and kubelet; kubectl only needs to be installed on the k8s-master node (kubectl is unusable on worker nodes anyway, so it can be skipped there).
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
Configure command auto-completion
# Install the bash completion package
yum install bash-completion -y
# Set up kubectl and kubeadm command completion; takes effect on the next login
kubectl completion bash >/etc/bash_completion.d/kubectl
kubeadm completion bash > /etc/bash_completion.d/kubeadm
Pre-pull the kubernetes images
Due to network restrictions in mainland China, the kubernetes images have to be pulled from mirror sites or from copies pushed to dockerhub by other users.
# List the images a given k8s version requires
kubeadm config images list --kubernetes-version v1.18.5
W0815 22:18:40.474596   19979 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.5
k8s.gcr.io/kube-controller-manager:v1.18.5
k8s.gcr.io/kube-scheduler:v1.18.5
k8s.gcr.io/kube-proxy:v1.18.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
Because the aliyun mirror had not yet been updated to v1.18.5, the images are pulled via dockerhub instead.
In the /root/k8s directory, create a script get-k8s-images.sh with the following content:
#!/bin/bash
# Script For Quick Pull K8S Docker Images
# by iuskye <[email protected]>
KUBE_VERSION=v1.18.5
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.6.7
ETCD_VERSION=3.4.3-0
# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
# retag to k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION
# untag the source names; the images themselves won't be deleted.
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
Make the script executable and run it to pull the images:
chmod +x get-k8s-images.sh
./get-k8s-images.sh
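The script above repeats the pull/tag/rmi pattern once per image. A sketch of a helper that derives the k8s.gcr.io target name from a mirror image name; it is pure string manipulation (bash-specific substitution), and docker is stubbed with echo so the sketch runs anywhere. Remove the stub on a real host:

```shell
docker() { echo "docker $*"; }   # stub; remove this line on a real host

# kubeimage/kube-proxy-amd64:v1.18.5 -> k8s.gcr.io/kube-proxy:v1.18.5
mirror_to_gcr() {
  local name=${1##*/}        # strip registry/namespace prefix
  name=${name/-amd64/}       # drop the -amd64 arch suffix
  echo "k8s.gcr.io/$name"
}

rm -f /tmp/retag-targets.txt
for src in kubeimage/kube-proxy-amd64:v1.18.5 \
           registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2; do
  dst=$(mirror_to_gcr "$src")
  docker pull "$src"
  docker tag "$src" "$dst"
  echo "$dst" >> /tmp/retag-targets.txt
done
cat /tmp/retag-targets.txt
```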
When the pull completes, run docker images to list them:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.18.5 a1daed4e2b60 7 weeks ago 117MB
k8s.gcr.io/kube-controller-manager v1.18.5 8d69eaf196dc 7 weeks ago 162MB
k8s.gcr.io/kube-apiserver v1.18.5 08ca24f16874 7 weeks ago 173MB
k8s.gcr.io/kube-scheduler v1.18.5 39d887c6621d 7 weeks ago 95.3MB
k8s.gcr.io/pause 3.2 80d28bedfe5d 6 months ago 683kB
k8s.gcr.io/coredns 1.6.7 67da37a9a360 6 months ago 43.8MB
k8s.gcr.io/etcd
Initialize k8s-master
Only the k8s-master node needs the steps in this section.
Change the kubelet default cgroup driver
mkdir /var/lib/kubelet
cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
systemctl restart kubelet
Generate a kubeadm init configuration file
[Optional] Only needed when you want to customize the init configuration.
kubeadm config print init-defaults > init.default.yaml
Test that the environment is ready
kubeadm init phase preflight
W0815 22:32:18.647679   21047 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
The Warning above is normal; the component configs cannot be validated because the blocked upstream site is unreachable.
Initialize the master
10.244.0.0/16 is the subnet flannel expects; the value depends on the requirements of your network add-on.
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.5 [--config kubeadm-init.yaml]
The output looks like this:
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.5
W0815 22:34:22.306284   21385 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.1.204]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.1.1.204 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.1.1.204 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0815 22:34:25.496900   21385 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0815 22:34:25.498008   21385 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.501958 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: o3imhx.7evputkjj3fspv7t
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.1.1.204:6443 --token o3imhx.7evputkjj3fspv7t \
--discovery-token-ca-cert-hash sha256:7e8aac39cbd6374646ff2bdd020215e5bc06ef0a91f5b90e0a3482a0b58e622d
Grant kubectl access to the everyday cluster user
su iuskye
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/admin.conf
sudo chown $(id -u):$(id -g) $HOME/.kube/admin.conf
echo "export KUBECONFIG=$HOME/.kube/admin.conf" >> ~/.bashrc
exit
Configure master credentials
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
. /etc/profile
Without this configuration, kubectl reports:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
At this point the master node has been initialized successfully, but the network add-on is not yet installed, so it cannot communicate with the other nodes.
Install a network add-on, using flannel as an example
cd ~/k8s
yum install -y wget
# download the latest flannel manifest
wget http://download.iuskye.com/Linux/Kubernetes/v1.18.5/kube-flannel.yml
kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
Check the k8s-master node status
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 12m v1.18.8
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 20m v1.18.8
If STATUS shows NotReady, you can check the detailed description with kubectl describe node k8s-master; on lower-spec servers it takes longer to reach Ready.
Back up the images for the other nodes
On k8s-master, export the images so they can be copied to the other node machines later; a proper image registry would of course be better.
docker save k8s.gcr.io/kube-proxy:v1.18.5 \
k8s.gcr.io/kube-apiserver:v1.18.5 \
k8s.gcr.io/kube-controller-manager:v1.18.5 \
k8s.gcr.io/kube-scheduler:v1.18.5 \
k8s.gcr.io/pause:3.2 \
k8s.gcr.io/coredns:1.6.7 \
k8s.gcr.io/etcd:3.4.3-0 > k8s-imagesV1.18.5.tar
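docker save writes a tar archive whose manifest.json lists the bundled image tags, so listing the archive contents is a cheap integrity check before copying it to the nodes. A sketch on a tiny stand-in tar; a real k8s-imagesV1.18.5.tar can be checked the same way:

```shell
# Build a tiny stand-in archive containing just a manifest.json
mkdir -p /tmp/save-demo
echo '[{"RepoTags":["k8s.gcr.io/kube-proxy:v1.18.5"]}]' > /tmp/save-demo/manifest.json
tar -cf /tmp/k8s-images-demo.tar -C /tmp/save-demo manifest.json

# tar -tf lists the contents without extracting
tar -tf /tmp/k8s-images-demo.tar | grep -q manifest.json && echo "archive looks intact"
```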
Initialize the k8s-node* nodes and join the cluster
Copy the images to the node machines
Using k8s-node1 as the example; k8s-node2 is the same.
# run these commands on the k8s-node* nodes
mkdir ~/k8s
scp root@k8s-master:/root/k8s/k8s-imagesV1.18.5.tar ~/k8s
cd ~/k8s
docker load < k8s-imagesV1.18.5.tar
Get the command for joining the cluster
On k8s-master, create a new token; the command also prints the matching join command:
kubeadm token create --print-join-command
W0815 22:52:33.703674   27535 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 10.1.1.204:6443 --token xv18dj.4j1929tfam4y6pap --discovery-token-ca-cert-hash sha256:7e8aac39cbd6374646ff2bdd020215e5bc06ef0a91f5b90e0a3482a0b58e622d
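If the join command is captured to a file (as when collecting it for automation), the token and CA hash can be pulled back out with standard text tools. A sketch against the output shown above:

```shell
# Sample capture of the printed join command
cat > /tmp/join.txt <<'EOF'
kubeadm join 10.1.1.204:6443 --token xv18dj.4j1929tfam4y6pap --discovery-token-ca-cert-hash sha256:7e8aac39cbd6374646ff2bdd020215e5bc06ef0a91f5b90e0a3482a0b58e622d
EOF

# '--' stops grep from treating the leading-dash pattern as an option
grep -o -- '--token [^ ]*' /tmp/join.txt | awk '{print $2}' > /tmp/join-token.txt
grep -o -- '--discovery-token-ca-cert-hash [^ ]*' /tmp/join.txt | awk '{print $2}' > /tmp/join-hash.txt
cat /tmp/join-token.txt /tmp/join-hash.txt
```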
Run the join command on the k8s-node* nodes
kubeadm join 10.1.1.204:6443 --token xv18dj.4j1929tfam4y6pap --discovery-token-ca-cert-hash sha256:7e8aac39cbd6374646ff2bdd020215e5bc06ef0a91f5b90e0a3482a0b58e622d
W0815 22:57:07.493498   21725 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check the cluster node status
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 25m v1.18.8
k8s-node1 NotReady <none> 3m22s v1.18.8
k8s-node2 NotReady <none> 2m58s v1.18.8
The new nodes show NotReady at first; don't worry, they become Ready after a few minutes:
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 26m v1.18.8
k8s-node1 Ready <none> 4m51s v1.18.8
k8s-node2 Ready <none> 4m27s v1.18.8
Deploy the Dashboard
wget http://download.iuskye.com/Linux/Kubernetes/v1.18.0/bin_install/dashboard/recommended.yaml
By default the Dashboard is only reachable from inside the cluster; change its Service to NodePort to expose it externally:
vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
# it takes a while before the STATUS below shows Running
kubectl apply -f recommended.yaml
kubectl get pods,svc -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-694557449d-6mwvp 1/1 Running 0 41s
pod/kubernetes-dashboard-9774cc786-rqqfq 1/1 Running 0 41s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.103.62.172 <none> 8000/TCP 41s
service/kubernetes-dashboard NodePort 10.105.217.95 <none> 443:30001/TCP 41s
Access it at https://NodeIP:30001 using Firefox; Chrome refuses to open sites with untrusted SSL certificates.
Create a service account and bind it to the built-in cluster-admin role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
The token is output:
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5MWmNZMTczZlI2V2l2R2NTa2Viank5OVo3Z0d1RF84c0lnLUZXbWJNNVkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdzJjdnQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNDE0OTIwOWQtZDNmNy00NmZkLTg2YWQtYjFmMGYxODM5Mjk0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.UGbo4brzxWfyYI10r0esCUXdCqvcE7dMmjhxhf9qCsfG-8sNr4_6CghG4Cg5qUOmKjtXnG_RFGjDtgQna8D1zxaK8iO9N28kaBxv5dFoubaMV1O1ueLFvnXtSDM9ekf4G88feXRoUHLrCv2HM0XkNZ-_665E8CB1_rVQnGSeVJ7EmJxcEJNYruHmVvsoJ0HfvqUa9X7_K6r7ftkT5hmJSx6EYxUf0zx6siMKo0Dlcn5jLbmNbDwFGbs8_lCDrRxQvV_Z8na3Zk7cN3eTqvuQFNCflmXDsIVtnr8xoKPrySjw_sOX4jxLNWc2dbUUcX3rHrSd9cEtoRLvaO7ab_Q-Jw
Note that the token may be wrapped across lines when pasted; if so, rejoin it into a single line in a text editor.
Use the output token to log in to the Dashboard.
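The describe command prints the whole secret; when only the token is wanted (for example to script the login), awk can reduce it to the single field. A sketch on a sample snippet with an invented placeholder token:

```shell
# Sample 'kubectl describe secrets' output; the token value is a placeholder
cat > /tmp/secret-sample.txt <<'EOF'
Name:         dashboard-admin-token-w2cvt
Namespace:    kube-system
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGci.placeholder.token
EOF

# Print only the second field of the 'token:' line
awk '/^token:/{print $2}' /tmp/secret-sample.txt
```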
Login page:
Cluster Roles:
Namespaces:
Nodes:
Master:
Master Pods: