Installing k8s on three CentOS 7 machines
1. Machine configuration
| Hardware | Requirement |
| --- | --- |
| CPU | 2 cores+ |
| Memory | 2 GB+ |
| Disk | 10 GB+ |
| OS | CentOS Linux release 7.9.2009 (Core) |
k8s requires at least 2 CPU cores and 2 GB of memory; with less, the installer reports an error and refuses to proceed.
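A quick way to confirm each machine meets these minimums before starting (an optional sanity check, not part of the original steps):

nproc        # CPU cores, expect >= 2
free -m      # total memory in MB, expect >= 2000
df -h /      # root filesystem size, expect >= 10G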
2. Software installation (run on every machine)
2.1 Edit the hosts file
vi /etc/hosts
192.168.56.101 master01
192.168.56.102 node1
192.168.56.103 node2
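After saving, an optional quick check that each name resolves:

ping -c 1 node1
ping -c 1 node2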
Set the hostname
hostname
# localhost.localdomain            // current hostname
hostnamectl set-hostname master01  // node1 / node2 on the other two machines
hostname
# master01                         // new hostname (node1 / node2 on the others)
The new hostname is what shows up in kubectl get node.
Output after a successful installation:
kubectl get node
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane,master 112m v1.20.1
node1 Ready <none> 79m v1.20.1
node2 Ready <none> 56m v1.20.1
2.2 Disable swap and comment out the swap partition
[root@master01 ~]# swapoff -a
[root@master01 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Mar 31 22:44:34 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root / xfs defaults 0 0
UUID=5fecb240-379b-4331-ba04-f41338e81a6e /boot ext4 defaults 1 2
/dev/mapper/cl-home /home xfs defaults 0 0
#/dev/mapper/cl-swap swap swap defaults 0 0
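The fstab edit can also be scripted instead of done by hand (equivalent to commenting out the swap line manually):

swapoff -a
sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab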
2.3 Configure kernel parameters so bridged IPv4 traffic is passed to the iptables chains
[root@master01 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master01 ~]# sysctl --system
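If sysctl --system does not pick these keys up ("No such file or directory"), the br_netfilter module is not loaded yet; load it and make that persistent (a common CentOS 7 gotcha not covered by the original steps):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf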
2.4 Base package installation
- Add the Aliyun yum repo
[root@master01 ~]# rm -rfv /etc/yum.repos.d/*
[root@master01 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
- Install common packages
[root@master01 ~]# yum install vim bash-completion net-tools gcc -y
- Install docker-ce from the Aliyun repo
[root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master01 ~]# yum -y install docker-ce
- Add the Aliyun Docker registry mirror
[root@master01 ~]# mkdir -p /etc/docker
[root@master01 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
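The steps above never enable Docker on boot, which is worth adding; docker info then confirms the mirror took effect:

systemctl enable docker
docker info | grep -A1 'Registry Mirrors'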
2.5 Install k8s
Install kubectl, kubelet, and kubeadm (add the Aliyun Kubernetes repo)
[root@master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install:
[root@master01 ~]# yum install -y kubectl kubelet kubeadm
[root@master01 ~]# systemctl enable kubelet
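As an alternative to the unpinned install above: since the init step below passes --kubernetes-version=1.20.0, pinning the packages to a matching version avoids skew between kubelet and the control plane (version numbers here assume that release):

yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0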
3 Master machine setup
The pod network CIDR is 10.122.0.0/16, and the API server address is the master's own IP.
This step is critical: kubeadm pulls its images from k8s.gcr.io by default, which is unreachable from mainland China, so --image-repository must point at the Aliyun mirror.
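Optionally, the images can be pre-pulled so init itself runs faster (same repository and version as the init command below):

kubeadm config images pull \
    --kubernetes-version=1.20.0 \
    --image-repository registry.aliyuncs.com/google_containers

Then run the init itself: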
kubeadm init --kubernetes-version=1.20.0 \
--apiserver-advertise-address=192.168.56.101 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16
After the cluster initializes successfully, it prints:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.101:6443 --token wl0fs4.mjpadkvixjzae8j4 \
--discovery-token-ca-cert-hash sha256:9c13914b176c5622098f47c8392a58657f7cd0a6a15b5814f30c9baaf7536b79
Run:
[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master01 ~]# source <(kubectl completion bash)
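The completion line only lasts for the current shell; appending it to ~/.bashrc makes it permanent (optional):

echo 'source <(kubectl completion bash)' >> ~/.bashrc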
Check the nodes and pods.
master01 shows NotReady at first; it takes a few minutes to pull images before the status turns Ready.
[root@master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master01 NotReady master 2m29s v1.18.0
[root@master01 ~]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7ff77c879f-fsj9l 0/1 Pending 0 2m12s
kube-system coredns-7ff77c879f-q5ll2 0/1 Pending 0 2m12s
kube-system etcd-master01.paas.com 1/1 Running 0 2m22s
kube-system kube-apiserver-master01.paas.com 1/1 Running 0 2m22s
kube-system kube-controller-manager-master01.paas.com 1/1 Running 0 2m22s
kube-system kube-proxy-th472 1/1 Running 0 2m12s
kube-system kube-scheduler-master01.paas.com 1/1 Running 0 2m22s
- Install the calico network
[root@master01 ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Problem:
- kubectl get cs
[root@master01 ~]# kubectl get cs
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
Fix:
Delete the line containing --port=0 from both of the following files:
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
Then restart the kubelet service: systemctl restart kubelet
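The same edit can be scripted with sed on both manifests (kubelet re-reads static pod manifests automatically, but restarting it as the original does is harmless):

sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
systemctl restart kubelet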
4 Node machine setup
- Join the k8s cluster
kubeadm join 192.168.56.101:6443 --token wl0fs4.mjpadkvixjzae8j4 \
    --discovery-token-ca-cert-hash sha256:9c13914b176c5622098f47c8392a58657f7cd0a6a15b5814f30c9baaf7536b79
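The token printed by kubeadm init expires after 24 hours; if a node joins later than that, generate a fresh join command on the master:

kubeadm token create --print-join-command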
- Make the kubectl client usable on the nodes
Copy admin.conf to the nodes (run on the master):
[root@master01 ~]# scp /etc/kubernetes/admin.conf node1:/etc/kubernetes/admin.conf
[root@master01 ~]# scp /etc/kubernetes/admin.conf node2:/etc/kubernetes/admin.conf
Run on each node:
[root@node1 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@node1 ~]# source ~/.bash_profile
[root@node1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane,master 121m v1.20.1
node1 Ready <none> 88m v1.20.1
node2 Ready <none> 102s v1.20.1
5 Install kubernetes-dashboard
The official dashboard manifest does not expose the service via NodePort, so download the yaml locally and add a nodePort to the Service:
[root@master01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
[root@master01 ~]# vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
[root@master01 ~]# kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
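Once the pods are running, the dashboard is reachable on any node at https://<node-ip>:30000; the certificate is self-signed, so the browser will warn. A quick reachability check, using the master IP from this guide:

curl -k https://192.168.56.101:30000/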
- Get a login token
kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep namespace) | grep token
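The namespace-controller token found above has limited permissions. A common alternative (not part of the original guide) is a dedicated service account bound to cluster-admin, whose token can then be used to log in; the account name dashboard-admin here is arbitrary:

kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin \
    --serviceaccount=kubernetes-dashboard:dashboard-admin
kubectl -n kubernetes-dashboard describe secret \
    $(kubectl -n kubernetes-dashboard get secret -o name | grep dashboard-admin) | grep token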
6. Common problems
- Problem: k8s network: stat /var/lib/calico/nodename: no such file or directory
(calico-kube-controllers stuck at 0/1 CrashLoopBackOff)
Diagnosed with: kubectl describe pod calico-kube-controllers-744cfdf676-6jbgn -n kube-system
Fix:
Files left over from a previous k8s install cause this; remove them and recreate calico:
wget https://docs.projectcalico.org/manifests/calico.yaml
rm -rf /var/lib/calico
rm -rf /etc/cni/net.d/calico*
# recreate calico
kubectl delete -f calico.yaml
kubectl create -f calico.yaml
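After recreating, the calico pods should leave CrashLoopBackOff within a few minutes; watch them with:

kubectl get pods -n kube-system | grep calico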
- Problem: calico/node is not ready: BIRD is not ready: BGP not established with 10.117.150.23
(calico-node stuck at 0/1 CrashLoopBackOff)
Fix:
Add an IP_AUTODETECTION_METHOD entry to the calico-node env section in calico.yaml so calico picks the right NIC (adjust the interface prefix to match your actual NIC names); shown here in context:
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens.*"
              # or value: "interface=ens160"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
Reference
https://www.kubernetes.org.cn/7189.html