
Setting Up a Kubernetes Cluster Environment

1.1 Version Alignment

Docker       18.09.0
---
kubeadm-1.14.0-0 
kubelet-1.14.0-0 
kubectl-1.14.0-0
---
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
---
flannel: v0.11.0-amd64

1.2 k8s Installation Steps

1.2.1 Update the System and Install Dependencies
yum -y update
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
1.2.2 Install Docker

Install Docker, version 18.09.0.

01 Install the required dependencies
	sudo yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
    
    
02 Configure the Docker repository
	sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
	
[Also configure an image registry mirror to speed up pulls]
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["http://2595fda0.m.daocloud.io"]
}
EOF
sudo systemctl daemon-reload


03 Install Docker

  yum install -y docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io


04 Start Docker
	sudo systemctl start docker && sudo systemctl enable docker
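
A quick optional check that Docker 18.09.0 is installed and the daemon is running:

docker version
systemctl status docker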
1.2.3 Edit the hosts File

(1) master

# Set the master's hostname and edit the hosts file
sudo hostnamectl set-hostname master

vi /etc/hosts
192.168.1.157 master
192.168.1.158 node1
192.168.1.159 node2

(2) Run on each of the two worker nodes

# Set the hostname on node1/node2 respectively and edit the hosts file
sudo hostnamectl set-hostname node1
sudo hostnamectl set-hostname node2

vi /etc/hosts
192.168.1.157 master
192.168.1.158 node1
192.168.1.159 node2

(3) Test connectivity between the nodes with ping
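
For example, from the master (hostnames as defined in /etc/hosts above):

ping -c 3 node1
ping -c 3 node2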

1.2.4 Basic System Prerequisites
# (1) Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# (2) Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# (3) Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab

# (4) Configure iptables ACCEPT rules
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

# (5) Set kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
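
A quick sanity check: these two keys only exist once the br_netfilter kernel module is loaded, so load it first if the lookup fails:

lsmod | grep br_netfilter || modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables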
1.2.5 Install kubeadm, kubelet and kubectl

(1) Configure the yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

(2) Install kubeadm, kubelet and kubectl

yum install -y kubeadm-1.14.0-0 kubelet-1.14.0-0 kubectl-1.14.0-0
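
Optionally confirm the installed versions before continuing:

kubeadm version
kubelet --version
kubectl version --client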

(3) Make Docker and the kubelet use the same cgroup driver

# docker: add "exec-opts" to /etc/docker/daemon.json (keep the registry mirror configured earlier)
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["http://2595fda0.m.daocloud.io"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

systemctl restart docker
    
# kubelet: if this reports that the file or directory does not exist, that is fine; just continue
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	
systemctl enable kubelet && systemctl start kubelet
1.2.6 Pull the Images from a Domestic Mirror
  • List the images kubeadm needs
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
  • Create a kubeadm.sh script that pulls each image, re-tags it as k8s.gcr.io, and removes the original tag
#!/bin/bash

set -e

KUBE_VERSION=v1.14.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName                      # pull from the Aliyun mirror
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName  # re-tag as k8s.gcr.io
  docker rmi $ALIYUN_URL/$imageName                       # drop the mirror tag
done
  • Run the script and check the images
# Run the script
sh ./kubeadm.sh

# Check the images
docker images
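
All seven k8s.gcr.io images listed above should now be present; a quick filter:

docker images | grep k8s.gcr.io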
1.2.7 Initialize the Master with kubeadm init

(1) Initialize the master node

Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Note: this step is performed on the master node only.

# The required images are already available locally
kubeadm init --kubernetes-version=1.14.0 --apiserver-advertise-address=192.168.1.157 --pod-network-cidr=10.244.0.0/16
[To re-initialize the cluster state, run kubeadm reset and then repeat the command above]
[init] Using Kubernetes version: v1.14.0


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.157:6443 --token 40kelq.09xe7ldc86xp6sqm \
    --discovery-token-ca-cert-hash sha256:b958ca9b91fba40dfd1246132d741c9d8a8ca8f63f4826457c9bbfde413733d9 
    
    # Generate a join command with a token that never expires
    kubeadm token create --ttl 0 --print-join-command

(2) Remember to save the kubeadm join command printed at the end.
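
A sketch of the join step (the token and hash must be the ones printed by your own kubeadm init):

# On node1 and node2, as root
kubeadm join 192.168.1.157:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# Back on the master, the workers should appear (NotReady until the network plugin is installed)
kubectl get nodes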

(3) As prompted in the init output, configure kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now run kubectl cluster-info to check that the cluster is up.

(4) Check the pods to verify

Wait a moment, and you will see that components such as etcd, controller-manager and scheduler have all come up as pods.

Note: coredns is not running yet; a network plugin needs to be installed.

kubectl get pods -n kube-system

(5) Health check

curl -k https://localhost:6443/healthz
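A healthy API server simply returns ok.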

1.2.8 Install the flannel Network Plugin

After the steps above, coredns is still not running; a network plugin is required. Pull the flannel image from a domestic mirror and re-tag it:

docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64

Download kube-flannel.yml from:

https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml

Note: the Network value in the file must match the pod-network-cidr=10.244.0.0/16 set in kubeadm init.

net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
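
With the CIDR confirmed, apply the manifest (assuming kube-flannel.yml was saved to the current directory):

kubectl apply -f kube-flannel.yml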

Then check again:

kubectl get pods -n kube-system

If coredns still has not started, it is because the master does not schedule pods by default; removing the taint fixes this:

kubectl taint nodes --all node-role.kubernetes.io/master-
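
After the taint is removed, the coredns pods should be scheduled and reach Running; verify with:

kubectl get pods -n kube-system
kubectl get nodes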
