Installing a Kubernetes cluster with kubeadm (one master, two workers), including the related scripts


Server planning

IP                 Hostname     Role          Operating System
192.168.175.101    binghe101    K8S Master    CentOS 8.0.1905
192.168.175.102    binghe102    K8S Worker    CentOS 8.0.1905
192.168.175.103    binghe103    K8S Worker    CentOS 8.0.1905

Installed software versions

Software          Version    Description
Docker            19.03.8    Provides the container environment
docker-compose    1.25.5     Defines and runs applications composed of multiple containers
K8S               1.18.2     An open-source system for managing containerized applications across multiple hosts in a cloud platform; Kubernetes aims to make deploying containerized applications simple and efficient, and provides mechanisms for application deployment, scheduling, updating and maintenance.

Passwordless login between servers

Run the following commands on every server.

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Copy the id_rsa.pub files on binghe102 and binghe103 to binghe101.

[root@binghe102 ~]# scp /root/.ssh/id_rsa.pub binghe101:/root/.ssh/102
[root@binghe103 ~]# scp /root/.ssh/id_rsa.pub binghe101:/root/.ssh/103

Run the following commands on binghe101.

cat /root/.ssh/102 >> ~/.ssh/authorized_keys
cat /root/.ssh/103 >> ~/.ssh/authorized_keys

Then copy the authorized_keys file to binghe102 and binghe103.

[root@binghe101 ~]# scp /root/.ssh/authorized_keys binghe102:/root/.ssh/authorized_keys
[root@binghe101 ~]# scp /root/.ssh/authorized_keys binghe103:/root/.ssh/authorized_keys

Delete the 102 and 103 files under ~/.ssh on binghe101.

rm ~/.ssh/102
rm ~/.ssh/103
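
As a quick check, passwordless login can be verified from binghe101; each command should print the remote hostname without prompting for a password.

ssh binghe102 hostname
ssh binghe103 hostname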

Deploy nginx load balancing (choose either this or HAProxy + Keepalived)

rpm -vih http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm
# vim /etc/nginx/nginx.conf
……
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
                server 192.168.0.131:6443;
                server 192.168.0.132:6443;
            }
    
    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}
……

# start nginx
systemctl start nginx
systemctl enable nginx
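
A quick way to confirm that the stream proxy is listening on port 6443 (assuming the ss utility is installed):

ss -lntp | grep 6443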

Nginx + Keepalived high availability

### Master node
# yum install keepalived
# vi /etc/keepalived/keepalived.conf
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface ens32
    virtual_router_id 51 # VRRP router ID; each instance must be unique 
    priority 100    # priority; set this to 90 on the backup server 
    advert_int 1    # VRRP heartbeat advertisement interval, default 1 second 
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.0.130/24
    } 
    track_script {
        check_nginx
    } 
}

# cat /etc/keepalived/check_nginx.sh 
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
# systemctl start keepalived
# systemctl enable keepalived

### Backup node
#vim /etc/keepalived/keepalived.conf
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_BACKUP
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state BACKUP 
    interface ens32
    virtual_router_id 51 # VRRP router ID; each instance must be unique 
    priority 90    # priority; set to 90 on the backup server 
    advert_int 1    # VRRP heartbeat advertisement interval, default 1 second 
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.0.130/24
    } 
    track_script {
        check_nginx
    } 
}

# cat /etc/keepalived/check_nginx.sh 
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi

# systemctl start keepalived
# systemctl enable keepalived

### Test that the VIP works properly
curl -k --header "Authorization: Bearer 8762670119726309a80b1fe94eb66e93" https://192.168.0.130:6443/version
{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.2",
  "gitCommit": "52c56ce7a8272c798dbc29846288d7cd9fbae032",
  "gitTreeState": "clean",
  "buildDate": "2020-04-16T11:48:36Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}
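
Failover can also be verified by stopping nginx on the master node: check_nginx.sh then exits non-zero and keepalived releases the VIP, which the backup node should take over. A rough sketch, using the interface name ens32 and the VIP 192.168.0.130 from the configuration above:

# on the master node
systemctl stop nginx
# on the backup node, the VIP should now be bound to ens32
ip addr show ens32 | grep 192.168.0.130
# restore nginx on the master afterwards
systemctl start nginx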

HAProxy + Keepalived high availability

Install and configure the haproxy service

1. Install haproxy
# yum install -y haproxy
2. Configure haproxy
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.ori
cat /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1
defaults
    log global
    timeout connect 5000
    timeout client 10m
    timeout server 10m
listen admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth along:along123
    stats hide-version
    stats admin if TRUE
listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance source
    server 192.168.10.11 192.168.10.11:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.10.12 192.168.10.12:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.10.13 192.168.10.13:6443 check inter 2000 fall 2 rise 2 weight 1

3. Start haproxy
systemctl restart haproxy
# systemctl enable haproxy
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-04-10 16:35:46 CST; 24s ago
 Main PID: 10235 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─10235 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─10236 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─10237 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
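
The stats page defined in the listen admin_stats section can also serve as a quick health check; the port 10080 and the along:along123 credentials come from the configuration above.

curl -u along:along123 http://127.0.0.1:10080/status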

Install the Docker environment

This document sets up the Docker environment based on Docker 19.03.8.

Create an install_docker.sh script on all servers with the following content.

#!/bin/bash
export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
dnf install -y yum*
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
dnf install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8
# Configure a Docker registry mirror
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://bk6kzfqm.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.0.241"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
systemctl enable docker.service
systemctl start docker.service
docker version

Give the install_docker.sh script executable permission on each server and run it.
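
For example, assuming install_docker.sh was saved to the current directory on each server:

chmod +x install_docker.sh
./install_docker.sh
# optional check that the registry mirror was picked up
docker info | grep -A 1 "Registry Mirrors"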

Install the K8S cluster environment

This document sets up the K8S cluster based on K8S 1.18.2.

Install the K8S base environment

Create an install_k8s.sh script on all servers with the following content.

#!/bin/bash
# Set up /etc/hosts
cat >> /etc/hosts << EOF
192.168.175.101 binghe101
192.168.175.102 binghe102
192.168.175.103 binghe103
EOF

# Install nfs-utils
yum install -y nfs-utils
yum install -y wget

# Synchronize the system time
yum install -y ntpdate
ntpdate time.windows.com

# Start nfs-server
systemctl start nfs-server
systemctl enable nfs-server

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
# temporarily
setenforce 0
# permanently
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
# temporarily
swapoff -a
# permanently
sed -i 's/.*swap.*/#&/' /etc/fstab
cat /etc/fstab

# Modify /etc/sysctl.conf
# If the settings already exist, modify them in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf
# These settings may not exist yet; append them
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf
# Run this command to apply the settings
sysctl -p

# Configure the K8S yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove old versions of K8S
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm and kubectl; version 1.18.2 is installed here
yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2

# Change the docker Cgroup Driver to systemd
# # In /usr/lib/systemd/system/docker.service, change the line ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# # to ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
# Without this change, adding worker nodes may fail with the following error
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". 
# Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service

# Configure a docker registry mirror to speed up and stabilize image downloads
# If access to https://hub.docker.com is already fast and stable, this step can be skipped
# curl -sSL https://kuboard.cn/install-script/set_mirror.sh | sh -s ${REGISTRY_MIRROR}

# Restart docker and start kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet

docker version

Give the install_k8s.sh script executable permission on each server and run it.
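
Because passwordless SSH was configured earlier, one way to distribute and run the script on all three servers from binghe101 is a simple loop (a sketch that assumes install_k8s.sh is in the current directory on binghe101):

for host in binghe101 binghe102 binghe103; do
  scp install_k8s.sh ${host}:/root/
  ssh ${host} "chmod +x /root/install_k8s.sh && /root/install_k8s.sh"
done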

Initialize the Master node

The following operations are performed only on binghe101.

1. Initialize the Master node's network environment

Note: the following commands need to be executed manually at the command line.

# Run only on the master node
# The export commands only take effect in the current shell session; if you open a new shell window and want to continue the installation, re-run these export commands
export MASTER_IP=192.168.175.101
# Replace k8s.master with the dnsName you want to use
export APISERVER_NAME=k8s.master
# The subnet used by Kubernetes pods; it is created by Kubernetes after installation and does not need to exist in the physical network beforehand
export POD_SUBNET=172.18.0.1/16
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
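
A quick check that the API server name now resolves to the master IP:

getent hosts k8s.master
# expected output: 192.168.175.101 k8s.master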

2. Initialize the Master node

Create an init_master.sh script on binghe101 with the following content.

#!/bin/bash
# Abort the script if any command fails
set -e

if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1mPlease make sure the environment variables POD_SUBNET and APISERVER_NAME are set \033[0m"
  echo current POD_SUBNET=$POD_SUBNET
  echo current APISERVER_NAME=$APISERVER_NAME
  exit 1
fi


# See https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2 for the complete list of configuration options
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF

# kubeadm init
# Depending on your network speed, this takes about 3 - 10 minutes
kubeadm init --config=kubeadm-config.yaml --upload-certs

# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config

# Install the calico network plugin
# Reference: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo "Installing calico-3.13.1"
rm -f calico-3.13.1.yaml
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
kubectl apply -f calico-3.13.1.yaml

Give the init_master.sh script executable permission and run it.
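
For example, in the same shell session in which MASTER_IP, APISERVER_NAME and POD_SUBNET were exported in step 1:

chmod +x init_master.sh
./init_master.sh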

3. Check the result of initializing the Master node

(1) Make sure all pods are in the Running state

# Run the following command and wait 3-10 minutes until all pods are in the Running state
watch kubectl get pod -n kube-system -o wide

The output looks like the following.

[root@binghe101 ~]# watch kubectl get pod -n kube-system -o wide
Every 2.0s: kubectl get pod -n kube-system -o wide                                                                                                                          binghe101: Sun May 10 11:01:32 2020

NAME                                       READY   STATUS    RESTARTS   AGE    IP                NODE        NOMINATED NODE   READINESS GATES          
calico-kube-controllers-5b8b769fcd-5dtlp   1/1     Running   0          118s   172.18.203.66     binghe101   <none>           <none>          
calico-node-fnv8g                          1/1     Running   0          118s   192.168.175.101   binghe101   <none>           <none>          
coredns-546565776c-27t7h                   1/1     Running   0          2m1s   172.18.203.67     binghe101   <none>           <none>          
coredns-546565776c-hjb8z                   1/1     Running   0          2m1s   172.18.203.65     binghe101   <none>           <none>          
etcd-binghe101                             1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>          
kube-apiserver-binghe101                   1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>          
kube-controller-manager-binghe101          1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>          
kube-proxy-dvgsr                           1/1     Running   0          2m1s   192.168.175.101   binghe101   <none>           <none>          
kube-scheduler-binghe101                   1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>

(2) Check the Master node initialization result

kubectl get nodes -o wide

The output looks like the following.

[root@binghe101 ~]# kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION         CONTAINER-RUNTIME
binghe101   Ready    master   3m28s   v1.18.2   192.168.175.101   <none>        CentOS Linux 8 (Core)   4.18.0-80.el8.x86_64   docker://19.3.8

Initialize the Worker nodes

1. Get the join command

Run the following command on the Master node (binghe101) to get the join command.

kubeadm token create --print-join-command

The output looks like the following.

[root@binghe101 ~]# kubeadm token create --print-join-command
W0510 11:04:34.828126   56132 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 

The output contains the following line.

kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 

This line is the join command.

Note: the token in the join command is valid for 2 hours; within those 2 hours, it can be used to initialize any number of worker nodes.

2. Initialize the Worker nodes

Run this on all worker nodes; here that means on binghe102 and binghe103.

Run the following commands manually on the command line.

# Run only on the worker nodes
# 192.168.175.101 is the internal IP of the master node
export MASTER_IP=192.168.175.101
# Replace k8s.master with the APISERVER_NAME used when initializing the master node
export APISERVER_NAME=k8s.master
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

# Replace this with the join command output by kubeadm token create on the master node
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 

The output looks like the following.

[root@binghe102 ~]# export MASTER_IP=192.168.175.101
[root@binghe102 ~]# export APISERVER_NAME=k8s.master
[root@binghe102 ~]# echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
[root@binghe102 ~]# kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 
W0510 11:08:27.709263   42795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The output shows that the worker node has joined the K8S cluster.

Note: the kubeadm join... command is the join command output by kubeadm token create on the master node.

3. Check the result

Run the following command on the Master node (binghe101) to check the result.

kubectl get nodes -o wide

The output looks like the following.

[root@binghe101 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
binghe101   Ready    master   20m     v1.18.2
binghe102   Ready    <none>   2m46s   v1.18.2
binghe103   Ready    <none>   2m46s   v1.18.2

Note: adding the -o wide option to the kubectl get nodes command prints more information.

Install ingress-nginx on K8S

Note: run these steps on the Master node (binghe101).

1. Create the ingress-nginx namespace

Create an ingress-nginx-namespace.yaml file with the following content.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    name: ingress-nginx

Run the following command to create the ingress-nginx namespace.

kubectl apply -f ingress-nginx-namespace.yaml

2. Install the ingress controller

Create an ingress-nginx-mandatory.yaml file with the following content.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

---

Run the following command to install the ingress controller.

kubectl apply -f ingress-nginx-mandatory.yaml

3. Install the K8S Service: ingress-nginx

This Service is mainly used to expose the nginx-ingress-controller pod.

Create a service-nodeport.yaml file with the following content.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

Run the following command to install it.

kubectl apply -f service-nodeport.yaml

4. Access the K8S Service: ingress-nginx

Check the deployments in the ingress-nginx namespace, as shown below.

[root@binghe101 k8s]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
default-http-backend-796ddcd9b-vfmgn        1/1     Running   1          10h
nginx-ingress-controller-58985cc996-87754   1/1     Running   2          10h

Run the following command to check the port mappings of ingress-nginx.

kubectl get svc -n ingress-nginx 

The output looks like the following.

[root@binghe101 k8s]# kubectl get svc -n ingress-nginx 
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.96.247.2   <none>        80/TCP                       7m3s
ingress-nginx          NodePort    10.96.40.6    <none>        80:30080/TCP,443:30443/TCP   4m35s

So ingress-nginx can be accessed through the Master node's (binghe101) IP address on port 30080, as shown below.

[root@binghe101 k8s]# curl 192.168.175.101:30080
default backend - 404

ingress-nginx can also be accessed by opening http://192.168.175.101:30080 in a browser.
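
To route real traffic through the controller, an Ingress resource is required. The following is only a hypothetical sketch: it assumes a Service named my-service listening on port 80 already exists in the default namespace and uses demo.example.com as a placeholder host (the controller version 0.20.0 installed above still uses the extensions/v1beta1 Ingress API).

cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service
              servicePort: 80
EOF
# test through the NodePort, overriding the Host header
curl -H "Host: demo.example.com" http://192.168.175.101:30080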

Problems caused by restarting the K8S cluster

1. Worker nodes fail to start

If the Master node's IP address changes, the worker nodes cannot start. In that case the K8S cluster has to be reinstalled, and every node should be given a fixed internal IP address.

2. Pods crash or cannot be accessed

After restarting the servers, check the status of the pods with the following command.

kubectl get pods --all-namespaces

If many pods are not in the Running state, delete the abnormal pods with the following command.

kubectl delete pod <pod-name> -n <pod-namespace>

Note: if a pod was created by a controller such as a Deployment or StatefulSet, K8S will create a new pod to replace it, and the recreated pod will usually work normally.
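
A rough one-liner for cleaning up every pod that is not Running or Completed; it assumes all such pods are managed by controllers and can safely be recreated.

kubectl get pods --all-namespaces --no-headers \
  | awk '$4 != "Running" && $4 != "Completed" {print $2, "-n", $1}' \
  | xargs -r -L1 kubectl delete pod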

Rejoining a kubernetes node to the cluster

Note: the following operations are performed on the node.

1. Stop kubelet

systemctl stop kubelet

2. Delete the node's old configuration files

rm -rf /etc/kubernetes/*

3. Rejoin the cluster

On the master node, generate a fresh join command with kubeadm token create --print-join-command, then run its output on the node, for example:

kubeadm join k8s.master:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>