
Managing a k8s Cluster with Rancher 2.6

I. About Rancher

1. Rancher overview

  Rancher is an open-source, enterprise-grade multi-cluster Kubernetes management platform. It centralizes the deployment and management of Kubernetes clusters across hybrid clouds and on-premises data centers, helping keep clusters secure and accelerating enterprise digital transformation.

  Official Rancher documentation: https://docs.rancher.cn/

2. How Rancher relates to k8s

  Both Rancher and k8s schedule and orchestrate containers, but Rancher does more than manage application containers: crucially, it can also manage k8s clusters themselves. Rancher 2.x is built on the k8s scheduling engine; through Rancher's abstractions, users can deploy containers to a k8s cluster without being familiar with k8s concepts.

  To make this possible, Rancher ships a complete set of components for managing k8s, including the Rancher API Server, Cluster Controller, Cluster Agent, Node Agent, and so on. These components work together so that Rancher controls each k8s cluster and consolidates multi-cluster management and usage into a single Rancher platform. Rancher also extends some k8s features and exposes them in a user-friendly way.

  In short, Rancher extends k8s functionality and provides convenient tools for interacting with k8s clusters, such as running commands, managing multiple k8s clusters, and viewing the status of cluster nodes.

II. Installing Rancher

1. Lab environment setup

1) Configure the hosts file

  On each of the nodes rancher-admin, k8s-master1, k8s-node1, and k8s-node2, configure the hosts file with the following content:

cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.130  rancher-admin
10.0.0.131  k8s-master1
10.0.0.132  k8s-node1
10.0.0.133  k8s-node2
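
  The entries can be appended on every node with a single heredoc; a minimal sketch (run once per host, skipping hosts where the entries already exist):

cat >> /etc/hosts <<'EOF'
10.0.0.130  rancher-admin
10.0.0.131  k8s-master1
10.0.0.132  k8s-node1
10.0.0.133  k8s-node2
EOF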

2) Set up SSH trust from the rancher host to the k8s hosts

  Generate an SSH key pair; press Enter at every prompt and do not set a passphrase.

[root@rancher-admin ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)?

  Install the local SSH public key into the corresponding account on each remote host:

[root@rancher-admin ~]# ssh-copy-id rancher-admin
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'rancher-admin (10.0.0.130)' can't be established.
ECDSA key fingerprint is SHA256:J9UnR8HG9Iws8xvmhv4HMjfjJUgOGgEV/3yQ/kFT87c.
ECDSA key fingerprint is MD5:af:38:29:b9:6b:1c:eb:03:bd:93:ad:0d:5a:68:4d:06.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
                (if you think this is a mistake, you may want to use -f option)

[root@rancher-admin ~]# ssh-copy-id k8s-master1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-master1 (10.0.0.131)' can't be established.
ECDSA key fingerprint is SHA256:O2leSOvudbcqIRBokjf4cUtbvjzdf/Yl49VkIQGfLxE.
ECDSA key fingerprint is MD5:de:41:d0:68:53:e3:08:09:b0:7a:55:2e:b6:1d:af:d3.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-master1'"
and check to make sure that only the key(s) you wanted were added.

[root@rancher-admin ~]# ssh-copy-id k8s-node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node1 (10.0.0.132)' can't be established.
ECDSA key fingerprint is SHA256:O2leSOvudbcqIRBokjf4cUtbvjzdf/Yl49VkIQGfLxE.
ECDSA key fingerprint is MD5:de:41:d0:68:53:e3:08:09:b0:7a:55:2e:b6:1d:af:d3.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node1'"
and check to make sure that only the key(s) you wanted were added.

[root@rancher-admin ~]# ssh-copy-id k8s-node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node2 (10.0.0.133)' can't be established.
ECDSA key fingerprint is SHA256:O2leSOvudbcqIRBokjf4cUtbvjzdf/Yl49VkIQGfLxE.
ECDSA key fingerprint is MD5:de:41:d0:68:53:e3:08:09:b0:7a:55:2e:b6:1d:af:d3.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node2'"
and check to make sure that only the key(s) you wanted were added.

[root@rancher-admin ~]#
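
  To confirm that passwordless SSH now works to every host, a quick loop like the following can be run from rancher-admin (BatchMode makes ssh fail instead of prompting if a key is missing):

for h in rancher-admin k8s-master1 k8s-node1 k8s-node2; do
    ssh -o BatchMode=yes "$h" hostname
done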

3) Verify that firewalld and SELinux are disabled

[root@rancher-admin ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@rancher-admin ~]# getenforce
Disabled
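
  If they are not already off, they can be disabled with something like the following sketch (the change to /etc/selinux/config only takes full effect after a reboot):

systemctl disable --now firewalld
setenforce 0 2>/dev/null || true   # switch SELinux to permissive for the current boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config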

4) Verify that swap is disabled

[root@rancher-admin ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3931         286        2832          11         813        3415
Swap:             0           0           0
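
  If swap is still enabled, it can be turned off immediately and commented out of /etc/fstab so it stays off after a reboot; a minimal sketch:

swapoff -a                                                       # disable swap for the current boot
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab   # comment out swap entries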

5) Enable forwarding

  The br_netfilter module passes bridged traffic to the iptables chains; the corresponding kernel parameters must be set to enable forwarding.

[root@rancher-admin ~]# modprobe br_netfilter
[root@rancher-admin ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@rancher-admin ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@rancher-admin ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
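
  Note that appending modprobe to /etc/profile only loads the module when someone logs in interactively. An alternative (not what the steps above use) is systemd's modules-load.d, which loads the module at every boot:

cat > /etc/modules-load.d/br_netfilter.conf <<'EOF'
br_netfilter
EOF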

6) Install docker-ce

[root@rancher-admin ~]# yum install docker-ce docker-ce-cli containerd.io -y
[root@rancher-admin ~]# systemctl start docker && systemctl enable docker.service
# Configure registry mirrors
[root@rancher-admin ~]# cat /etc/docker/daemon.json
{
        "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn","https://reg-mirror.qiniu.com/","https://hub-mirror.c.163.com/","https://registry.docker-cn.com","https://dockerhub.azk8s.cn","http://qtid6917.mirror.aliyuncs.com"],
        "exec-opts": ["native.cgroupdriver=systemd"]
}
# Reload the configuration
[root@rancher-admin ~]# systemctl daemon-reload
[root@rancher-admin ~]# systemctl restart docker
[root@rancher-admin ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-12-11 12:44:27 CST; 8s ago
     Docs: https://docs.docker.com
 Main PID: 4708 (dockerd)
    Tasks: 8
   Memory: 25.7M
   CGroup: /system.slice/docker.service
           └─4708 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.124138353+08:00" level=info msg="ccResolverWrapper: sending update to cc: {[{...dule=grpc
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.124150996+08:00" level=info msg="ClientConn switching balancer to \"pick_firs...dule=grpc
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.134861031+08:00" level=info msg="[graphdriver] using prior storage driver: overlay2"
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.137283344+08:00" level=info msg="Loading containers: start."
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.312680540+08:00" level=info msg="Default bridge (docker0) is assigned with an... address"
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.381480257+08:00" level=info msg="Loading containers: done."
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.396206717+08:00" level=info msg="Docker daemon" commit=3056208 graphdriver(s)...=20.10.21
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.396311425+08:00" level=info msg="Daemon has completed initialization"
Dec 11 12:44:27 rancher-admin systemd[1]: Started Docker Application Container Engine.
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.425944033+08:00" level=info msg="API listen on /var/run/docker.sock"
Hint: Some lines were ellipsized, use -l to show in full.

2. Install Rancher

  Rancher 2.6.4 supports importing existing k8s 1.23+ clusters, so we install Rancher 2.6.4.

  Pull the rancher-agent image on each k8s node ahead of time:

[root@k8s-master1 ~]# docker pull rancher/rancher-agent:v2.6.4
[root@k8s-node1 ~]# docker pull rancher/rancher-agent:v2.6.4
[root@k8s-node2 ~]# docker pull rancher/rancher-agent:v2.6.4 

1) Start the rancher container

[root@rancher-admin rancher]# docker pull rancher/rancher:v2.6.4
[root@rancher-admin rancher]# docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.4
0a3209f670cc5c9412d5c34dd20275686c2526865ddfe20b60d65863b346d0d2

Note: unless-stopped always restarts the container when it exits, but does not restart containers that were already stopped when the Docker daemon started.
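
  Note also that with the command above all Rancher state lives inside the container and is lost if the container is removed. If persistence matters, a bind mount for /var/lib/rancher can be added; a sketch, where /opt/rancher is an arbitrary host path chosen here for illustration:

docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:v2.6.4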

2) Verify that rancher started

[root@rancher-admin rancher]# docker ps | grep rancher
0a3209f670cc   rancher/rancher:v2.6.4   "entrypoint.sh"   About a minute ago   Up 46 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   affectionate_rosalind

3) Log in to the Rancher platform

  In a browser, visit http://10.0.0.130

  Click Advanced; the following screen appears.

  Click "Proceed to 10.0.0.130 (unsafe)"; the following screen appears:

(1) Get the bootstrap password:

  Look up the rancher container's ID:

[root@rancher-admin rancher]# docker ps | grep rancher
0a3209f670cc   rancher/rancher:v2.6.4   "entrypoint.sh"   6 minutes ago   Up 43 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   affectionate_rosalind

The output above shows that the container ID is 0a3209f670cc.

  Run the following command to retrieve the password:

[root@rancher-admin rancher]#  docker logs 0a3209f670cc 2>&1 | grep "Bootstrap Password:"
2022/12/11 05:11:56 [INFO] Bootstrap Password: mgrb9rgbl2gxvgjmz5xdwct899b28swnr4ssfwnmhqhwsqf9fhnwdx

The output above shows that the password is mgrb9rgbl2gxvgjmz5xdwct899b28swnr4ssfwnmhqhwsqf9fhnwdx.
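
  As a convenience, the ID lookup can be skipped by filtering on the image; a one-liner sketch:

docker logs $(docker ps -q --filter ancestor=rancher/rancher:v2.6.4) 2>&1 | grep 'Bootstrap Password:'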

  Enter the retrieved password on the browser page.

  Click "Login with Local User"; on the screen that appears, choose to set a password.

(2) Set a new password
(3) Log in normally

  After clicking Continue, the following is displayed.

(4) Set the language

III. Managing an Existing k8s Cluster with Rancher

1. Import an existing k8s cluster

  Choose "Import Existing"; the following screen appears.

  Choose "Generic"; the following screen appears.

  Enter the cluster name k8s-rancher and click Create.

  The following screen appears:

  Copy the command highlighted above and run it on the k8s control-plane node, as follows:

[root@k8s-master1 ~]# curl --insecure -sfL https://10.0.0.130/v3/import/s7l7wzbkj5pnwh7wl7lrjt54l2x659mfhc5qlhmntbjflqx4rdbqsm_c-m-86g26jzn.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-1692b54 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.apps/cattle-cluster-agent created
service/cattle-cluster-agent created

  Verify that the rancher-agent deployed successfully:

[root@k8s-master1 ~]# kubectl get pods -n cattle-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
cattle-cluster-agent-867ff9c57f-ndspc   1/1     Running   0          18s   10.244.159.188   k8s-master1   <none>           <none>
[root@k8s-master1 ~]# kubectl get pods -n cattle-system -o wide
NAME                                    READY   STATUS        RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
cattle-cluster-agent-5967bb5986-rhzz8   1/1     Running       0          39s   10.244.36.88     k8s-node1     <none>           <none>
cattle-cluster-agent-867ff9c57f-ndspc   1/1     Terminating   0          61s   10.244.159.188   k8s-master1   <none>           <none>
[root@k8s-master1 ~]# kubectl get pods -n cattle-system -o wide
NAME                                    READY   STATUS              RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
cattle-cluster-agent-5967bb5986-dbmlj   0/1     ContainerCreating   0          15s   <none>         k8s-node2   <none>           <none>
cattle-cluster-agent-5967bb5986-rhzz8   1/1     Running             0          55s   10.244.36.88   k8s-node1   <none>           <none>
[root@k8s-master1 ~]# kubectl get pods -n cattle-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
cattle-cluster-agent-5967bb5986-dbmlj   1/1     Running   0          39s   10.244.169.154   k8s-node2   <none>           <none>
cattle-cluster-agent-5967bb5986-rhzz8   1/1     Running   0          79s   10.244.36.88     k8s-node1   <none>           <none>

  Once the cattle-cluster-agent pods are Running, the rancher-agent has been deployed successfully.
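
  If an agent pod does not reach Running, the agent logs and pod events are the usual first places to look; for example (assuming the app=cattle-cluster-agent label that the import manifest applies):

kubectl -n cattle-system logs deploy/cattle-cluster-agent --tail=20
kubectl -n cattle-system describe pod -l app=cattle-cluster-agent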

  Check the result in the rancher UI:

  The page https://10.0.0.130/dashboard/home now shows the following:

  This shows that the k8s cluster has been imported into rancher; the cluster's k8s version is 1.20.6.

2. Deploy a tomcat service from the Rancher dashboard

  Click the k8s-rancher cluster.

  The following screen appears:

1) Create a namespace

2) Create a deployment

  Select the namespace tomcat-test; enter the deployment name tomcat-test, replica count 2, container name tomcat-test, image tomcat:8.5-jre8-alpine, and pull policy IfNotPresent.

  Add the label app=tomcat, and apply the app=tomcat label to the pods as well.

  After completing the settings, click Create:

  Check whether it was created successfully.
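
  For reference, the dashboard form above corresponds roughly to applying a manifest like the following (a sketch reconstructed from the values entered, not exported from Rancher):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-test
  namespace: tomcat-test
  labels:
    app: tomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat-test
        image: tomcat:8.5-jre8-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
EOF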

3) Create a service

  Expose the tomcat pods in the k8s cluster.

  Select Node Port.

  Enter the service name tomcat-svc, port name tomcat-port, listening port 8080, target port 8080, and node port 30080.

  Add the selector app=tomcat and click Create.

  Check whether creation succeeded:
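
  The equivalent service manifest, again a sketch based on the form values above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
  namespace: tomcat-test
spec:
  type: NodePort
  selector:
    app: tomcat
  ports:
  - name: tomcat-port
    port: 8080
    targetPort: 8080
    nodePort: 30080
EOF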

  The internal tomcat can now be reached at any k8s node's IP on port 30080.
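
  For example, a quick check with curl (10.0.0.131 stands in here for any node IP):

curl -I http://10.0.0.131:30080/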

4) Create an Ingress resource

(1) Install the Ingress controller (layer-7 proxy)

  Download the manifest from https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/baremetal/deploy.yaml and make a few changes; the modified file is as follows:

cat deploy.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      hostNetwork: true
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: ingress-nginx
              topologyKey: kubernetes.io/hostname
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.1.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

  Run the following commands on the k8s-master1 node to install the Ingress controller:

[root@k8s-master1 ~]# cd nginx-ingress/
[root@k8s-master1 nginx-ingress]# ll
total 20
-rw-r--r-- 1 root root 19435 Sep 17 16:18 deploy.yaml
[root@k8s-master1 nginx-ingress]# kubectl apply -f deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
[root@k8s-master1 nginx-ingress]# kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-gxx5m        0/1     Completed   0          88s
ingress-nginx-admission-patch-5tfmc         0/1     Completed   1          88s
ingress-nginx-controller-6c8ffbbfcf-rnbtd   1/1     Running     0          89s
ingress-nginx-controller-6c8ffbbfcf-zknjx   1/1     Running     0          89s

(2) Create the ingress rule

  Enter the ingress name tomcat-test, request host domain tomcat-test.example.com, path /, target service tomcat-svc, and port 8080.

  Add the annotation kubernetes.io/ingress.class: nginx.
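
  As with the deployment and service, the form corresponds roughly to a manifest like this (a sketch based on the values above; kubernetes.io/ingress.class is the legacy annotation matching the form, and spec.ingressClassName: nginx would be the newer equivalent):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tomcat-test
  namespace: tomcat-test
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: tomcat-test.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-svc
            port:
              number: 8080
EOF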

  Check whether creation succeeded.

(3) Configure the hosts file

  Add a local hosts entry: append the line 10.0.0.131  tomcat-test.example.com to C:\Windows\System32\drivers\etc\hosts.

(4) Access from a browser

  In a browser, enter http://tomcat-test.example.com/ . Because the controller in this deploy.yaml runs with hostNetwork: true, it listens directly on ports 80/443 of the nodes its pods are scheduled on, so no NodePort suffix is needed (the hosts entry must point at one of those nodes). The access result is as follows:
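
  From a Linux host, the local hosts entry can be bypassed by letting curl pin the name to an address; a sketch (replace 10.0.0.131 with the IP of a node actually running an ingress-nginx-controller pod):

curl --resolve tomcat-test.example.com:80:10.0.0.131 -I http://tomcat-test.example.com/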