Kubernetes (K8S): Helm Deployment, Usage, Common Operations, and Examples

 

Host configuration plan

Server (hostname)    OS version    Spec         Internal IP     External IP (simulated)
k8s-master           CentOS 7.7    2C/4G/20G    172.16.1.110    10.0.0.110
k8s-node01           CentOS 7.7    2C/4G/20G    172.16.1.111    10.0.0.111
k8s-node02           CentOS 7.7    2C/4G/20G    172.16.1.112    10.0.0.112

 

What is Helm

Before Helm, deploying an application on Kubernetes meant creating the Deployment, Service, and other objects one by one, which is tedious. And as more projects adopt microservices, deploying and managing complex applications in containers becomes even more involved.

By packaging applications and supporting versioned, controllable releases, Helm greatly simplifies deploying and managing applications on Kubernetes.

In essence, Helm makes Kubernetes application management (Deployment, Service, and so on) configurable and dynamically generated: it renders the Kubernetes resource manifests (deployment.yaml, service.yaml) from templates and values, and then has those resources deployed to the cluster automatically.

Helm is the official package manager for Kubernetes, similar in spirit to YUM, and it encapsulates the deployment workflow. Helm has three important concepts: chart, release, and repository.

  • A chart is the collection of information needed to create an application: configuration templates for the various Kubernetes objects, parameter definitions, dependencies, documentation, and so on. Think of a chart as a software package in apt or yum (a minimal on-disk layout is sketched after this list).
  • A release is a running instance of a chart, i.e. a running application. When a chart is installed into a Kubernetes cluster, a release is created. The same chart can be installed into the same cluster multiple times, and each installation is a separate release (with different values, one chart can produce several releases).
  • A repository is where charts are published and stored.
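
To make the chart concept concrete, a minimal chart is just a directory with a few well-known files (a sketch only; the directory name is illustrative, and a complete working chart is built in the example section below):

my-chart/              # chart directory; by convention named after the chart
    Chart.yaml         # chart metadata: name, version, description, maintainers
    values.yaml        # default, user-overridable configuration values
    templates/         # Kubernetes manifest templates rendered with the values
        deployment.yaml
        service.yaml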

Helm (v2) consists of two components: the Helm client and the Tiller server.

The Helm client is responsible for creating and managing charts and releases and for talking to Tiller. The Tiller server runs inside the Kubernetes cluster; it handles requests from the Helm client and interacts with the Kubernetes API Server.

 

Helm deployment

More and more companies and teams use Helm, the Kubernetes package manager, and we will also use Helm to install common Kubernetes components. Helm v2 consists of the helm client command-line tool and the server-side tiller.

Helm's GitHub repository

https://github.com/helm/helm

 

Version deployed in this article: Helm v2.16.9 (client and Tiller).

 

Helm installation

[root@k8s-master software]# pwd
/root/software 
[root@k8s-master software]# wget https://get.helm.sh/helm-v2.16.9-linux-amd64.tar.gz 
[root@k8s-master software]# 
[root@k8s-master software]# tar xf helm-v2.16.9-linux-amd64.tar.gz
[root@k8s-master software]# ll
total 12624
-rw-r--r-- 1 root root 12926032 Jun 16 06:55 helm-v2.16.9-linux-amd64.tar.gz
drwxr-xr-x 2 3434 3434       50 Jun 16 06:55 linux-amd64
[root@k8s-master software]# 
[root@k8s-master software]# cp -a linux-amd64/helm /usr/bin/helm
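
A quick sanity check that the client binary is installed and on the PATH (optional; the --client flag avoids contacting Tiller, which is not deployed yet):

# confirm the helm client works; prints only the client version
helm version --client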

 

Because the Kubernetes API Server has RBAC enabled, we need to create a service account named tiller for Tiller and assign it a suitable role. For simplicity, we bind it to the built-in cluster-admin ClusterRole.

[root@k8s-master helm]# pwd
/root/k8s_practice/helm
[root@k8s-master helm]# 
[root@k8s-master helm]# cat rbac-helm.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
[root@k8s-master helm]# 
[root@k8s-master helm]# kubectl apply -f rbac-helm.yaml 
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
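
A quick check that both objects were created (optional, standard kubectl queries):

# verify the ServiceAccount and ClusterRoleBinding created above
kubectl get serviceaccount tiller -n kube-system
kubectl get clusterrolebinding tiller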

 

Initialize the Helm client and server

[root@k8s-master helm]# helm init --service-account tiller
………………
[root@k8s-master helm]# kubectl get pod -n kube-system -o wide | grep 'tiller'
tiller-deploy-8488d98b4c-j8txs       0/1     Pending   0          38m     <none>         <none>       <none>           <none>
[root@k8s-master helm]# 
##### The pod did not come up because the tiller image could not be pulled; describe the pod to see which image it needs
[root@k8s-master helm]# kubectl describe pod tiller-deploy-8488d98b4c-j8txs -n kube-system
Name:           tiller-deploy-8488d98b4c-j8txs
Namespace:      kube-system
Priority:       0
Node:           <none>
Labels:         app=helm
                name=tiller
                pod-template-hash=8488d98b4c
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/tiller-deploy-8488d98b4c
Containers:
  tiller:
    Image:       gcr.io/kubernetes-helm/tiller:v2.16.9
    Ports:       44134/TCP, 44135/TCP
    Host Ports:  0/TCP, 0/TCP
    Liveness:    http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:   http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from tiller-token-kjqb7 (ro)
Conditions:
………………

 

As shown above, the tiller image is hosted on gcr.io and cannot be pulled from this environment, so we point Tiller at a mirrored image instead.

[root@k8s-master helm]# helm init --upgrade --tiller-image registry.cn-beijing.aliyuncs.com/google_registry/tiller:v2.16.9
[root@k8s-master helm]# 
### after waiting a moment
[root@k8s-master helm]# kubectl get pod -o wide -A | grep 'till'
kube-system    tiller-deploy-7b7787d77-zln6t    1/1     Running   0    8m43s   10.244.4.123   k8s-node01   <none>    <none>

As shown above, the Helm server component Tiller is now deployed and running.
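
Alternatively, the mirrored image can be passed on the very first init, which avoids the failed gcr.io pull altogether (same image and service account as above):

# one-step init against the mirror image; equivalent end state
helm init --service-account tiller \
  --tiller-image registry.cn-beijing.aliyuncs.com/google_registry/tiller:v2.16.9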

 

Check the helm version

[root@k8s-master helm]# helm version
Client: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"dirty"}

 

Using Helm

Helm repository sources

The chart repositories helm uses by default

[root@k8s-master helm]# helm repo list
NAME      URL 
stable    https://kubernetes-charts.storage.googleapis.com
local     http://127.0.0.1:8879/charts
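
Note that the old stable URL shown above has since been retired. If it is unreachable, the stable repository can be re-pointed to its current home (or to a mirror, as in the next step):

# re-point the stable repo to the current upstream chart archive
helm repo remove stable
helm repo add stable https://charts.helm.sh/stable
helm repo update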

 

Change the helm repository source (whether to change it depends on your situation; usually it is not necessary)

helm repo remove stable
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo update
helm repo list

 

Where downloaded chart archives are stored

/root/.helm/cache/archive
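
For example, after installing a chart from a repository, the cached archives can be listed there:

# chart archives cached by the helm client
ls -lh /root/.helm/cache/archive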

 

Common helm application operations

# list all applications available in the chart repositories
helm search
# search for a specific application
helm search memcached
# show detailed information about an application
helm inspect stable/memcached
# install a package with helm; --name specifies the release name
helm install --name memcached1 stable/memcached
# list installed packages (releases)
helm list
# delete the specified release
helm delete memcached1

 

Common helm commands

Chart management

create:  create a new chart with the given name
fetch:   download a chart from a repository and (optionally) unpack it into a local directory
inspect: show details of a chart
package: package a chart directory into a chart archive
lint:    run a syntax check on a chart
verify:  verify that the chart at the given path has been signed and is valid
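
A typical chart workflow built from these subcommands might look like this (the chart name is illustrative, not from this article):

# scaffold, check, and package a chart
helm create demo-chart          # generates Chart.yaml, values.yaml and templates/
helm lint demo-chart/           # syntax check
helm package demo-chart/        # produces demo-chart-<version>.tgz in the current directory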

 

Release management

get:      download the information of a named release
delete:   delete the given release from Kubernetes
install:  install a chart
list:     list releases
upgrade:  upgrade a release
rollback: roll a release back to a previous revision
status:   show the status of a release
history:  fetch the history of a release

 

Common helm operations

# add a repository
helm repo add REPO_INFO   # e.g. helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
##### examples
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm repo add elastic https://helm.elastic.co
# list the configured helm repositories
helm repo list
# scaffold a chart [for reference; usually the chart files are written by hand]
helm create CHART_PATH
# deploy a release from the specified chart
helm install --name RELEASE_NAME CHART_PATH
# simulate installing a release from the specified chart and print debug information
helm install --dry-run --debug --name RELEASE_NAME CHART_PATH
# list deployed releases
helm list
# list all releases, including deleted ones
helm list --all
# show the status of the specified release
helm status RELEASE_NAME
# roll back to the specified revision of a release (the helm release revision)
helm rollback RELEASE_NAME REVISION_NUM
# show the history of the specified release
helm history RELEASE_NAME
# package the specified chart, e.g. helm package my-test-app/
helm package CHART_PATH
# run a syntax check on the specified chart
helm lint CHART_PATH
# show details of the specified chart
helm inspect CHART_PATH
# delete the resources of the specified release from Kubernetes [the release record is still visible in helm list --all]
helm delete RELEASE_NAME
# delete the resources of the specified release from Kubernetes and remove the release record
helm delete --purge RELEASE_NAME

The commands above pair well with the example below, where more of the details become visible.

 

Helm example

Chart files

[root@k8s-master helm]# pwd
/root/k8s_practice/helm
[root@k8s-master helm]# 
[root@k8s-master helm]# mkdir my-test-app
[root@k8s-master helm]# cd my-test-app
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# ll
total 8
-rw-r--r-- 1 root root 158 Jul 16 17:53 Chart.yaml
drwxr-xr-x 2 root root  49 Jul 16 21:04 templates
-rw-r--r-- 1 root root 129 Jul 16 21:04 values.yaml
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# cat Chart.yaml 
apiVersion: v1
appVersion: v2.2
description: my test app
keywords:
- myapp
maintainers:
- email: [email protected]
  name: zhang
# the name value matches the chart directory name
name: my-test-app
version: v1.0.0
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# cat values.yaml 
deployname: my-test-app02
replicaCount: 2
images:
  repository: registry.cn-beijing.aliyuncs.com/google_registry/myapp
  tag: v2
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# ll templates/
total 8
-rw-r--r-- 1 root root 544 Jul 16 21:04 deployment.yaml
-rw-r--r-- 1 root root 222 Jul 16 20:41 service.yaml
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# cat templates/deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.deployname }}
  labels:
    app: mytestapp-deploy
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: mytestapp
      env: test
  template:
    metadata:
      labels:
        app: mytestapp
        env: test
        description: mytest
    spec:
      containers:
      - name: myapp-pod
        image: {{ .Values.images.repository }}:{{ .Values.images.tag }}
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80

[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# cat templates/service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: my-test-app
  namespace: default
spec:
  type: NodePort
  selector:
    app: mytestapp
    env: test
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
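
Before creating the release, the chart can optionally be checked and rendered locally with the lint and dry-run commands listed earlier (run from /root/k8s_practice/helm; output omitted):

# syntax-check the chart and render its templates without installing anything
helm lint my-test-app/
helm install --dry-run --debug --name mytest-app01 my-test-app/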

 

Create a release

[root@k8s-master my-test-app]# pwd
/root/k8s_practice/helm/my-test-app
[root@k8s-master my-test-app]# ll
total 8
-rw-r--r-- 1 root root 160 Jul 16 21:15 Chart.yaml
drwxr-xr-x 2 root root  49 Jul 16 21:04 templates
-rw-r--r-- 1 root root 129 Jul 16 21:04 values.yaml
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# helm install --name mytest-app01 .   ### from the parent directory this would be: helm install --name mytest-app01 my-test-app/
NAME:   mytest-app01
LAST DEPLOYED: Thu Jul 16 21:18:08 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME           READY  UP-TO-DATE  AVAILABLE  AGE
my-test-app02  0/2    2           0          0s

==> v1/Pod(related)
NAME                            READY  STATUS             RESTARTS  AGE
my-test-app02-58cb6b67fc-4ss4v  0/1    ContainerCreating  0         0s
my-test-app02-58cb6b67fc-w2nhc  0/1    ContainerCreating  0         0s

==> v1/Service
NAME         TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)       AGE
my-test-app  NodePort  10.110.82.62  <none>       80:30965/TCP  0s

[root@k8s-master my-test-app]# helm list
NAME            REVISION    UPDATED                     STATUS      CHART                 APP VERSION    NAMESPACE
mytest-app01    1           Thu Jul 16 21:18:08 2020    DEPLOYED    my-test-app-v1.0.0    v2.2           default
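
The same summary can be pulled up again at any time with the status command covered earlier:

# show the current status and resources of the release
helm status mytest-app01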

 

Access with curl

[root@k8s-master ~]# kubectl get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP             NODE         NOMINATED NODE   READINESS GATES
my-test-app02-58cb6b67fc-4ss4v   1/1     Running   0          9m3s   10.244.2.187   k8s-node02   <none>           <none>
my-test-app02-58cb6b67fc-w2nhc   1/1     Running   0          9m3s   10.244.4.134   k8s-node01   <none>           <none>
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get svc -o wide
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE    SELECTOR
kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP        65d    <none>
my-test-app   NodePort    10.110.82.62   <none>        80:30965/TCP   9m8s   app=mytestapp,env=test
[root@k8s-master ~]#
##### access via the Service IP
[root@k8s-master ~]# curl 10.110.82.62
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# 
[root@k8s-master ~]# curl 10.110.82.62/hostname.html
my-test-app02-58cb6b67fc-4ss4v
[root@k8s-master ~]# 
[root@k8s-master ~]# curl 10.110.82.62/hostname.html
my-test-app02-58cb6b67fc-w2nhc
[root@k8s-master ~]# 
##### access via the node IP and NodePort
[root@k8s-master ~]# curl 172.16.1.110:30965/hostname.html
my-test-app02-58cb6b67fc-w2nhc
[root@k8s-master ~]# 
[root@k8s-master ~]# curl 172.16.1.110:30965/hostname.html
my-test-app02-58cb6b67fc-4ss4v

 

Chart update

Modify values.yaml

[root@k8s-master my-test-app]# pwd
/root/k8s_practice/helm/my-test-app
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# cat values.yaml 
deployname: my-test-app02
replicaCount: 2
images:
  repository: registry.cn-beijing.aliyuncs.com/google_registry/myapp
  # changed the tag
  tag: v3
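
Equivalently, a single value can be overridden on the command line instead of editing values.yaml (a possible shortcut, not what is done below):

# override images.tag for this upgrade only, leaving values.yaml untouched
helm upgrade mytest-app01 . --set images.tag=v3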

 

Upgrade the release

[root@k8s-master my-test-app]# helm list
NAME            REVISION    UPDATED                     STATUS      CHART                 APP VERSION    NAMESPACE
mytest-app01    1           Thu Jul 16 21:18:08 2020    DEPLOYED    my-test-app-v1.0.0    v2.2           default  
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# helm upgrade mytest-app01 .    ### from the parent directory this would be: helm upgrade mytest-app01 my-test-app/
Release "mytest-app01" has been upgraded.
LAST DEPLOYED: Thu Jul 16 21:32:25 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME           READY  UP-TO-DATE  AVAILABLE  AGE
my-test-app02  2/2    1           2          14m

==> v1/Pod(related)
NAME                            READY  STATUS             RESTARTS  AGE
my-test-app02-58cb6b67fc-4ss4v  1/1    Running            0         14m
my-test-app02-58cb6b67fc-w2nhc  1/1    Running            0         14m
my-test-app02-6b84df49bb-lpww7  0/1    ContainerCreating  0         0s

==> v1/Service
NAME         TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)       AGE
my-test-app  NodePort  10.110.82.62  <none>       80:30965/TCP  14m

[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# helm list
NAME            REVISION    UPDATED                     STATUS      CHART                 APP VERSION    NAMESPACE
mytest-app01    2           Thu Jul 16 21:32:25 2020    DEPLOYED    my-test-app-v1.0.0    v2.2           default

Repeat the curl access shown above and the application now reports Version: v3 instead of v2.
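
The upgrade created revision 2 of the release; if the new tag turned out to be broken, the history and rollback commands shown earlier would bring revision 1 back:

# inspect the release history and roll back to the first revision if needed
helm history mytest-app01
helm rollback mytest-app01 1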

 

Further reading

1. Helm official website: https://helm.sh

2. Helm installation guide on the official site

3. Helm's GitHub repository: https://github.com/helm/helm

Done!

 


 

 
