[kubernetes] Installing helm
阿新 • Published: 2019-01-08
For a normal installation, refer to the article below, which also covers the relationship between helm and tiller, so that is not repeated here.
In an environment with poor network connectivity, however, installation does not always go smoothly. The following is an alternative, pod-based installation approach.
1. Download the helm release tarball:
https://github.com/kubernetes/helm/releases
tar -xvzf $HELM.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
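The unpack-and-install steps above can be wrapped in a small script. The sketch below simulates the flow in a scratch directory so it can be tried without root; the tarball and install directory are stand-ins for the real downloaded release and /usr/local/bin.

```shell
#!/bin/sh
# Sketch of the unpack-and-install flow from step 1, run against a
# scratch directory. HELM_TARBALL and INSTALL_DIR are placeholders;
# a real install would use the downloaded release tarball and
# /usr/local/bin.
set -eu

WORK_DIR=$(mktemp -d)
INSTALL_DIR="$WORK_DIR/bin"          # stand-in for /usr/local/bin
HELM_TARBALL="$WORK_DIR/helm.tar.gz" # stand-in for the downloaded release

# Fabricate a tarball with the same layout as a real helm release
# (linux-amd64/helm), purely for demonstration.
mkdir -p "$WORK_DIR/linux-amd64" "$INSTALL_DIR"
printf '#!/bin/sh\necho helm-stub\n' > "$WORK_DIR/linux-amd64/helm"
chmod +x "$WORK_DIR/linux-amd64/helm"
tar -czf "$HELM_TARBALL" -C "$WORK_DIR" linux-amd64

# The actual install steps from the article:
tar -xzf "$HELM_TARBALL" -C "$WORK_DIR"
mv "$WORK_DIR/linux-amd64/helm" "$INSTALL_DIR/helm"

"$INSTALL_DIR/helm"   # prints: helm-stub
```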
2. Verify the installation:
helm version
$ helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Error: could not find tiller
3. Install tiller. First, search Docker Hub for a tiller image:
docker search tiller
$ docker search tiller
NAME                               DESCRIPTION                                     STARS  OFFICIAL  AUTOMATED
sapcc/tiller                       Mirror of https://gcr.io/kubernetes-helm/t...   5
jessestuart/tiller                 Nightly multi-architecture (amd64, arm64, ...   4                [OK]
ist0ne/tiller                      https://gcr.io/kubernetes-helm/tiller           3                [OK]
timotto/rpi-tiller                 k8s.io/tiller for Raspberry Pi                  1
itinerisltd/tiller                                                                 1
rancher/tiller                                                                     1
luxas/tiller                                                                       1
ibmcom/tiller                      Docker Image for IBM Cloud private-CE (Com...   1
ansibleplaybookbundle/tiller-apb   An APB that deploys tiller for use with helm.   0                [OK]
pcanham/tiller                     tiller image for Raspberry Pi for testing ...   0
kubeapps/tiller-proxy                                                              0
appscode/tiller                                                                    0
jmgao1983/tiller                   from gcr.io/kubernetes-helm/tiller              0                [OK]
anjia0532/tiller                                                                   0
4admin2root/tiller                 gcr.io/kubernetes-helm/tiller                   0                [OK]
ibmcom/tiller-ppc64le              Docker Image for IBM Cloud Private-CE (Com...   0
szgrgo/helm-with-tiller            Use helm and tiller together                    0
Note the first result's description: Mirror of https://gcr.io/kubernetes-helm/t...
Because gcr.io is blocked, we use this mirror image instead.
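If the nodes can reach Docker Hub directly, another way to work around the block is to pre-pull the mirror and retag it under the official name, so the kubelet finds the image in the local cache. A sketch (the helper function name is made up here; the tags are the ones used in this article):

```shell
# Pull a Docker Hub mirror of a gcr.io image and retag it under the
# official name, so a pod referencing the gcr.io image finds it in the
# node's local image cache. pull_and_retag is an illustrative helper,
# not a docker subcommand.
pull_and_retag() {
    mirror=$1
    official=$2
    docker pull "$mirror" && docker tag "$mirror" "$official"
}

# usage:
# pull_and_retag sapcc/tiller:v2.11.0 gcr.io/kubernetes-helm/tiller:v2.11.0
```

This only helps when the pod's imagePullPolicy is IfNotPresent (as it is for the tiller deployment shown later in this article); with Always, the kubelet would still try to contact gcr.io.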
4. Write the YAML manifest for tiller's RBAC objects:
vi tiller.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
5. Apply the manifest:
kubectl apply -f tiller.yaml
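Before moving on, it can help to confirm that the ServiceAccount and ClusterRoleBinding from the manifest were actually created. A sketch, wrapped in a function for reuse (the function name is illustrative; the resource names match the manifest above):

```shell
# Check that the RBAC objects created by tiller.yaml exist.
# verify_tiller_rbac is an illustrative helper name; the resource
# names come from the manifest in step 4.
verify_tiller_rbac() {
    kubectl get serviceaccount tiller -n kube-system &&
    kubectl get clusterrolebinding tiller
}

# usage:
# verify_tiller_rbac   # both commands should succeed if the apply worked
```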
6. Verify again:
$ helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Error: could not find a ready tiller pod
7. This means the tiller pod exists but is not running yet. List the pods:
$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                   READY  STATUS            RESTARTS  AGE
kube-system   kubernetes-dashboard-58f5cb49c-zf7cn   1/1    Running           0         2d
kube-system   tiller-deploy-9bdb7c6bc-28rv6          0/1    ImagePullBackOff  0         42s
8. View the error details:
$ kubectl describe pod tiller-deploy-9bdb7c6bc-28rv6 -n kube-system
Name:           tiller-deploy-9bdb7c6bc-28rv6
Namespace:      kube-system
Node:           mvxl2655/10.16.91.120
Start Time:     Thu, 22 Nov 2018 18:00:01 +0800
Labels:         app=helm
                name=tiller
                pod-template-hash=568637267
Annotations:    <none>
Status:         Pending
IP:             10.16.3.18
Controlled By:  ReplicaSet/tiller-deploy-9bdb7c6bc
Containers:
  tiller:
    Container ID:
    Image:          gcr.io/kubernetes-helm/tiller:v2.11.0
    Image ID:
    Ports:          44134/TCP, 44135/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from tiller-token-ls9t2 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  tiller-token-ls9t2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  tiller-token-ls9t2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age               From               Message
  ----     ------                 ----              ----               -------
  Normal   Scheduled              1m                default-scheduler  Successfully assigned tiller-deploy-9bdb7c6bc-28rv6 to mvxl2655
  Normal   SuccessfulMountVolume  1m                kubelet, mvxl2655  MountVolume.SetUp succeeded for volume "tiller-token-ls9t2"
  Normal   Pulling                1m (x2 over 1m)   kubelet, mvxl2655  pulling image "gcr.io/kubernetes-helm/tiller:v2.11.0"
  Warning  Failed                 55s (x2 over 1m)  kubelet, mvxl2655  Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.11.0": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed                 55s (x2 over 1m)  kubelet, mvxl2655  Error: ErrImagePull
  Warning  Failed                 50s (x5 over 1m)  kubelet, mvxl2655  Error: ImagePullBackOff
  Normal   SandboxChanged         49s (x7 over 1m)  kubelet, mvxl2655  Pod sandbox changed, it will be killed and re-created.
  Normal   BackOff                47s (x6 over 1m)  kubelet, mvxl2655  Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.11.0"
The last event shows the image pull failing and backing off: Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.11.0"
9. To fix this, switch to the mirror image found earlier by editing the deployment and changing the image reference:
$ kubectl edit deploy tiller-deploy -n kube-system
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: 2018-11-22T10:00:00Z
  generation: 2
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
  resourceVersion: "398202"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/tiller-deploy
  uid: 5fd7370d-ee3d-11e8-a632-0050568a39f2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm
      name: tiller
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: sapcc/tiller:v2.11.0
        imagePullPolicy: IfNotPresent
That is, replace image: gcr.io/kubernetes-helm/tiller:v2.11.0 with image: sapcc/tiller:v2.11.0.
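The same change can be made without opening an editor by using kubectl set image, which patches the container image in place and triggers the same rolling update. A sketch (the helper name is illustrative; the deployment and container names come from the manifest above):

```shell
# Non-interactive alternative to `kubectl edit`: patch the tiller
# container's image directly. patch_tiller_image is an illustrative
# helper name; resource and container names are taken from the
# deployment shown above.
patch_tiller_image() {
    kubectl set image deployment/tiller-deploy \
        tiller=sapcc/tiller:v2.11.0 -n kube-system
}

# usage:
# patch_tiller_image
```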
10. After saving, Kubernetes rolls out the change automatically. Checking the pods again, tiller is now in the Running state:
$ kubectl get pod -n kube-system
tiller-deploy-6b84d85487-4h272 1/1 Running 0 45s
11. Verify helm:
$ helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
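For reference, Helm v2's init command can point at a mirror image from the start, which avoids the edit-after-the-fact step entirely. A sketch, assuming helm v2.11 as used in this article, the sapcc mirror, and the tiller service account created in step 4:

```shell
# Initialize tiller directly with a mirror image. --service-account and
# --tiller-image are standard helm v2 init flags; the image tag is the
# mirror used in this article.
init_with_mirror() {
    helm init --service-account tiller \
        --tiller-image sapcc/tiller:v2.11.0
}

# usage:
# init_with_mirror
```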