Running an etcd Cluster on Kubernetes
I. Key Concepts:
1. Headless Services
NOTE: When we run an etcd cluster on k8s, the members advertise themselves to each other by DNS name, not by pod IP, because a pod's IP changes whenever the pod is recreated. An ordinary k8s Service does not work here: the members need the Service name to resolve directly to the pod IP, not to a Service (cluster) IP, so that peer identity checks between members can succeed. A headless Service (clusterIP: None) provides exactly this mapping.
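For example, once the headless Service named etcd1 (defined below) exists, resolving that name from inside the cluster should return the backing pod's IP directly. A quick check, assuming a busybox image is available in your cluster:

# Resolve the headless Service name from a throwaway pod.
# With clusterIP: None, the name resolves straight to the pod IP, not a cluster IP.
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup etcd1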
2. env (Downward API)
NOTE: How can we know a pod's IP before the pod has started, and how do we get the IP that etcd should bind to? We can use env to pass the pod IP to etcd as an environment variable: when the pod starts, etcd reads its own pod IP from the environment and binds to the container's IP.
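This is the Kubernetes Downward API. The fragment below is the exact pattern used in the Deployment manifests that follow: a fieldRef on status.podIP is injected as the environment variable host_ip when the container starts.

env:
- name: host_ip                  # available to the container's shell as ${host_ip}
  valueFrom:
    fieldRef:
      fieldPath: status.podIP    # filled in by the kubelet at pod startup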
II. Architecture:
A three-node etcd cluster is used:
the members are etcd1, etcd2, and etcd3.
etcd's persistent data is stored on NFS.
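The manifests below mount one NFS path per member (server 192.168.85.139, paths /data/v1 through /data/v3). A sketch of the corresponding server-side setup; the export options are an assumption, adjust them to your environment:

# On the NFS server (192.168.85.139): one data directory per etcd member.
mkdir -p /data/v1 /data/v2 /data/v3
# /etc/exports -- example options only:
#   /data/v1 *(rw,sync,no_root_squash)
#   /data/v2 *(rw,sync,no_root_squash)
#   /data/v3 *(rw,sync,no_root_squash)
exportfs -r    # re-export after editing /etc/exports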
etcd1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: etcd1
  labels:
    name: etcd1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: etcd1
  template:
    metadata:
      labels:
        app: etcd1
    spec:
      containers:
      - name: etcd1
        image: myetcd
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /data
          name: etcd-data
        env:
        - name: host_ip              # pod IP injected via the Downward API
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        # Running through /bin/sh -c lets the shell expand ${host_ip} in the args.
        command: ["/bin/sh","-c"]
        # The peer names in --initial-cluster are the headless Service names,
        # so they resolve directly to pod IPs.
        args:
        - /tmp/etcd --name etcd1
          --initial-advertise-peer-urls http://${host_ip}:2380
          --listen-peer-urls http://${host_ip}:2380
          --listen-client-urls http://${host_ip}:2379,http://127.0.0.1:2379
          --advertise-client-urls http://${host_ip}:2379
          --initial-cluster-token etcd-cluster-1
          --initial-cluster etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380
          --initial-cluster-state new
          --data-dir=/data
      volumes:
      - name: etcd-data
        nfs:
          server: 192.168.85.139
          path: /data/v1
etcd2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: etcd2
  labels:
    name: etcd2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: etcd2
  template:
    metadata:
      labels:
        app: etcd2
    spec:
      containers:
      - name: etcd2
        image: myetcd
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /data
          name: etcd-data
        env:
        - name: host_ip
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        command: ["/bin/sh","-c"]
        args:
        - /tmp/etcd --name etcd2
          --initial-advertise-peer-urls http://${host_ip}:2380
          --listen-peer-urls http://${host_ip}:2380
          --listen-client-urls http://${host_ip}:2379,http://127.0.0.1:2379
          --advertise-client-urls http://${host_ip}:2379
          --initial-cluster-token etcd-cluster-1
          --initial-cluster etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380
          --initial-cluster-state new
          --data-dir=/data
      volumes:
      - name: etcd-data
        nfs:
          server: 192.168.85.139
          path: /data/v2
etcd3.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: etcd3
  labels:
    name: etcd3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: etcd3
  template:
    metadata:
      labels:
        app: etcd3
    spec:
      containers:
      - name: etcd3
        image: myetcd
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /data
          name: etcd-data
        env:
        - name: host_ip
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        command: ["/bin/sh","-c"]
        args:
        - /tmp/etcd --name etcd3
          --initial-advertise-peer-urls http://${host_ip}:2380
          --listen-peer-urls http://${host_ip}:2380
          --listen-client-urls http://${host_ip}:2379,http://127.0.0.1:2379
          --advertise-client-urls http://${host_ip}:2379
          --initial-cluster-token etcd-cluster-1
          --initial-cluster etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380
          --initial-cluster-state new
          --data-dir=/data
      volumes:
      - name: etcd-data
        nfs:
          server: 192.168.85.139
          path: /data/v3
etcd1-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: etcd1
spec:
  clusterIP: None
  ports:
  - name: client
    port: 2379
    targetPort: 2379
  - name: message
    port: 2380
    targetPort: 2380
  selector:
    app: etcd1
etcd2-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: etcd2
spec:
  clusterIP: None
  ports:
  - name: client
    port: 2379
    targetPort: 2379
  - name: message
    port: 2380
    targetPort: 2380
  selector:
    app: etcd2
etcd3-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: etcd3
spec:
  clusterIP: None
  ports:
  - name: client
    port: 2379
    targetPort: 2379
  - name: message
    port: 2380
    targetPort: 2380
  selector:
    app: etcd3
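With all six manifests in place, create the Services first so the DNS names etcd1/etcd2/etcd3 already resolve when the members start looking for each other, then create the Deployments. This ordering is a suggestion, not something the manifests themselves enforce:

kubectl apply -f etcd1-svc.yml -f etcd2-svc.yml -f etcd3-svc.yml
kubectl apply -f etcd1.yml -f etcd2.yml -f etcd3.yml
kubectl get pods -l 'app in (etcd1,etcd2,etcd3)'   # wait until all three are Running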
Check the health of the etcd cluster from any member:
# kubectl exec etcd1-79bdcb47c9-fwqps -- /tmp/etcdctl cluster-health
member 876043ef79ada1ea is healthy: got healthy result from http://10.244.1.113:2379
member 99eab3685d8363a1 is healthy: got healthy result from http://10.244.1.111:2379
member dcb68c82481661be is healthy: got healthy result from http://10.244.1.112:2379
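As a further sanity check, a key written through one member should be readable through another. The cluster-health output above indicates the etcdctl v2 API, so a sketch using v2 set/get (pod names are illustrative; substitute your own from kubectl get pods):

# Write a key via the etcd1 pod, then read it back via the etcd2 pod.
kubectl exec etcd1-79bdcb47c9-fwqps -- /tmp/etcdctl set /test/key hello
kubectl exec <etcd2-pod-name> -- /tmp/etcdctl get /test/key    # should print: hello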