Deploying a Kafka Cluster on Kubernetes with StatefulSet (backed by ZooKeeper)
Tags: k8s, selector
Preface
I recently investigated options for deploying a Kafka cluster on Kubernetes at work, and along the way also worked out a ZooKeeper deployment. Every configuration and step below has been tested and verified; if you hit a problem, leave a comment or message me.
Kafka depends on ZooKeeper
Kafka's brokers (and, with older clients, its consumers) register themselves in ZooKeeper; without that registration, consumers would have no way of knowing which brokers are alive. Enough preamble, on to the practical part!
This article deploys both the ZooKeeper and Kafka clusters using StatefulSets and dynamically provisioned storage.
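A StatefulSet behind a headless Service gives each replica a stable DNS name of the form `<pod>.<service>.<namespace>.svc.cluster.local`; the Kafka brokers rely on these names to reach ZooKeeper. A quick sketch of the names this produces, using the Service and namespace values from the manifests below:

```shell
# Derive the stable DNS names that a 3-replica StatefulSet named "zk" gets
# behind the headless Service "zk-headless" in namespace "liulei".
svc=zk-headless
ns=liulei
names=""
for i in 0 1 2; do
  names="$names zk-$i.$svc.$ns.svc.cluster.local"
done
echo "$names"
```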
Deploying ZooKeeper
```yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-headless
  namespace: liulei
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None  # headless Service; create a separate Service yourself if you need external access
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: liulei
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk  # referenced in kafka.yaml for the ZooKeeper connection; creates Pods zk-0, zk-1, zk-2, ... (three here)
  namespace: liulei
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-headless
  replicas: 3
  updateStrategy:
    type: RollingUpdate  # Pods managed by the StatefulSet are replaced gradually, in order
  podManagementPolicy: OrderedReady  # set to Parallel to create/delete Pods simultaneously instead of one by one
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
      - name: k8s-zk
        image: k8szk:1.0-3.4.10  # point this at your own image; a ZooKeeper image can be pulled from Alibaba Cloud
        imagePullPolicy: Always
        resources:
          requests:
            memory: "1Gi"
            cpu: "1000m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        # start-zookeeper generates the config and starts the server in the foreground;
        # --servers must match spec.replicas (3 here)
        - "start-zookeeper --servers=3 --data_dir=/var/lib/zookeeper/data --data_log_dir=/var/lib/zookeeper/data/log --conf_dir=/opt/zookeeper/conf --client_port=2181 --election_port=3888 --server_port=2888 --tick_time=2000 --init_limit=10 --sync_limit=5 --heap=512M --max_client_cnxns=60 --snap_retain_count=3 --purge_interval=12 --max_session_timeout=40000 --min_session_timeout=4000 --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: zookeeper-pvc
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: zookeeper-pvc
      labels:
        type: stateful
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: storageclass-default  # replace with your own dynamic StorageClass name
      resources:
        requests:
          storage: 1Gi
```
Verifying the ZooKeeper deployment
Once the ZooKeeper cluster is up, verify it (change the namespace after -n to your own):
```shell
$ for i in 0 1 2; do kubectl exec zk-$i -n liulei -- hostname; done
zk-0
zk-1
zk-2
```
```shell
$ for i in 0 1 2; do echo "myid zk-$i"; kubectl exec zk-$i -n liulei -- cat /var/lib/zookeeper/data/myid; done
myid zk-0
1
myid zk-1
2
myid zk-2
3
```
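The `myid` values above are each pod's ordinal plus one. This appears to be a convention of the `start-zookeeper` helper in this image (an assumption, but consistent with the output shown):

```shell
# myid looks to be derived from the pod hostname's ordinal, offset by one
# (zk-0 -> 1, zk-1 -> 2, zk-2 -> 3, matching the output above).
hostname=zk-2
ordinal=${hostname##*-}   # strip everything up to the last "-"
myid=$((ordinal + 1))
echo "$myid"
```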
```shell
$ for i in 0 1 2; do kubectl exec zk-$i -n liulei -- hostname -f; done
zk-0.zk-headless.liulei.svc.cluster.local
zk-1.zk-headless.liulei.svc.cluster.local
zk-2.zk-headless.liulei.svc.cluster.local
```
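These FQDNs are exactly what the `zookeeper.connect` override in the Kafka manifest is built from; sketching the assembly makes the format explicit:

```shell
# Assemble the zookeeper.connect string from the per-pod FQDNs listed above.
zk_connect=""
for i in 0 1 2; do
  zk_connect="${zk_connect:+$zk_connect,}zk-$i.zk-headless.liulei.svc.cluster.local:2181"
done
echo "$zk_connect"
```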
Deploying Kafka
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  namespace: liulei
  labels:
    app: kafka
spec:
  ports:
  - port: 9093
    name: server
  clusterIP: None  # headless Service; create a separate Service yourself if you need to expose a port externally
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
  namespace: liulei
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: liulei
spec:
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - kafka
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - zk
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
      - name: k8skafka
        imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/ccgg/k8skafka:v1  # pull the matching image from Alibaba Cloud if needed
        resources:
          requests:
            memory: "1Gi"
            cpu: "1000m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
        ports:
        - containerPort: 9093
          name: server
        # The zookeeper.connect override below is critical: it connects Kafka to ZooKeeper.
        # The address format is <pod>.<zookeeper service>.<namespace>.svc.cluster.local:2181
        command:
        - sh
        - -c
        - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} --override listeners=PLAINTEXT://:9093 --override zookeeper.connect=zk-0.zk-headless.liulei.svc.cluster.local:2181,zk-1.zk-headless.liulei.svc.cluster.local:2181,zk-2.zk-headless.liulei.svc.cluster.local:2181 --override log.dir=/var/lib/kafka"
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx512M -Xms512M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: kafka-data
          mountPath: /var/lib/kafka
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9093"
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: kafka-data
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: storageclass-default  # replace with your own dynamic StorageClass name
      resources:
        requests:
          storage: 1Gi
```
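Note how `broker.id=${HOSTNAME##*-}` in the command above turns each pod's ordinal into a unique broker id, using plain POSIX suffix stripping on the hostname:

```shell
# ${HOSTNAME##*-} removes the longest prefix ending in "-", leaving the ordinal,
# so kafka-0, kafka-1, kafka-2 get broker ids 0, 1, 2.
HOSTNAME=kafka-1
broker_id=${HOSTNAME##*-}
echo "broker.id=$broker_id"
```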
Once the deployment succeeds, check the Pods with kubectl get pods -n liulei.
Verify that Kafka works:
1. Get a shell in kafka-0: kubectl exec -it kafka-0 -n liulei -- bash
Then change into the container's config directory: cd /opt/kafka/config
2. Create a topic named aaa: kafka-topics.sh --create --topic aaa --zookeeper zk-0.zk-headless.liulei.svc.cluster.local:2181,zk-1.zk-headless.liulei.svc.cluster.local:2181,zk-2.zk-headless.liulei.svc.cluster.local:2181 --partitions 3 --replication-factor 2
The result:
Created topic "aaa".
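With --partitions 3 and --replication-factor 2, the topic has 3 × 2 = 6 partition replicas spread over the 3 brokers, roughly two per broker; since each partition's two replicas sit on different brokers, losing a single broker (the case the PodDisruptionBudget's minAvailable: 2 allows) still leaves every partition with one live replica. The arithmetic:

```shell
# Total partition replicas and their rough spread across brokers.
partitions=3
replication=2
brokers=3
total_replicas=$((partitions * replication))   # 6
per_broker=$((total_replicas / brokers))       # about 2 each
echo "$total_replicas replicas, about $per_broker per broker"
```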
3. Start a consumer on topic aaa (this session will display incoming messages): kafka-console-consumer.sh --topic aaa --bootstrap-server localhost:9093
4. Open a new session into another container, kafka-1: kubectl exec -it kafka-1 -n liulei -- bash
Start a producer there: kafka-console-producer.sh --topic aaa --broker-list localhost:9093
Type:
hello
i love you
After pressing Enter, the messages appear in the consumer session.
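The two interactive sessions above can be collapsed into a one-shot smoke test. This is only a sketch, assuming the same pod names and namespace as above, and guarded so it does nothing where kubectl is unavailable:

```shell
# Produce two messages from kafka-1, then read them back from kafka-0.
ns=liulei
topic=aaa
if command -v kubectl >/dev/null 2>&1; then
  printf 'hello\ni love you\n' | kubectl exec -i kafka-1 -n "$ns" -- \
    kafka-console-producer.sh --topic "$topic" --broker-list localhost:9093
  kubectl exec kafka-0 -n "$ns" -- \
    kafka-console-consumer.sh --topic "$topic" --bootstrap-server localhost:9093 \
    --from-beginning --max-messages 2
else
  echo "kubectl not found; run this inside your cluster environment"
fi
```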
Summary
Every step above was worked out pitfall by pitfall and has been tested end to end. If this article helped you, a like would be much appreciated ❤.
Feel free to visit my blog; you may find other articles of interest there.
leige24's blog on CSDN (K8s, Java, WSO2): blog.csdn.net