
Deploying ZooKeeper and Kafka clusters on k8s

Network unavailability when ZooKeeper runs with an Istio sidecar

If ZooKeeper is deployed with an Istio sidecar, leader election fails with Connection refused errors.

This is mainly because ZooKeeper's server-to-server (quorum and election) listeners bind to the pod IP by default, while Istio requires workloads to listen on 0.0.0.0, so quorumListenOnAllIPs=true has to be set.

For details, see: https://istio.io/latest/faq/applications/

This is not limited to ZooKeeper; the same problem appears when a sidecar is injected into Apache NiFi, Cassandra, Elasticsearch, or Redis.

The official Docker zookeeper image does not expose a quorumListenOnAllIPs option, so the setting has to be added to the configuration by hand; see this issue for details: https://github.com/31z4/zookeeper-docker/issues/117
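
For reference, the goal is simply to get this one line into zoo.cfg (a minimal sketch; in the official image the config file is typically /conf/zoo.cfg):

# zoo.cfg: bind the quorum/election listeners (2888/3888) to all interfaces
# instead of the pod IP, so the Istio sidecar can intercept the traffic
quorumListenOnAllIPs=true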

Alternatively, you can use the bitnami/zookeeper image, which already supports quorumListenOnAllIPs and exposes it through the ZOO_LISTEN_ALLIPS_ENABLED environment variable; the simple deployment files below take this approach.

Installing the ZooKeeper cluster

Create zookeeper-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
  labels:
    app: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-1
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-2
  labels:
    app: zookeeper-2
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-2
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-3
  labels:
    app: zookeeper-3
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-3
Create zookeeper-deployment.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-1
  template:
    metadata:
      labels:
        app: zookeeper-1
    spec:
      containers:
      - name: zookeeper
        image: bitnami/zookeeper:3.6.2
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        - name: ZOO_LISTEN_ALLIPS_ENABLED
          value: "true"
        - name: ZOO_SERVER_ID
          value: "1"
        - name: ZOO_SERVERS
          value: 0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
--- 
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-2
  template:
    metadata:
      labels:
        app: zookeeper-2
    spec:
      containers:
      - name: zookeeper
        image: bitnami/zookeeper:3.6.2
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        - name: ZOO_LISTEN_ALLIPS_ENABLED
          value: "true"
        - name: ZOO_SERVER_ID
          value: "2"
        - name: ZOO_SERVERS
          value: zookeeper-1:2888:3888,0.0.0.0:2888:3888,zookeeper-3:2888:3888
--- 

kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper-3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-3
  template:
    metadata:
      labels:
        app: zookeeper-3
    spec:
      containers:
      - name: zookeeper
        image: bitnami/zookeeper:3.6.2
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        - name: ZOO_LISTEN_ALLIPS_ENABLED
          value: "true"
        - name: ZOO_SERVER_ID
          value: "3"
        - name: ZOO_SERVERS
          value: zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888

Run

kubectl apply -f zookeeper-svc.yaml -n zookeeper
kubectl apply -f zookeeper-deployment.yaml -n zookeeper
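
To confirm the ensemble formed correctly, a quick check along these lines should work (a sketch: it assumes the zookeeper namespace already exists and that zkServer.sh is on the PATH in the bitnami image):

# all three pods should reach Running
kubectl get pods -n zookeeper
# one node should report Mode: leader, the other two Mode: follower
kubectl exec -n zookeeper deploy/zookeeper-1 -- zkServer.sh status
kubectl exec -n zookeeper deploy/zookeeper-2 -- zkServer.sh status
kubectl exec -n zookeeper deploy/zookeeper-3 -- zkServer.sh status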

Installing the Kafka cluster

Create kafka-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: kafka-service-1
  labels:
    app: kafka-service-1
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-1
    targetPort: 9092
    nodePort: 30901
    protocol: TCP
  selector:
    app: kafka-service-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-2
  labels:
    app: kafka-service-2
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-2
    targetPort: 9092
    nodePort: 30902
    protocol: TCP
  selector:
    app: kafka-service-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-3
  labels:
    app: kafka-service-3
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-3
    targetPort: 9092
    nodePort: 30903
    protocol: TCP
  selector:
    app: kafka-service-3

Create kafka-deployment.yaml (be sure to replace the values in angle brackets)

You can look up the CLUSTER-IPs with kubectl get svc -n zookeeper. Pay special attention to KAFKA_ADVERTISED_LISTENERS; this is where I got stuck. Without it the command-line tools still work, but a Java client has no proxy in between and is handed the broker's internal address, so it tries to connect to the CLUSTER-IP directly and fails to reach the node (you will see the program connecting straight to the CLUSTER-IP). A quick external check is sketched at the end of the test section.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-1
  template:
    metadata:
      labels:
        name: kafka-service-1
        app: kafka-service-1
    spec:
      containers:
      - name: kafka-1
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: <kafka-svc1-CLUSTER-IP>
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_CREATE_TOPICS
          value: mytopic:2:1
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://<master-ip, e.g. 192.168.128.52>:30901
        - name: KAFKA_LISTENERS
          value: PLAINTEXT://0.0.0.0:9092
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-2
  template:
    metadata:
      labels:
        name: kafka-service-2
        app: kafka-service-2
    spec:
      containers:
      - name: kafka-2
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: <kafka-svc2-CLUSTER-IP>
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
        - name: KAFKA_BROKER_ID
          value: "2"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://<master-ip, e.g. 192.168.128.52>:30902
        - name: KAFKA_LISTENERS
          value: PLAINTEXT://0.0.0.0:9092
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-3
  template:
    metadata:
      labels:
        name: kafka-service-3
        app: kafka-service-3
    spec:
      containers:
      - name: kafka-3
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: <kafka-svc3-CLUSTER-IP>
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
        - name: KAFKA_BROKER_ID
          value: "3"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://<master-ip, e.g. 192.168.128.52>:30903
        - name: KAFKA_LISTENERS
          value: PLAINTEXT://0.0.0.0:9092
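
Apply the Kafka manifests the same way as the ZooKeeper ones (a sketch, assuming Kafka is deployed into the same zookeeper namespace, which is what the exec command in the test section relies on):

kubectl apply -f kafka-svc.yaml -n zookeeper
kubectl apply -f kafka-deployment.yaml -n zookeeper
kubectl get pods -n zookeeper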

Testing
Command line
You can exec into any of the pods and operate Kafka from the command line; the common commands and the working directory are listed below.
kubectl exec -it kafka-deployment-1-xxxxxxxxxxx -n zookeeper -- /bin/bash
cd /opt/kafka
 
# List topics
bin/kafka-topics.sh --list --zookeeper <any zookeeper-svc CLUSTER-IP>:2181
# Manually create a topic
bin/kafka-topics.sh --create --zookeeper <zookeeper-svc1-clusterIP>:2181,<zookeeper-svc2-clusterIP>:2181,<zookeeper-svc3-clusterIP>:2181 --topic test --partitions 3 --replication-factor 1
# Produce (Ctrl+D to finish)
bin/kafka-console-producer.sh --broker-list <kafka-svc1-clusterIP>:9092,<kafka-svc2-clusterIP>:9092,<kafka-svc3-clusterIP>:9092 --topic test
# Consume (Ctrl+C to stop)
bin/kafka-console-consumer.sh --bootstrap-server <任意kafka-svc-clusterIP>:9092 --topic test --from-beginning
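
To double-check the KAFKA_ADVERTISED_LISTENERS setting discussed above, you can also query a broker from outside the cluster through its NodePort (a sketch, assuming kcat/kafkacat is installed on your workstation and <master-ip> is reachable):

# the broker list in the returned metadata should show the advertised
# <master-ip>:3090x addresses, not the in-cluster CLUSTER-IPs
kcat -b <master-ip>:30901 -L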