Deploying a Kafka Cluster on Kubernetes
1. Kafka Overview
Kafka is a distributed, replicated, multi-subscriber, partitioned log system coordinated through ZooKeeper. Its main characteristics:
1. Message persistence with O(1) time complexity, so access stays constant-time even over terabytes of data.
2. High throughput: even on cheap commodity hardware, a single machine can handle 100K messages per second.
3. Message partitioning across Kafka servers and distributed consumption, while preserving the order of messages within each partition.
4. Support for both offline and real-time data processing.
2. Use Cases
1. Log collection
2. Data push
3. Use as a large buffer
4. Service middleware
3. Architecture
As the architecture diagram shows, a Kafka cluster contains a number of Producers (server logs, business data, page views from the web front end, and so on), a number of Brokers (Kafka scales horizontally; more brokers generally means higher cluster throughput), a number of consumer groups, and a ZooKeeper cluster (Kafka uses ZooKeeper to manage cluster configuration, elect leaders, and rebalance when consumer group membership changes).
3.1 Terminology
- Broker: a message-processing node (server). One node is one broker; a Kafka cluster consists of one or more brokers.
- Topic: Kafka categorizes messages by topic; every message sent to the cluster must specify a topic.
- Partition: a physical concept. Each topic contains one or more partitions, and each partition maps to a directory on disk holding that partition's data and index files. Messages within a partition are ordered.
- Producer: publishes messages to brokers.
- Consumer: reads messages from brokers.
- ConsumerGroup: each consumer belongs to a specific consumer group. A group name can be assigned per consumer; if none is given, the consumer joins the default group. A message can be delivered to multiple consumer groups, but within a single group only one consumer will consume it (the sketch below demonstrates this).
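The group semantics are easy to see with Kafka's console consumer. A minimal sketch, assuming a broker reachable at localhost:9092 and an existing topic named test (both hypothetical here; section 4.3.5 creates the real equivalents):
# Shells 1 and 2: consumers in the SAME group split the partitions between them,
# so each message is delivered to only one of the two.
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --group group-a
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --group group-a
# Shell 3: a consumer in a DIFFERENT group receives its own full copy of every message.
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --group group-b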
4. Deploying Kafka on a Kubernetes Cluster
4.1 Prerequisites
- A Kubernetes cluster with at least three nodes (version 1.13.10 is used here)
- A StorageClass for dynamic volume provisioning (cephfs is used here); both prerequisites can be verified with the commands below
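A quick sanity check before applying anything:
kubectl get nodes                 # expect at least three nodes in Ready state
kubectl get storageclass cephfs   # the StorageClass referenced by the volumeClaimTemplates below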
4.2 Deployment YAML Files
1. The ZooKeeper manifest (zookeeper.yaml)
[root@k8s001 kafka]# cat zookeeper.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: kafka
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: kafka
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      nodeSelector:
        travis.io/schedule-only: "kafka"
      tolerations:
      - key: "travis.io/schedule-only"
        operator: "Equal"
        value: "kafka"
        effect: "NoSchedule"
      - key: "travis.io/schedule-only"
        operator: "Equal"
        value: "kafka"
        effect: "NoExecute"
        tolerationSeconds: 3600
      - key: "travis.io/schedule-only"
        operator: "Equal"
        value: "kafka"
        effect: "PreferNoSchedule"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: IfNotPresent
        image: fastop/zookeeper:3.4.10
        resources:
          requests:
            memory: "200Mi"
            cpu: "0.1"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      # runAsUser and fsGroup must be set to 0 (run as root) here,
      # otherwise the pod fails with permission errors on the volume
      securityContext:
        runAsUser: 0
        fsGroup: 0
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: cephfs
      resources:
        requests:
          storage: 20Gi
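The headless service zk-hs gives each replica a stable DNS name of the form <pod>.zk-hs.kafka.svc.cluster.local; kafka.yaml below uses exactly these names in zookeeper.connect, while zk-cs is the plain client-facing service. Once the pods are up, resolution can be spot-checked from a throwaway pod (busybox:1.28 is just an assumed test image; any image with nslookup works):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -n kafka -- \
  nslookup zk-0.zk-hs.kafka.svc.cluster.local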
2. The Kafka manifest (kafka.yaml)
[root@k8s001 kafka]# cat kafka.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  namespace: kafka
  labels:
    app: kafka
spec:
  ports:
  - port: 9092
    name: server
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      nodeSelector:
        travis.io/schedule-only: "kafka"
      tolerations:
      - key: "travis.io/schedule-only"
        operator: "Equal"
        value: "kafka"
        effect: "NoSchedule"
      - key: "travis.io/schedule-only"
        operator: "Equal"
        value: "kafka"
        effect: "NoExecute"
        tolerationSeconds: 3600
      - key: "travis.io/schedule-only"
        operator: "Equal"
        value: "kafka"
        effect: "PreferNoSchedule"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - kafka
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - zk
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
      - name: k8s-kafka
        imagePullPolicy: IfNotPresent
        image: fastop/kafka:2.2.0
        resources:
          requests:
            memory: "600Mi"
            cpu: 500m
        ports:
        - containerPort: 9092
          name: server
        command:
        - sh
        - -c
        - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9092 \
          --override zookeeper.connect=zk-0.zk-hs.kafka.svc.cluster.local:2181,zk-1.zk-hs.kafka.svc.cluster.local:2181,zk-2.zk-hs.kafka.svc.cluster.local:2181 \
          --override log.dir=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=false \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=2.2.0 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=4 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000 "
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx512M -Xms512M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka
        readinessProbe:
          tcpSocket:
            port: 9092
          timeoutSeconds: 1
          initialDelaySeconds: 5
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: cephfs
      resources:
        requests:
          storage: 20Gi
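One detail worth noting in the command above: each broker derives its broker.id from the ordinal in its pod hostname. The shell parameter expansion ${HOSTNAME##*-} strips everything up to the last '-', so the StatefulSet pods map cleanly to broker IDs 0, 1, and 2:
# Pod hostnames are kafka-0, kafka-1, kafka-2; for example:
HOSTNAME=kafka-2
echo "${HOSTNAME##*-}"   # prints 2, which becomes this broker's broker.id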
4.3 Deployment
ZooKeeper and Kafka are both stateful services, so controllers such as Deployment or ReplicationController are not suitable; we deploy both with StatefulSets, which provide stable network identities and per-pod persistent storage.
4.3.1 Label the Nodes
Label each node that should run Kafka:
kubectl label node [node-name] travis.io/schedule-only=kafka
To reserve those nodes for Kafka, taint them as well; the manifests above already carry matching tolerations, so only the Kafka and ZooKeeper pods will be scheduled there:
kubectl taint node [node-name] travis.io/schedule-only=kafka:NoSchedule
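To confirm the label and taint took effect (node names are placeholders, as above):
kubectl get nodes -l travis.io/schedule-only=kafka
kubectl describe node [node-name] | grep -A1 Taints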
4.3.2 Create the Namespace
[root@k8s001 kafka]# kubectl create ns kafka
4.3.3 Deploy ZooKeeper
# Create the ZooKeeper resources
[root@k8s001 kafka]# kubectl apply -f zookeeper.yaml
# Check that the ZooKeeper pods are running
[root@k8s001 kafka]# kubectl get pod -n kafka
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 0 7m8s
zk-1 1/1 Running 0 7m8s
zk-2 1/1 Running 0 7m8s
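Each replica should also have a PVC bound from the volumeClaimTemplates; the claim names follow the <template>-<pod> pattern:
kubectl get pvc -n kafka
# expect datadir-zk-0, datadir-zk-1 and datadir-zk-2 with STATUS Bound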
4.3.4 Deploy Kafka
[root@k8s001 kafka]# kubectl apply -f kafka.yaml
[root@k8s001 kafka]# kubectl get pod -n kafka
NAME READY STATUS RESTARTS AGE
kafka-0 1/1 Running 0 11m
kafka-1 1/1 Running 0 11m
kafka-2 1/1 Running 0 10m
zk-0 1/1 Running 0 6m44s
zk-1 1/1 Running 0 6m44s
zk-2 1/1 Running 0 6m44s
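At this point the headless kafka-svc should list all three brokers as endpoints, and in-cluster clients can bootstrap against kafka-0.kafka-svc.kafka.svc.cluster.local:9092 (and the other ordinals):
kubectl get endpoints kafka-svc -n kafka
# expect three <pod-ip>:9092 entries, one per broker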
4.3.5 Testing
Test ZooKeeper:
# Check each replica's role (expect one leader and two followers across zk-0/zk-1/zk-2)
kubectl exec -it zk-0 -n kafka -- zkServer.sh status
# Write a test znode
kubectl exec -it zk-0 -n kafka -- zkCli.sh create /hello world
# Tear the cluster down and recreate it; the data lives on the PersistentVolumes
kubectl delete -f zookeeper.yaml
kubectl apply -f zookeeper.yaml
# The znode written before the restart should still be readable
kubectl exec -it zk-0 -n kafka -- zkCli.sh get /hello
Test Kafka:
# Enter the kafka-0 container
kubectl exec -it kafka-0 -n kafka -- bash
# Create a test topic with 3 partitions and 2 replicas
kafka-topics.sh --create \
--topic test \
--zookeeper zk-0.zk-hs.kafka.svc.cluster.local:2181,zk-1.zk-hs.kafka.svc.cluster.local:2181,zk-2.zk-hs.kafka.svc.cluster.local:2181 \
--partitions 3 \
--replication-factor 2
# List topics to confirm it was created
kafka-topics.sh --list --zookeeper zk-0.zk-hs.kafka.svc.cluster.local:2181,zk-1.zk-hs.kafka.svc.cluster.local:2181,zk-2.zk-hs.kafka.svc.cluster.local:2181
# Start a consumer on the new topic
kafka-console-consumer.sh --topic test --bootstrap-server localhost:9092
# In a second terminal, enter the kafka-1 container
kubectl exec -it kafka-1 -n kafka -- bash
# Start a producer
kafka-console-producer.sh --topic test --broker-list localhost:9092
Type a few lines in the producer and watch them appear in the kafka-console-consumer.sh output on kafka-0.
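The exchange should look roughly like this (the typed lines are illustrative):
# kafka-1 window (producer; '>' is the kafka-console-producer.sh prompt):
> hello kafka
> second message
# kafka-0 window (consumer output):
hello kafka
second message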