
Kafka 2.8 Cluster Setup and Monitoring Configuration


Background

A business requirement called for a Kafka cluster with monitoring. Kafka depends on a ZooKeeper cluster, so that setup is recorded here as well.

IP             OS        Hostname  Role
192.168.0.19   CentOS 7  zk1       ZooKeeper cluster
192.168.0.36   CentOS 7  zk2       ZooKeeper cluster
192.168.0.18   CentOS 7  zk3       ZooKeeper cluster
192.168.0.137  CentOS 7  kafka01   Kafka cluster
192.168.0.210  CentOS 7  kafka02   Kafka cluster
192.168.0.132  CentOS 7  kafka03   Kafka cluster
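For convenience, the hostnames above can be mapped on every node. A minimal sketch (writing to a local example file here; on a real node you would append these lines to /etc/hosts):

```shell
# Sketch: hostname mappings for the six nodes (append to /etc/hosts in practice)
cat <<'EOF' > ./hosts.example
192.168.0.19   zk1
192.168.0.36   zk2
192.168.0.18   zk3
192.168.0.137  kafka01
192.168.0.210  kafka02
192.168.0.132  kafka03
EOF
```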

ZooKeeper Cluster Setup

Setup

Run on all three nodes.

Download and extract the package:

cd /data
wget https://mirrors.cloud.tencent.com/apache/zookeeper/zookeeper-3.7.0/apache-zookeeper-3.7.0-bin.tar.gz
tar zxvf apache-zookeeper-3.7.0-bin.tar.gz
mv apache-zookeeper-3.7.0-bin zookeeper
rm -f apache-zookeeper-3.7.0-bin.tar.gz

Create the cluster configuration file:

cat <<EOF > /data/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data/
clientPort=2181
server.0=192.168.0.19:2888:3888
server.1=192.168.0.36:2888:3888
server.2=192.168.0.18:2888:3888
EOF
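In each server.N line, 2888 is the quorum/replication port and 3888 is the leader-election port. A majority of servers must be up for the ensemble to serve requests, so the ensemble size should be odd. A quick sanity check (writing the server list to a local example file; in practice you would point it at /data/zookeeper/conf/zoo.cfg):

```shell
# Sketch: verify the ensemble has an odd number of server.N entries
cfg=./zoo.cfg.example          # in practice: /data/zookeeper/conf/zoo.cfg
printf '%s\n' \
  'server.0=192.168.0.19:2888:3888' \
  'server.1=192.168.0.36:2888:3888' \
  'server.2=192.168.0.18:2888:3888' > "$cfg"
n=$(grep -c '^server\.' "$cfg")
[ $((n % 2)) -eq 1 ] && echo "ok: $n servers (majority = $(( n/2 + 1 )))"
# prints: ok: 3 servers (majority = 2)
```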

Create the data directory:

mkdir -p /data/zookeeper/data/

Run on each node individually:

zk1

echo 0 > /data/zookeeper/data/myid

zk2

echo 1 > /data/zookeeper/data/myid

zk3

echo 2 > /data/zookeeper/data/myid
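The value written to myid must match the N of that host's server.N line in zoo.cfg, or the ensemble will not form. A small sketch that derives the id from the config (hypothetical helper, run against a local copy of the file):

```shell
# Sketch: look up a host's myid from its server.N entry in zoo.cfg
cfg=./zoo.cfg.example          # in practice: /data/zookeeper/conf/zoo.cfg
printf '%s\n' \
  'server.0=192.168.0.19:2888:3888' \
  'server.1=192.168.0.36:2888:3888' \
  'server.2=192.168.0.18:2888:3888' > "$cfg"
myid_for() {                   # $1 = node IP; prints the matching N
  grep "=$1:" "$cfg" | cut -d= -f1 | cut -d. -f2
}
myid_for 192.168.0.36          # prints: 1
```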

Run on all three nodes:

cd /data/zookeeper/bin/ && ./zkServer.sh start
cd /data/zookeeper/bin/ && ./zkServer.sh status

You should see output like the following:

# ./zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
# ./zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
# ./zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower

Setup complete.

Kafka Cluster Setup

Setup

Run the following on all three nodes.

Download and extract the package:

cd /data
wget https://mirrors.cloud.tencent.com/apache/kafka/2.8.0/kafka_2.13-2.8.0.tgz
tar xvf kafka_2.13-2.8.0.tgz
mv kafka_2.13-2.8.0 kafka
rm -f kafka_2.13-2.8.0.tgz

Configure environment variables:

cat <<'EOF' > /etc/profile.d/kafka.sh
export KAFKA_HOME=/data/kafka
export PATH=$PATH:$KAFKA_HOME/bin
EOF

source /etc/profile.d/kafka.sh

Restart script:

cat <<'EOF' > /data/kafka/restart.sh
#!/bin/bash

kafka-server-stop.sh
sleep 3  # give the old broker a moment to exit
nohup kafka-server-start.sh /data/kafka/config/server.properties >> /data/kafka/nohup.out 2>&1 &
EOF

chmod +x /data/kafka/restart.sh

For monitoring, edit bin/kafka-server-start.sh and add a JMX_PORT so that more metrics can be collected:

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    export JMX_PORT="9099"
fi

The key options are:

broker.id: each Kafka node is one broker, and the id must be unique within the cluster (e.g. broker.id=0)
listeners: sufficient on its own when Kafka is only reached over the internal network; advertised.listeners is needed only when internal and external addresses differ (e.g. listeners=PLAINTEXT://192.168.0.137:9092)
zookeeper.connect: the ZooKeeper cluster connection string (e.g. zookeeper.connect=192.168.0.19:2181)
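As a hedged illustration of the internal/external split mentioned above (203.0.113.10 is a hypothetical public address, and INTERNAL/EXTERNAL are arbitrary listener labels; written to a local example file here rather than into server.properties):

```shell
# Sketch: server.properties fragment with separate internal and external listeners
cat <<'EOF' > ./listeners.example
listeners=INTERNAL://192.168.0.137:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://192.168.0.137:9092,EXTERNAL://203.0.113.10:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
EOF
```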

kafka01 configuration file /data/kafka/config/server.properties:

broker.id=0
listeners=PLAINTEXT://192.168.0.137:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.19:2181,192.168.0.36:2181,192.168.0.18:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0

kafka02 configuration file /data/kafka/config/server.properties:

broker.id=1
listeners=PLAINTEXT://192.168.0.210:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.19:2181,192.168.0.36:2181,192.168.0.18:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0

kafka03 configuration file /data/kafka/config/server.properties:

broker.id=2
listeners=PLAINTEXT://192.168.0.132:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.19:2181,192.168.0.36:2181,192.168.0.18:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
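The three files above differ only in broker.id and listeners, so they can be generated from a single template rather than edited by hand. A sketch (the template is trimmed to the varying lines plus zookeeper.connect, and output goes to a local ./out directory instead of the brokers):

```shell
# Sketch: generate per-broker server.properties files from a shared template
cat <<'EOF' > ./server.properties.template
broker.id=0
listeners=PLAINTEXT://0.0.0.0:9092
zookeeper.connect=192.168.0.19:2181,192.168.0.36:2181,192.168.0.18:2181
EOF

mkdir -p ./out
i=0
for ip in 192.168.0.137 192.168.0.210 192.168.0.132; do
  sed -e "s/^broker.id=.*/broker.id=$i/" \
      -e "s#^listeners=.*#listeners=PLAINTEXT://$ip:9092#" \
      ./server.properties.template > "./out/server.properties.$i"
  i=$((i+1))
done
```

Each generated file would then be copied to /data/kafka/config/server.properties on its broker.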

Start all three nodes:

/data/kafka/restart.sh

Monitoring

Project homepage: https://github.com/smartloli/kafka-eagle

Official deployment docs: https://www.kafka-eagle.org/articles/docs/installation/linux-macos.html

Download:

https://github.com/smartloli/kafka-eagle/archive/refs/tags/v2.0.6.tar.gz

Installation and configuration

mkdir -p /opt/kafka-eagle
tar zxvf kafka-eagle-bin-2.0.6.tar.gz
cd kafka-eagle-bin-2.0.6/
tar zxvf kafka-eagle-web-2.0.6-bin.tar.gz
mv kafka-eagle-web-2.0.6/* /opt/kafka-eagle/

Configuration file conf/system-config.properties:

# Point at the ZooKeeper ensemble; the Kafka brokers are discovered from it automatically
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=192.168.0.19:2181,192.168.0.36:2181,192.168.0.18:2181

# The default sqlite store is prone to deadlocks; switch to MySQL
# Default use sqlite to store data
#kafka.eagle.driver=org.sqlite.JDBC
# It is important to note that the '/hadoop/kafka-eagle/db' path must be exist.
#kafka.eagle.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
#kafka.eagle.username=root
#kafka.eagle.password=smartloli

# Creating a 'ke' database in MySQL is enough; no SQL import is needed
kafka.eagle.driver=com.mysql.jdbc.Driver
kafka.eagle.url=jdbc:mysql://127.0.0.1:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
kafka.eagle.username=root
kafka.eagle.password=smartloli

Run

cd bin
chmod +x ke.sh 
./ke.sh start

Open http://localhost:8048 in a browser.

Default credentials: admin / 123456