
Kafka: Installing the Kafka Message Queue with Docker

Before installing, take a look at the diagram below.

(Figure: Kafka's basic architecture and terminology)

Basic components of Kafka

Kafka cluster: the Kafka message queue itself (the component that stores messages)

Zookeeper: the registry (the Kafka cluster relies on Zookeeper to store the cluster's metadata and keep the system available)

Producer: a program or piece of code that puts data into the queue

Consumer: a program or piece of code that takes data from the queue

Composition of a Kafka cluster

    Broker: a broker is a Kafka instance. Each server runs one or more of them; for simplicity, assume one broker per server. Every broker within a cluster has a unique id, e.g. broker-0 and broker-1 in the diagram.
    Topic: the subject of a message; think of it as a message category. Kafka's data is stored in topics, and each broker can host multiple topics.
    Partition: a subdivision of a topic. A topic can have multiple partitions, which spread the load and raise Kafka's throughput. A topic's data is not duplicated across its partitions, and on disk each partition is simply a directory.
    Replication: every partition has multiple replicas that act as standbys. When the primary partition (the Leader) fails, a standby (a Follower) is promoted to become the new Leader. Kafka's default maximum is 10 replicas, the replica count cannot exceed the number of brokers, and a leader and its followers always sit on different machines: a machine holds at most one replica (leader included) of any given partition.
    Message: the body of each message that is sent.

Composition of a consumer group: multiple consumers can be grouped into a consumer group. In Kafka's design, the data in a given partition can be consumed by only one consumer within a consumer group, while consumers in the same group may consume different partitions of the same topic in parallel; this, too, raises Kafka's throughput. The sketch below illustrates it.
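As an illustration, once the containers from the installation steps below are running, you can watch a group split partitions with the console tools shipped with Kafka. The topic name test and the group name demo-group are placeholders; the broker address assumes the setup used later in this article:

# Run this in two terminals: consumers in the same group split the topic's
# partitions between them, so each message is delivered to only one of the two
kafka-console-consumer.sh --bootstrap-server 10.9.44.11:9092 --topic test --group demo-group

# Show which consumer owns which partition, plus the group's consumer lag
kafka-consumer-groups.sh --bootstrap-server 10.9.44.11:9092 --describe --group demo-group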

Installing Zookeeper

# Pull the Zookeeper image
docker pull wurstmeister/zookeeper:latest
# Create and start the Zookeeper container
docker run -d --name zookeeper -p 2181:2181 -v /etc/localtime:/etc/localtime wurstmeister/zookeeper:latest

Configuration details

  • -v /etc/localtime:/etc/localtime keeps the container's clock in sync with the host machine's time
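To confirm Zookeeper came up correctly, a quick check; the ruok health probe is an optional extra that assumes netcat is installed on the host (newer Zookeeper versions may also require whitelisting four-letter commands):

# Check that the container is running and inspect its startup log
docker ps --filter name=zookeeper
docker logs zookeeper | tail -n 20

# Optional four-letter-word health check: Zookeeper answers "imok" when healthy
echo ruok | nc localhost 2181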

Installing Kafka

# Pull the Kafka image
docker pull wurstmeister/kafka:latest
# Create and start the Kafka container
docker run -d --name kafka -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=10.9.44.11:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.9.44.11:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka:latest

Configuration details

  • -e KAFKA_BROKER_ID=0 # every broker in a Kafka cluster has a BROKER_ID that distinguishes it from the others
  • -e KAFKA_ZOOKEEPER_CONNECT=10.9.44.11:2181 # the Zookeeper address that manages Kafka; a chroot path such as 10.9.44.11:2181/kafka can be appended to keep Kafka's znodes under their own directory
  • -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.9.44.11:9092 # registers Kafka's address and port with Zookeeper; this is the address advertised to clients
  • -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 # the address and port Kafka listens on
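With both containers up, it is worth running an end-to-end smoke test. A sketch, assuming the broker address 10.9.44.11 from the run command above and that the Kafka CLI scripts are on the container's PATH (true for the wurstmeister image); the topic name test is a placeholder:

# Open a shell inside the Kafka container
docker exec -it kafka bash

# Create a test topic; one partition and one replica, since there is only one broker
# (on Kafka 2.2+ you can replace --zookeeper 10.9.44.11:2181 with --bootstrap-server 10.9.44.11:9092)
kafka-topics.sh --create --zookeeper 10.9.44.11:2181 --topic test --partitions 1 --replication-factor 1

# Start a console producer and type a few lines; each line becomes one message
kafka-console-producer.sh --broker-list 10.9.44.11:9092 --topic test

# In a second shell, read the messages back from the beginning
kafka-console-consumer.sh --bootstrap-server 10.9.44.11:9092 --topic test --from-beginning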

The complete server.properties configuration file

Path: /etc/kafka/
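The exact location varies by image and version (in the wurstmeister image the broker config usually lives under /opt/kafka/config/); if in doubt, locate the file inside the running container:

# Find server.properties inside the container
docker exec kafka sh -c 'find / -name server.properties 2>/dev/null'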

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

##################################################################################
#  A broker is a single Kafka deployment instance. In a Kafka cluster, every
#  Kafka instance must have a broker.id, and that id must be a unique integer.
##################################################################################
broker.id=10

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = security_protocol://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

##################################################################################
#The number of threads handling network requests
# Default number of threads handling network requests: 3
##################################################################################
num.network.threads=3
##################################################################################
# The number of threads doing disk I/O
# Default number of threads performing disk I/O: 8
##################################################################################
num.io.threads=8

##################################################################################
# The send buffer (SO_SNDBUF) used by the socket server
# Size of the buffer the socket server uses to send data; default 100 KB
##################################################################################
socket.send.buffer.bytes=102400

##################################################################################
# The receive buffer (SO_RCVBUF) used by the socket server
# Size of the buffer the socket server uses to receive data; default 100 KB
##################################################################################
socket.receive.buffer.bytes=102400

##################################################################################
# The maximum size of a request that the socket server will accept (protection against OOM)
# Maximum size of a single request the socket server will accept, protecting against
# OOM (out-of-memory) errors; 104857600 bytes = 100 MB
##################################################################################
socket.request.max.bytes=104857600

############################# Log Basics (Kafka's stored data is referred to as the "log") #############################

##################################################################################
# A comma-separated list of directories under which to store log files,
# i.e. the directories in which Kafka stores the data it receives
##################################################################################
log.dirs=/home/uplooking/data/kafka

##################################################################################
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
# Default number of log partitions per topic: 1. More partitions allow greater
# consumer parallelism, but also mean more files spread across the brokers.
# (A partition is distributed storage: one topic's data is split into several
# pieces, i.e. divided into blocks/partitions.)
##################################################################################
num.partitions=1

##################################################################################
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
# Number of threads per data directory used to recover data at startup and to
# flush data at shutdown. If the Kafka data lives on a RAID array, consider
# raising this value.
##################################################################################
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy (data flush policy) #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# Kafka's flush policy is driven only by message count and time interval; there is
# no size-based option. Either setting, or both, may be configured; both are shown
# below (commented out, i.e. left at their defaults).

# The number of messages to accept before forcing a flush of data to disk
# Message-count threshold at which data is forcibly flushed to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
# Maximum time a message may sit in the log before a flush writes it out to a log data file on disk
#log.flush.interval.ms=1000

############################# Log Retention Policy (data retention policy) #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# In short: a segment is removed as soon as either the time-based or the
# size-based policy below is satisfied.

# The minimum age of a log file to be eligible for deletion
# Time-based policy: how long log data is kept before deletion; default 7 days (168 hours)
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes. 1G
# Size-based policy: 1 GB
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
# Segmenting policy: a new segment file is created whenever the current one reaches this size (1 GB)
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies,
# i.e. how often to check whether data meets the deletion criteria; default 5 minutes
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=uplooking01:2181,uplooking02:2181,uplooking03:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
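With the wurstmeister image you rarely edit this file by hand: broker properties can be overridden at container start through environment variables, formed by upper-casing the property name, replacing dots with underscores, and adding a KAFKA_ prefix (so log.retention.hours becomes KAFKA_LOG_RETENTION_HOURS). A sketch reusing the addresses from above; the retention and partition values are arbitrary examples:

# Same run command as before, with two broker properties overridden via env vars
docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_BROKER_ID=0 \
  -e KAFKA_ZOOKEEPER_CONNECT=10.9.44.11:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.9.44.11:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_LOG_RETENTION_HOURS=168 \
  -e KAFKA_NUM_PARTITIONS=3 \
  wurstmeister/kafka:latest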

This article consolidates material from:
https://www.cnblogs.com/panpanwelcome/p/12580506.html
https://blog.csdn.net/qq_22041375/article/details/106180415
https://www.cnblogs.com/toutou/p/linux_install_kafka.html