Kafka configuration file notes
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The unique identifier of each broker in the cluster; it must be a non-negative integer. If the server's IP address changes but broker.id stays the same, consumers' consumption state is not affected.
broker.id=0
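In a multi-broker cluster each broker's server.properties carries its own distinct id; a minimal sketch, assuming three hypothetical hosts:
# on kafka01:  broker.id=0
# on kafka02:  broker.id=1
# on kafka03:  broker.id=2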
############################# Socket Server Settings #############################
listeners=PLAINTEXT://:9092
# The port the socket server listens on
#port=9092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
#host.name=master
# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>
# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>
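If clients connect from a different network than the one the broker binds to, the advertised values must point to an address those clients can reach; a minimal sketch, assuming a hypothetical hostname kafka01.example.com:
#listeners=PLAINTEXT://0.0.0.0:9092
#advertised.host.name=kafka01.example.com
#advertised.port=9092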
# The number of threads the broker uses to handle network requests; normally this does not need to be changed
num.network.threads=3
# The number of threads the broker uses for disk I/O; it should be at least the number of disks holding the log directories
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request the socket server will accept, which protects the broker against OOM. message.max.bytes must not exceed socket.request.max.bytes; the per-message limit can also be overridden per topic at creation time (max.message.bytes).
socket.request.max.bytes=104857600
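A hedged example of keeping the two limits consistent, assuming a hypothetical 5 MB per-message cap:
#message.max.bytes=5242880          # must not exceed socket.request.max.bytes
#socket.request.max.bytes=104857600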
############################# Log Basics #############################
# The directories in which Kafka stores its log data; multiple directories can be given as a comma-separated list, e.g. /data/kafka-logs-1,/data/kafka-logs-2
log.dirs=/tmp/kafka-logs
# The default number of partitions per topic; it is used when a topic is created without an explicit partition count and is overridden by the value given at topic creation
num.partitions=2
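To override this default for a single topic, specify the partition count at creation time; a sketch with a hypothetical topic name, using the ZooKeeper-based CLI that matches this configuration:
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic --partitions 4 --replication-factor 1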
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
num.recovery.threads.per.data.dir=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
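Per-topic overrides of the flush policy use the topic-level keys flush.messages and flush.ms; a hedged sketch with a hypothetical topic name (on the ZooKeeper-based versions this file targets, kafka-topics.sh can alter topic-level configs):
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic --config flush.messages=10000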
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The maximum time data is retained: 168 hours (24*7), i.e. 7 days
log.retention.hours=168
# The maximum amount of data retained for each partition of a topic. Note that this is a per-partition limit, so the total per topic is this value multiplied by the number of partitions. Also note: if both log.retention.hours and log.retention.bytes are set, a segment is deleted as soon as either limit is exceeded.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# How frequently log segments are checked to see whether they meet the retention criteria for deletion
log.retention.check.interval.ms=300000
# When set to false, old log segments are simply deleted once the retention time or size limit is reached; when set to true, the log cleaner is enabled so that log compaction can be used (for topics configured with cleanup.policy=compact).
log.cleaner.enable=false
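Retention and cleanup can also be overridden per topic (compaction only takes effect if log.cleaner.enable=true); a sketch with hypothetical topic names:
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic --config retention.ms=86400000
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-compacted-topic --config cleanup.policy=compact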
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma-separated list of host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
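With a chroot suffix all Kafka znodes live under the given path instead of the ZooKeeper root; a sketch with hypothetical hosts:
#zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka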
Here is our production server configuration:
# Replication configurations
num.replica.fetchers=4
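# Note: replica.fetch.max.bytes should be at least message.max.bytes, so followers can replicate the largest allowed message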
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.lag.time.max.ms=10000
controller.socket.timeout.ms=30000
controller.message.queue.size=10
# Log configuration
num.partitions=8
message.max.bytes=1000000
auto.create.topics.enable=true
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.hours=168
log.flush.interval.ms=10000
log.flush.interval.messages=20000
log.flush.scheduler.interval.ms=2000
log.roll.hours=168
log.retention.check.interval.ms=300000
log.segment.bytes=1073741824
# ZK configuration
zookeeper.connection.timeout.ms=6000
zookeeper.sync.time.ms=2000
# Socket server configuration
num.io.threads=8
num.network.threads=8
socket.request.max.bytes=104857600
socket.receive.buffer.bytes=1048576
socket.send.buffer.bytes=1048576
queued.max.requests=16
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100