How ZooKeeper and Kafka Work Together
First of all, ZooKeeper is needed only for the high-level consumer; SimpleConsumer does not require ZooKeeper to work.
The main reasons ZooKeeper is needed for a high-level consumer are to track consumed offsets and to handle load balancing.
Now in more detail.
Regarding offset tracking, imagine the following scenario: you start a consumer, consume 100 messages, and shut the consumer down. The next time you start the consumer, you will probably want to resume from your last consumed offset (which is 100), and that means you have to store the maximum consumed offset somewhere. This is where ZooKeeper comes in: it stores offsets for every group/topic/partition. So the next time you start your consumer, it can ask: "hey ZooKeeper, what's the offset I should start consuming from?"
Kafka is actually moving towards being able to store offsets not only in ZooKeeper but in other storage backends as well (for now only the "zookeeper" and "kafka" offset storages are available, and I'm not sure the "kafka" storage is fully implemented).
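A minimal sketch of the idea: the high-level consumer keeps a committed offset per group/topic/partition and reads it back on restart. The `OffsetStore` class below is a hypothetical in-memory stand-in for ZooKeeper's offset znodes (the real consumer stores them under paths like `/consumers/<group>/offsets/<topic>/<partition>`); its names are illustrative, not part of any Kafka client API.

```python
class OffsetStore:
    """Tracks the last committed offset per (group, topic, partition)."""

    def __init__(self):
        # (group, topic, partition) -> last committed offset
        self._offsets = {}

    def commit(self, group, topic, partition, offset):
        self._offsets[(group, topic, partition)] = offset

    def fetch(self, group, topic, partition):
        # A consumer that has never committed starts from the beginning.
        return self._offsets.get((group, topic, partition), 0)


# First run: consume 100 messages, commit the offset, shut down.
store = OffsetStore()
store.commit("my-group", "events", 0, 100)

# Restart: ask "what offset should I start consuming from?"
resume_from = store.fetch("my-group", "events", 0)
```

The point is only the lookup on restart: without some durable store for that number, every restart would begin from the start (or the end) of the partition.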
Regarding load balancing: the volume of messages produced can be too large for one machine to handle, and at some point you will probably want to add computing power. Say you have a topic with 100 partitions and 10 machines to handle that volume. Several questions arise here:
- how should these 10 machines divide the partitions among themselves?
- what happens if one of the machines dies?
- what happens if you want to add another machine?
And again, this is where ZooKeeper comes in: it tracks all consumers in the group, and each high-level consumer subscribes to changes in that group. When a consumer appears or disappears, ZooKeeper notifies all consumers and triggers a rebalance so that they split the partitions near-equally (i.e. to balance the load). This guarantees that if one consumer dies, the others will continue processing the partitions that were owned by that consumer.
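The "split partitions near-equally" step can be sketched as a range-style assignment: sort the consumers, divide the partition count evenly, and give the remainder to the first few consumers. The function below is illustrative; the real rebalance logic lives inside the Kafka client and is coordinated via ZooKeeper watches, not implemented in application code.

```python
def assign_partitions(partitions, consumers):
    """Split partitions near-equally across consumers (range assignment)."""
    consumers = sorted(consumers)
    per_consumer, extra = divmod(len(partitions), len(consumers))
    assignment, start = {}, 0
    for i, consumer in enumerate(consumers):
        # The first `extra` consumers take one extra partition each.
        count = per_consumer + (1 if i < extra else 0)
        assignment[consumer] = partitions[start:start + count]
        start += count
    return assignment


partitions = list(range(100))
consumers = [f"consumer-{i}" for i in range(10)]

# 10 consumers, 100 partitions: everyone gets exactly 10.
before = assign_partitions(partitions, consumers)

# One consumer dies; ZooKeeper notifies the rest and they recompute:
after = assign_partitions(partitions, consumers[:-1])
```

Note that after the failure every partition is still owned by exactly one of the nine survivors, which is precisely the guarantee described above.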
簡單 prepare 3.2 ger 郵件 核心 pri servers 技術 1.概述 目前,隨著大數據的浪潮,Kafka 被越來越多的企業所認可,如今的Kafka已發展到0.10.x,其優秀的特性也帶給我們解決實際業務的方案。對於數據分流來說,既可以分流到離線存儲