Deploying Zookeeper and Kafka on OpenShift
Deploying Zookeeper
GitHub repository:
https://github.com/ericnie2015/zookeeper-k8s-openshift
1. In the openshift directory, first build the images:
oc create -f buildconfig.yaml
oc new-app zk-builder -p IMAGE_STREAM_VERSION="3.4.13"
buildconfig.yaml mainly defines a Docker-type build that pulls from the GitHub repository and pushes the result into an imagestream:
- kind: BuildConfig
  apiVersion: v1
  metadata:
    name: zk-builder
  spec:
    runPolicy: Serial
    triggers:
    - type: GitHub
      github:
        secret: ${GITHUB_HOOK_SECRET}
    - type: ConfigChange
    source:
      git:
        uri: ${GITHUB_REPOSITORY}
        ref: ${GITHUB_REF}
    strategy:
      type: Docker
    output:
      to:
        kind: ImageStreamTag
        name: "${IMAGE_STREAM_NAME}:${IMAGE_STREAM_VERSION}"
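If the build needs to be kicked off or re-run by hand (for example after a Dockerfile change), something along these lines should work; zk-builder is the BuildConfig name from the template above:

# Trigger a new build from the BuildConfig and stream its logs
oc start-build zk-builder --follow

# List previous builds and their status
oc get builds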
The Dockerfile is largely a from-scratch build. To work around the container being unable to access the /opt/zookeeper/data directory at startup, the user is set to root (an alternative is sketched after the Dockerfile):
FROM openjdk:8-jre-alpine
MAINTAINER Enrique Garcia <[email protected]>

ARG ZOO_HOME=/opt/zookeeper
ARG ZOO_USER=zookeeper
ARG ZOO_GROUP=zookeeper
ARG ZOO_VERSION="3.4.13"

ENV ZOO_HOME=$ZOO_HOME \
    ZOO_VERSION=$ZOO_VERSION \
    ZOO_CONF_DIR=$ZOO_HOME/conf \
    ZOO_REPLICAS=1

# Required packages
RUN set -ex; \
    apk add --update --no-cache \
      bash tar wget curl gnupg openssl ca-certificates

# Download zookeeper distribution under ZOO_HOME /zookeeper-3.4.13/
ADD zk_download.sh /tmp/
RUN set -ex; \
    mkdir -p /opt/zookeeper/bin; \
    mkdir -p /opt/zookeeper/conf; \
    chmod a+x /tmp/zk_download.sh;

RUN /tmp/zk_download.sh

RUN set -ex; \
    rm -rf /tmp/zk_download.sh; \
    apk del wget gnupg

# Add custom scripts and configure user
ADD zk_env.sh zk_setup.sh zk_status.sh /opt/zookeeper/bin/
RUN set -ex; \
    chmod a+x $ZOO_HOME/bin/zk_*.sh; \
    addgroup $ZOO_GROUP; \
    addgroup sudo; \
    adduser -h $ZOO_HOME -g "Zookeeper user" -s /sbin/nologin -D -G $ZOO_GROUP -G sudo $ZOO_USER; \
    chown -R $ZOO_USER:$ZOO_GROUP $ZOO_HOME; \
    ln -s $ZOO_HOME/bin/zk_*.sh /usr/bin

USER root
#USER $ZOO_USER

WORKDIR $ZOO_HOME/bin/

# EXPOSE ${ZK_clientPort:-2181} ${ZOO_SERVER_PORT:-2888} ${ZOO_ELECTION_PORT:-3888}

ENTRYPOINT ["./zk_env.sh"]

#RUN echo "aaa" > /usr/alog
#CMD ["tail","-f","/usr/alog"]
CMD zk_setup.sh && ./zkServer.sh start-foreground
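Running as root in the image is the quick fix used here. Under OpenShift's default restricted SCC the container can still be started as a random UID, so an alternative worth noting (untested in this walkthrough, and assuming the pods run under the default service account) is to let that service account run with any UID so the USER directive is honoured:

# Assumption: pods use the "default" service account in this project.
# Granting the anyuid SCC lets the image's USER (root or zookeeper)
# take effect instead of a randomly assigned UID.
oc adm policy add-scc-to-user anyuid -z default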
If the VM's network cannot reach the internet, you can instead log in to the integrated registry first (a login sketch follows) and build the image locally:
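A login against the integrated registry typically looks like this, assuming the registry address 172.30.1.1:5000 used throughout this environment:

# Authenticate docker against the integrated OpenShift registry
# using the current session's user and token
docker login -u $(oc whoami) -p $(oc whoami -t) 172.30.1.1:5000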
ericdeMacBook-Pro:zookeeper-k8s-openshift ericnie$ docker build -t 172.30.1.1:5000/myproject/zookeeper:3.4.13 .
Sending build context to Docker daemon  54.78kB
Step 1/19 : FROM openjdk:8-jre-alpine
Trying to pull repository registry.access.redhat.com/openjdk ...
Trying to pull repository docker.io/library/openjdk ...
sha256:e3168174d367db9928bb70e33b4750457092e61815d577e368f53efb29fea48b: Pulling from docker.io/library/openjdk
4fe2ade4980c: Pull complete
6fc58a8d4ae4: Pull complete
d3e6d7e9702a: Pull complete
Digest: sha256:e3168174d367db9928bb70e33b4750457092e61815d577e368f53efb29fea48b
Status: Downloaded newer image for docker.io/openjdk:8-jre-alpine
 ---> 0fe3f0d1ee48
When the build finishes, the image appears in the output of docker images.
Then push it to the registry:
ericdeMacBook-Pro:zookeeper-k8s-openshift ericnie$ docker push 172.30.1.1:5000/myproject/zookeeper:3.4.13
The push refers to a repository [172.30.1.1:5000/myproject/zookeeper]
5fe222836c76: Pushed
55e1a1171f7a: Pushed
347a06ac9233: Pushed
03a33ce83585: Pushed
94058c4e233d: Pushed
984d85b76d76: Pushed
cd4b8e8a8238: Pushed
12c374f8270a: Pushed
0c3170905795: Pushed
df64d3292fd6: Pushed
3.4.13: digest: sha256:87bf78acf297bc2144d77ce4465294fec519fd50a4c197a1663cc4304c8040c9 size: 2413
Once the push completes, the imagestream is visible on the console.
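The same check can be made from the CLI:

# List imagestreams in the current project; the zookeeper stream
# should show the 3.4.13 tag that was just pushed
oc get is
oc describe is zookeeper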
2. Deploy from the template
Create the deployment template:
oc create -f zk-persistent.yaml
ericdeMacBook-Pro:openshift ericnie$ cat zk-persistent.yaml
kind: Template
apiVersion: v1
metadata:
  name: zk-persistent
  annotations:
    openshift.io/display-name: Zookeeper (Persistent)
    description: Create a replicated Zookeeper server with persistent storage
    iconClass: icon-database
    tags: database,zookeeper
labels:
  template: zk-persistent
  component: zk
parameters:
- name: NAME
  value: zk-persistent
  required: true
- name: SOURCE_IMAGE
  description: Container image
  value: 172.30.1.1:5000/myproject/zookeeper
  required: true
- name: ZOO_VERSION
  description: Version
  value: "3.4.13"
  required: true
- name: ZOO_REPLICAS
  description: Number of nodes
  value: "3"
  required: true
- name: VOLUME_DATA_CAPACITY
  description: Persistent volume capacity for zookeeper dataDir directory (e.g. 512Mi, 2Gi)
  value: 1Gi
  required: true
- name: VOLUME_DATALOG_CAPACITY
  description: Persistent volume capacity for zookeeper dataLogDir directory (e.g. 512Mi, 2Gi)
  value: 1Gi
  required: true
- name: ZOO_TICK_TIME
  description: The number of milliseconds of each tick
  value: "2000"
  required: true
- name: ZOO_INIT_LIMIT
  description: The number of ticks that the initial synchronization phase can take
  value: "5"
  required: true
- name: ZOO_SYNC_LIMIT
  description: The number of ticks that can pass between sending a request and getting an acknowledgement
  value: "2"
  required: true
- name: ZOO_CLIENT_PORT
  description: The port at which the clients will connect
  value: "2181"
  required: true
- name: ZOO_SERVER_PORT
  description: Server port
  value: "2888"
  required: true
- name: ZOO_ELECTION_PORT
  description: Election port
  value: "3888"
  required: true
- name: ZOO_MAX_CLIENT_CNXNS
  description: The maximum number of client connections
  value: "60"
  required: true
- name: ZOO_SNAP_RETAIN_COUNT
  description: The number of snapshots to retain in dataDir
  value: "3"
  required: true
- name: ZOO_PURGE_INTERVAL
  description: Purge task interval in hours. Set to 0 to disable auto purge feature
  value: "1"
  required: true
- name: ZOO_HEAP_SIZE
  description: JVM heap size
  value: "-Xmx960M -Xms960M"
  required: true
- name: RESOURCE_MEMORY_REQ
  description: The memory resource request.
  value: "1Gi"
  required: true
- name: RESOURCE_MEMORY_LIMIT
  description: The limits for memory resource.
  value: "1Gi"
  required: true
- name: RESOURCE_CPU_REQ
  description: The CPU resource request.
  value: "1"
  required: true
- name: RESOURCE_CPU_LIMIT
  description: The limits for CPU resource.
  value: "2"
  required: true
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${NAME}
    labels:
      zk-name: ${NAME}
    annotations:
      service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  spec:
    ports:
    - port: ${ZOO_CLIENT_PORT}
      name: client
    - port: ${ZOO_SERVER_PORT}
      name: server
    - port: ${ZOO_ELECTION_PORT}
      name: election
    clusterIP: None
    selector:
      zk-name: ${NAME}
- apiVersion: apps/v1beta1
  kind: StatefulSet
  metadata:
    name: ${NAME}
    labels:
      zk-name: ${NAME}
  spec:
    podManagementPolicy: "Parallel"
    serviceName: ${NAME}
    replicas: ${ZOO_REPLICAS}
    template:
      metadata:
        labels:
          zk-name: ${NAME}
          template: zk-persistent
          component: zk
        annotations:
          scheduler.alpha.kubernetes.io/affinity: >
            {
              "podAntiAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": [{
                  "labelSelector": {
                    "matchExpressions": [{
                      "key": "zk-name",
                      "operator": "In",
                      "values": ["${NAME}"]
                    }]
                  },
                  "topologyKey": "kubernetes.io/hostname"
                }]
              }
            }
      spec:
        containers:
        - name: ${NAME}
          imagePullPolicy: IfNotPresent
          image: ${SOURCE_IMAGE}:${ZOO_VERSION}
          resources:
            requests:
              memory: ${RESOURCE_MEMORY_REQ}
              cpu: ${RESOURCE_CPU_REQ}
            limits:
              memory: ${RESOURCE_MEMORY_LIMIT}
              cpu: ${RESOURCE_CPU_LIMIT}
          ports:
          - containerPort: ${ZOO_CLIENT_PORT}
            name: client
          - containerPort: ${ZOO_SERVER_PORT}
            name: server
          - containerPort: ${ZOO_ELECTION_PORT}
            name: election
          env:
          - name: SETUP_DEBUG
            value: "true"
          - name: ZOO_REPLICAS
            value: ${ZOO_REPLICAS}
          - name: ZK_HEAP_SIZE
            value: ${ZOO_HEAP_SIZE}
          - name: ZK_tickTime
            value: ${ZOO_TICK_TIME}
          - name: ZK_initLimit
            value: ${ZOO_INIT_LIMIT}
          - name: ZK_syncLimit
            value: ${ZOO_SYNC_LIMIT}
          - name: ZK_maxClientCnxns
            value: ${ZOO_MAX_CLIENT_CNXNS}
          - name: ZK_autopurge_snapRetainCount
            value: ${ZOO_SNAP_RETAIN_COUNT}
          - name: ZK_autopurge_purgeInterval
            value: ${ZOO_PURGE_INTERVAL}
          - name: ZK_clientPort
            value: ${ZOO_CLIENT_PORT}
          - name: ZOO_SERVER_PORT
            value: ${ZOO_SERVER_PORT}
          - name: ZOO_ELECTION_PORT
            value: ${ZOO_ELECTION_PORT}
          - name: JAVA_ZK_JVMFLAG
            value: "\"${ZOO_HEAP_SIZE}\""
          readinessProbe:
            exec:
              command:
              - zk_status.sh
            initialDelaySeconds: 20
            timeoutSeconds: 10
          livenessProbe:
            exec:
              command:
              - zk_status.sh
            initialDelaySeconds: 20
            timeoutSeconds: 10
          volumeMounts:
          - name: datadir
            mountPath: /opt/zookeeper/data
          - name: datalogdir
            mountPath: /opt/zookeeper/data-log
    volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: ${VOLUME_DATA_CAPACITY}
    - metadata:
        name: datalogdir
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: ${VOLUME_DATALOG_CAPACITY}
oc new-app zk-persistent -p NAME=myzk
--> Deploying template "test/zk-persistent" to project test

     Zookeeper (Persistent)
     ---------
     Create a replicated Zookeeper server with persistent storage

     * With parameters:
        * NAME=myzk
        * SOURCE_IMAGE=bbvalabs/zookeeper
        * ZOO_VERSION=3.4.13
        * ZOO_REPLICAS=3
        * VOLUME_DATA_CAPACITY=1Gi
        * VOLUME_DATALOG_CAPACITY=1Gi
        * ZOO_TICK_TIME=2000
        * ZOO_INIT_LIMIT=5
        * ZOO_SYNC_LIMIT=2
        * ZOO_CLIENT_PORT=2181
        * ZOO_SERVER_PORT=2888
        * ZOO_ELECTION_PORT=3888
        * ZOO_MAX_CLIENT_CNXNS=60
        * ZOO_SNAP_RETAIN_COUNT=3
        * ZOO_PURGE_INTERVAL=1
        * ZOO_HEAP_SIZE=-Xmx960M -Xms960M
        * RESOURCE_MEMORY_REQ=1Gi
        * RESOURCE_MEMORY_LIMIT=1Gi
        * RESOURCE_CPU_REQ=1
        * RESOURCE_CPU_LIMIT=2

--> Creating resources ...
    service "myzk" created
    statefulset "myzk" created
--> Success
    Run 'oc status' to view your app.

$ oc get all,pvc,statefulset -l zk-name=myzk
NAME       CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
svc/myzk   None         <none>        2181/TCP,2888/TCP,3888/TCP   11m

NAME                DESIRED   CURRENT   AGE
statefulsets/myzk   3         3         11m

NAME        READY     STATUS    RESTARTS   AGE
po/myzk-0   1/1       Running   0          2m
po/myzk-1   1/1       Running   0          1m
po/myzk-2   1/1       Running   0          46s

NAME                   STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc/datadir-myzk-0     Bound     pvc-a654d055-6dfa-11e7-abe1-42010a840002   1Gi        RWO           11m
pvc/datadir-myzk-1     Bound     pvc-a6601148-6dfa-11e7-abe1-42010a840002   1Gi        RWO           11m
pvc/datadir-myzk-2     Bound     pvc-a667fa41-6dfa-11e7-abe1-42010a840002   1Gi        RWO           11m
pvc/datalogdir-myzk-0  Bound     pvc-a657ff77-6dfa-11e7-abe1-42010a840002   1Gi        RWO           11m
pvc/datalogdir-myzk-1  Bound     pvc-a664407a-6dfa-11e7-abe1-42010a840002   1Gi        RWO           11m
pvc/datalogdir-myzk-2  Bound     pvc-a66b85f7-6dfa-11e7-abe1-42010a840002   1Gi        RWO           11m

NAME                DESIRED   CURRENT   AGE
statefulsets/myzk   3         3         11m
On CDK or minishift there is only a single node, so the template's required podAntiAffinity rule can only be satisfied once and only one myzk pod actually starts; the other replicas stay Pending (a single-replica workaround is sketched after the listing):
ericdeMacBook-Pro:openshift ericnie$ oc get pods
NAME      READY     STATUS    RESTARTS   AGE
myzk-0    1/1       Running   0          1m
myzk-1    0/1       Pending   0          1m
myzk-2    0/1       Pending   0          1m
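On a one-node cluster the Pending replicas can be avoided by shrinking the ensemble; a minimal sketch (a single-member ensemble is only suitable for development):

# Redeploy with one replica so the podAntiAffinity rule
# can be satisfied on a single-node cluster
oc new-app zk-persistent -p NAME=myzk -p ZOO_REPLICAS=1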
Verify the deployment:
ericdeMacBook-Pro:openshift ericnie$ for i in 0 1 2; do oc exec myzk-$i -- hostname; done
myzk-0
myzk-1
myzk-2
ericdeMacBook-Pro:openshift ericnie$ for i in 0 1 2; do echo "myid myzk-$i"; oc exec myzk-$i -- cat /opt/zookeeper/data/myid; done
myid myzk-0
1
myid myzk-1
2
myid myzk-2
3
ericdeMacBook-Pro:openshift ericnie$ for i in 0 1 2; do oc exec myzk-$i -- hostname -f; done
myzk-0.myzk.myproject.svc.cluster.local
myzk-1.myzk.myproject.svc.cluster.local
myzk-2.myzk.myproject.svc.cluster.local
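The state of each member can also be queried with the zk_status.sh helper that the Dockerfile links into /usr/bin (the same script the probes use); the exact output depends on that script, but a healthy three-node ensemble should report one leader and two followers:

# Ask each ensemble member for its status via the image's helper script
for i in 0 1 2; do oc exec myzk-$i -- zk_status.sh; done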
3. Delete the instance
oc delete all,statefulset,pvc -l zk-name=myzk
Problems encountered along the way
- The pods went into a crash state after starting; the logs showed zk_status.sh could not be found. After a long session of tweaking the Dockerfile, it turned out the deployment template was pulling the zookeeper:3.4.13 image downloaded from the public registry rather than the local build, so SOURCE_IMAGE was forced to 172.30.1.1:5000/myproject/zookeeper (see the sketch after this list).
- The pods failed to start with a permission error on the /opt/zookeeper/data directory; removing the RunAs entry from the SecurityContext and starting as root avoided it.
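For the first issue, the fix amounts to overriding the SOURCE_IMAGE parameter on redeploy; roughly:

# Recreate the instance, forcing the locally built image instead of
# the public bbvalabs/zookeeper default
oc delete all,statefulset,pvc -l zk-name=myzk
oc new-app zk-persistent -p NAME=myzk \
  -p SOURCE_IMAGE=172.30.1.1:5000/myproject/zookeeper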
Deploying Kafka
The process is similar to the Zookeeper deployment.
1. Clone the code
ericdeMacBook-Pro:minishift ericnie$ git clone https://github.com/ericnie2015/kafka-k8s-openshift.git
Cloning into 'kafka-k8s-openshift'...
remote: Enumerating objects: 607, done.
remote: Total 607 (delta 0), reused 0 (delta 0), pack-reused 607
Receiving objects: 100% (607/607), 102.01 KiB | 24.00 KiB/s, done.
Resolving deltas: 100% (382/382), done.
2. Build the image locally and push it to the registry
ericdeMacBook-Pro:kafka-k8s-openshift ericnie$ docker build -t 172.30.1.1:5000/myproject/kafka:2.12-2.0.0 .
Sending build context to Docker daemon  86.53kB
Step 1/19 : FROM openjdk:8-jre-alpine
 ---> 0fe3f0d1ee48
Step 2/19 : MAINTAINER Enrique Garcia <[email protected]>
 ---> Using cache
 ---> e51b1e313e0c
Step 3/19 : ARG KAFKA_HOME=/opt/kafka
 ---> Running in 0a464e9d1781
 ---> abadcf5d52d5
Removing intermediate container 0a464e9d1781
Step 4/19 : ARG KAFKA_USER=kafka
 ---> Running in b2e50be2d35b
 ---> e3f1455c4aca
ericdeMacBook-Pro:kafka-k8s-openshift ericnie$ docker push 172.30.1.1:5000/myproject/kafka:2.12-2.0.0
The push refers to a repository [172.30.1.1:5000/myproject/kafka]
84cb97552ea5: Pushed
681963d6c624: Pushed
47afbbc52b62: Pushed
81d8600a6e97: Pushed
8457712c19b8: Pushed
6286fd332b87: Pushed
c2f9d211658b: Pushed
12c374f8270a: Mounted from myproject/zookeeper
0c3170905795: Mounted from myproject/zookeeper
df64d3292fd6: Mounted from myproject/zookeeper
2.12-2.0.0: digest: sha256:9ed95c9c7682b49f76d4b5454a704db5ba9561127fe86fe6ca52bd673c279ee5 size: 2413
3. Deploy from the template
ericdeMacBook-Pro:openshift ericnie$ cat kafka-persistent.yaml
kind: Template
apiVersion: v1
metadata:
  name: kafka-persistent
  annotations:
    openshift.io/display-name: Kafka (Persistent)
    description: Create a Kafka cluster, with persistent storage.
    iconClass: icon-database
    tags: messaging,kafka
labels:
  template: kafka-persistent
  component: kafka
parameters:
- name: NAME
  description: Name.
  required: true
  value: kafka
- name: KAFKA_VERSION
  description: Kafka Version (Scala and kafka version).
  required: true
  value: "2.12-2.0.0"
- name: SOURCE_IMAGE
  description: Container image source.
  value: 172.30.1.1:5000/myproject/kafka
  required: true
- name: REPLICAS
  description: Number of replicas.
  required: true
  value: "3"
- name: KAFKA_HEAP_OPTS
  description: Kafka JVM Heap options. Consider value of params RESOURCE_MEMORY_REQ and RESOURCE_MEMORY_LIMIT.
  required: true
  value: "-Xmx1960M -Xms1960M"
- name: SERVER_NUM_PARTITIONS
  description: >
    The default number of log partitions per topic.
    More partitions allow greater parallelism for consumption,
    but this will also result in more files across the brokers.
  required: true
  value: "1"
- name: SERVER_DELETE_TOPIC_ENABLE
  description: >
    Topic deletion enabled.
    Switch to enable topic deletion or not, default value is 'true'
  value: "true"
- name: SERVER_LOG_RETENTION_HOURS
  description: >
    Log retention hours.
    The minimum age of a log file to be eligible for deletion.
  value: "2147483647"
- name: SERVER_ZOOKEEPER_CONNECT
  description: >
    Zookeeper conection string, a list as URL with nodes separated by ','.
  value: "zk-persistent-0.zk-persistent:2181,zk-persistent-1.zk-persistent:2181,zk-persistent-2.zk-persistent:2181"
  required: true
- name: SERVER_ZOOKEEPER_CONNECT_TIMEOUT
  description: >
    The max time that the client waits to establish a connection to zookeeper (ms).
  value: "6000"
  required: true
- name: VOLUME_KAFKA_CAPACITY
  description: Kafka logs capacity.
  required: true
  value: "10Gi"
- name: RESOURCE_MEMORY_REQ
  description: The memory resource request.
  value: "2Gi"
- name: RESOURCE_MEMORY_LIMIT
  description: The limits for memory resource.
  value: "2Gi"
- name: RESOURCE_CPU_REQ
  description: The CPU resource request.
  value: "1"
- name: RESOURCE_CPU_LIMIT
  description: The limits for CPU resource.
  value: "2"
- name: LP_INITIAL_DELAY
  description: >
    LivenessProbe initial delay in seconds.
  value: "30"
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${NAME}
    labels:
      app: ${NAME}
    annotations:
      service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  spec:
    ports:
    - port: 9092
      name: server
    - port: 2181
      name: zkclient
    - port: 2888
      name: zkserver
    - port: 3888
      name: zkleader
    clusterIP: None
    selector:
      app: ${NAME}
- apiVersion: apps/v1beta1
  kind: StatefulSet
  metadata:
    name: ${NAME}
    labels:
      app: ${NAME}
  spec:
    podManagementPolicy: "Parallel"
    serviceName: ${NAME}
    replicas: ${REPLICAS}
    template:
      metadata:
        labels:
          app: ${NAME}
          template: kafka-persistent
          component: kafka
        annotations:
          # Use this annotation if you want allocate each pod on different node
          # Note the number of nodes must be upper than REPLICAS parameter.
          scheduler.alpha.kubernetes.io/affinity: >
            {
              "podAntiAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": [{
                  "labelSelector": {
                    "matchExpressions": [{
                      "key": "app",
                      "operator": "In",
                      "values": ["${NAME}"]
                    }]
                  },
                  "topologyKey": "kubernetes.io/hostname"
                }]
              }
            }
      spec:
        containers:
        - name: ${NAME}
          imagePullPolicy: IfNotPresent
          image: ${SOURCE_IMAGE}:${KAFKA_VERSION}
          resources:
            requests:
              memory: ${RESOURCE_MEMORY_REQ}
              cpu: ${RESOURCE_CPU_REQ}
            limits:
              memory: ${RESOURCE_MEMORY_LIMIT}
              cpu: ${RESOURCE_CPU_LIMIT}
          ports:
          - containerPort: 9092
            name: server
          - containerPort: 2181
            name: zkclient
          - containerPort: 2888
            name: zkserver
          - containerPort: 3888
            name: zkleader
          env:
          - name: KAFKA_REPLICAS
            value: ${REPLICAS}
          - name: KAFKA_ZK_LOCAL
            value: "false"
          - name: KAFKA_HEAP_OPTS
            value: ${KAFKA_HEAP_OPTS}
          - name: SERVER_num_partitions
            value: ${SERVER_NUM_PARTITIONS}
          - name: SERVER_delete_topic_enable
            value: ${SERVER_DELETE_TOPIC_ENABLE}
          - name: SERVER_log_retention_hours
            value: ${SERVER_LOG_RETENTION_HOURS}
          - name: SERVER_zookeeper_connect
            value: ${SERVER_ZOOKEEPER_CONNECT}
          - name: SERVER_log_dirs
            value: "/opt/kafka/data/logs"
          - name: SERVER_zookeeper_connection_timeout_ms
            value: ${SERVER_ZOOKEEPER_CONNECT_TIMEOUT}
          livenessProbe:
            exec:
              command:
              - kafka_server_status.sh
            initialDelaySeconds: ${LP_INITIAL_DELAY}
            timeoutSeconds: 5
          volumeMounts:
          - name: kafka-data
            mountPath: /opt/kafka/data
    volumeClaimTemplates:
    - metadata:
        name: kafka-data
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: ${VOLUME_KAFKA_CAPACITY}
Modify the template's image registry and zookeeper address; I only have the one zookeeper deployment, so that is all I pointed it at. (Rather than editing the YAML, the same values can also be overridden as template parameters, as sketched below.)
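A minimal sketch of the parameter-override approach, reusing the zookeeper DNS names from the verification step earlier:

# Override the image and zookeeper connect string at deploy time
# instead of editing kafka-persistent.yaml
oc new-app kafka-persistent \
  -p SOURCE_IMAGE=172.30.1.1:5000/myproject/kafka \
  -p SERVER_ZOOKEEPER_CONNECT=myzk-0.myzk.myproject.svc.cluster.local:2181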
ericdeMacBook-Pro:openshift ericnie$ oc create -f kafka-persistent.yaml
template "kafka-persistent" created
ericdeMacBook-Pro:openshift ericnie$ oc new-app kafka-persistent
--> Deploying template "myproject/kafka-persistent" to project myproject

     Kafka (Persistent)
     ---------
     Create a Kafka cluster, with persistent storage.

     * With parameters:
        * NAME=kafka
        * KAFKA_VERSION=2.12-2.0.0
        * SOURCE_IMAGE=172.30.1.1:5000/myproject/kafka
        * REPLICAS=3
        * KAFKA_HEAP_OPTS=-Xmx1960M -Xms1960M
        * SERVER_NUM_PARTITIONS=1
        * SERVER_DELETE_TOPIC_ENABLE=true
        * SERVER_LOG_RETENTION_HOURS=2147483647
        * SERVER_ZOOKEEPER_CONNECT=myzk-0.myzk.myproject.svc.cluster.local:2181,myzk-1.myzk.myproject.svc.cluster.local:2181,myzk-2.myzk.myproject.svc.cluster.local:2181
        * SERVER_ZOOKEEPER_CONNECT_TIMEOUT=6000
        * VOLUME_KAFKA_CAPACITY=10Gi
        * RESOURCE_MEMORY_REQ=0.2Gi
        * RESOURCE_MEMORY_LIMIT=2Gi
        * RESOURCE_CPU_REQ=0.2
        * RESOURCE_CPU_LIMIT=2
        * LP_INITIAL_DELAY=30

--> Creating resources ...
    service "kafka" created
    statefulset "kafka" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/kafka'
    Run 'oc status' to view your app.
Because there was not enough memory, the brokers could not actually start. (A lower-memory parameter set is sketched below.)
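On a memory-constrained machine it may help to lower the resource requests and the JVM heap together; these particular values are untested guesses, chosen only to stay consistent with each other:

# Assumption: values below are illustrative; tune them to what the
# node can actually spare. Heap should fit inside the memory limit.
oc new-app kafka-persistent \
  -p RESOURCE_MEMORY_REQ=512Mi \
  -p RESOURCE_MEMORY_LIMIT=1Gi \
  -p KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"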
4. Delete
ericdeMacBook-Pro:openshift ericnie$ oc delete all,statefulset,pvc -l app=kafka
statefulset "kafka" deleted
service "kafka" deleted
persistentvolumeclaim "kafka-data-kafka-0" deleted
persistentvolumeclaim "kafka-data-kafka-1" deleted
persistentvolumeclaim "kafka-data-kafka-2" deleted