
Installing and verifying a Druid cluster

Contents
1. Environment and architecture
2. Druid installation
3. Druid configuration
4. Overlord ingestion: JSON
5. Overlord ingestion: CSV

1. Druid environment and architecture
Environment
CentOS 6.5
5 machines, each with 32 GB RAM and 8 cores
ZooKeeper 3.4.5
Druid 0.9.2
Hadoop 2.6.5
JDK 1.7
Architecture (IP address followed by the processes running on that host)
10.20.23.42 Broker Real-time datanode NodeManager QuorumPeerMain
10.20.23.29 middleManager datanode NodeManager
10.20.23.38 overlord datanode NodeManager QuorumPeerMain
10.20.23.82 coordinator namenode ResourceManager
10.20.23.41 historical datanode NodeManager QuorumPeerMain

2. Druid installation
I will not cover the Hadoop installation here; I originally tried Hadoop 2.3.0 but never got it to work with Druid, so I switched to 2.6.5.
The procedure is the same as for a single-node installation.

1. Extract the Druid tarball.

2. Copy files.
Copy the Hadoop configuration files into ${DRUID_HOME}/conf/druid/_common: the four files core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml.

3. Create a directory and copy the jars.
Under ${DRUID_HOME}/hadoop-dependencies/hadoop-client, create a folder named 2.6.5 (use your Hadoop version number) and copy the Hadoop jars into it.
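A combined sketch of steps 1-3, assuming the tarball and Druid live under /home/hadoop and that HADOOP_HOME points at the Hadoop 2.6.5 installation (the paths and the jar glob are illustrative; adjust them to your layout):

tar -zxvf druid-0.9.2-bin.tar.gz -C /home/hadoop/
export DRUID_HOME=/home/hadoop/druid-0.9.2

# step 2: put the Hadoop client configuration on Druid's classpath
cp ${HADOOP_HOME}/etc/hadoop/{core-site.xml,hdfs-site.xml,mapred-site.xml,yarn-site.xml} \
   ${DRUID_HOME}/conf/druid/_common/

# step 3: Hadoop jars for the indexing tasks, in a folder named after the Hadoop version
mkdir -p ${DRUID_HOME}/hadoop-dependencies/hadoop-client/2.6.5
cp ${HADOOP_HOME}/share/hadoop/*/*.jar ${DRUID_HOME}/hadoop-dependencies/hadoop-client/2.6.5/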

4. Edit the configuration files.

Note: the configuration is tedious and error-prone; a single mistake in any file will keep tasks from running.

Common configuration (${DRUID_HOME}/conf/druid/_common/common.runtime.properties):

# Extensions: load the HDFS deep-storage and MySQL metadata-storage extensions
druid.extensions.loadList=["druid-hdfs-storage","mysql-metadata-storage"]

# ZooKeeper
druid.zk.service.host=10.20.23.82:2181
druid.zk.paths.base=/druid/cluster

# Metadata storage (MySQL)
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://10.20.23.42:3306/druid
druid.metadata.storage.connector.user=root
druid.metadata.storage.connector.password=123456

# Deep storage
# For HDFS (make sure to include the HDFS extension and that your Hadoop config files are on the classpath):
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments

# Indexing service logs
# For HDFS (make sure to include the HDFS extension and that your Hadoop config files are on the classpath):
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs
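The MySQL database named in connectURI has to exist before the cluster starts, and the HDFS paths used for deep storage and task logs should be writable by the user running Druid. A minimal preparation sketch, assuming MySQL on 10.20.23.42 with the root/123456 credentials from the properties above (acceptable for a test cluster only; the utf8 character set follows the Druid MySQL extension documentation):

mysql -h 10.20.23.42 -u root -p -e "CREATE DATABASE IF NOT EXISTS druid DEFAULT CHARACTER SET utf8;"

# deep-storage and indexing-log directories (Druid can usually create these itself,
# but creating them up front avoids permission surprises)
hadoop fs -mkdir -p /druid/segments /druid/indexing-logs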

Broker configuration

For the broker, size the heap and direct memory to the node, add the druid.host property (the IP of the machine the broker actually runs on), and adjust -Duser.timezone: Druid defaults to UTC (Z), so we append +0800 for our local time zone.

$ cat conf/druid/broker/jvm.config
-server
-Xms1g
-Xmx1g
-XX:MaxDirectMemorySize=4096m
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
$ cat conf/druid/broker/runtime.properties
druid.host=10.20.23.82
druid.service=druid/broker
druid.port=8082

# HTTP server threads
druid.broker.http.numConnections=5
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=7

# Query cache
druid.broker.cache.useCache=true
druid.broker.cache.populateCache=true
druid.cache.type=local
druid.cache.sizeInBytes=2000000000
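A quick sanity check on the direct-memory setting: Druid 0.9.x expects -XX:MaxDirectMemorySize to be at least druid.processing.buffer.sizeBytes × (druid.processing.numThreads + 1), plus room for any merge buffers if configured. Here that is 512 MB × (7 + 1) = 4096 MB, which matches the 4096m in jvm.config above; the same check applies to the historical node below.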

Coordinator configuration

Same notes as for the broker: size the memory to the node, set druid.host to the coordinator's own IP, and append +0800 to -Duser.timezone.

$ cat conf/druid/coordinator/jvm.config
-server
-Xms1g
-Xmx1g
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
-Dderby.stream.error.file=var/druid/derby.log
$ cat conf/druid/coordinator/runtime.properties
druid.host=10.20.23.82
druid.service=druid/coordinator
druid.port=18091

druid.coordinator.startDelay=PT30S
druid.coordinator.period=PT30S

Historical configuration

Same notes as for the broker: size the memory to the node, set druid.host to the historical node's own IP, and append +0800 to -Duser.timezone.

$ cat conf/druid/historical/jvm.config
-server
-Xms1g
-Xmx1g
-XX:MaxDirectMemorySize=4960m
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
$ cat conf/druid/historical/runtime.properties
druid.host=10.20.23.82
druid.service=druid/historical
druid.port=8083

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=7

# Segment storage
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:130000000000}]
druid.server.maxSize=130000000000

MiddleManager configuration

Same notes as for the broker: size the memory to the node, set druid.host to the middleManager's own IP, and append +0800 to -Duser.timezone.
The 2.6.5 in hadoop-client:2.6.5 must match the directory name created in step 3 above.

$ cat conf/druid/middleManager/jvm.config
-server
-Xms64m
-Xmx64m
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
$ cat conf/druid/middleManager/runtime.properties
druid.service=druid/middleManager
druid.port=8091

# Number of tasks per middleManager
druid.worker.capacity=3

# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC+0800 -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=2

# Hadoop indexing
druid.host=10.20.23.82
druid.indexer.task.hadoopWorkingPath=/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.6.5"]

Overlord configuration

Same notes as for the broker: size the memory to the node, set druid.host to the overlord's own IP, and append +0800 to -Duser.timezone.

$ cat conf/druid/overlord/jvm.config
-server
-Xms1g
-Xmx1g
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
$ cat conf/druid/overlord/runtime.properties
druid.host=10.20.23.82
druid.service=druid/overlord
druid.port=8090

druid.indexer.queue.startDelay=PT30S

druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata

5. Copy the Druid installation directory to the other machines with scp, for example:
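A sketch, assuming the configured installation sits in /home/hadoop on the current node and the hadoop user exists on every machine in the architecture table (adjust paths and the node list to your cluster):

scp -r /home/hadoop/druid-0.9.2 hadoop@10.20.23.29:/home/hadoop/
scp -r /home/hadoop/druid-0.9.2 hadoop@10.20.23.38:/home/hadoop/
scp -r /home/hadoop/druid-0.9.2 hadoop@10.20.23.41:/home/hadoop/
scp -r /home/hadoop/druid-0.9.2 hadoop@10.20.23.42:/home/hadoop/

# remember to set druid.host in each component's runtime.properties to the IP of the node it runs on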
6. Start each process on its corresponding machine:

java `cat conf/druid/historical/jvm.config | xargs` -cp "conf/druid/_common:conf/druid/historical:lib/*" io.druid.cli.Main server historical

java `cat conf/druid/broker/jvm.config | xargs` -cp "conf/druid/_common:conf/druid/broker:lib/*" io.druid.cli.Main server broker

java `cat conf/druid/coordinator/jvm.config | xargs` -cp "conf/druid/_common:conf/druid/coordinator:lib/*" io.druid.cli.Main server coordinator

java `cat conf/druid/overlord/jvm.config | xargs` -cp "conf/druid/_common:conf/druid/overlord:lib/*" io.druid.cli.Main server overlord

java `cat conf/druid/middleManager/jvm.config | xargs` -cp "conf/druid/_common:conf/druid/middleManager:lib/*" io.druid.cli.Main server middleManager
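Once the processes are up, each one can be sanity-checked over HTTP: every Druid 0.9.x node answers on /status at its configured port. For example, against the broker and the overlord (IPs as in the architecture table above):

curl http://10.20.23.42:8082/status
curl http://10.20.23.38:8090/status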

4. Overlord ingestion: JSON file

[hadoop@SZB-L0038784 hadoop-client]$ hadoop fs -ls /druid
drwxr-xr-x   - hadoop supergroup           0 2017-05-30 16:02 /druid/hadoop-tmp
drwxr-xr-x   - hadoop supergroup           0 2017-05-30 16:00 /druid/indexing-logs
drwxr-xr-x   - hadoop supergroup           0 2017-05-30 15:39 /druid/segments
-rw-r--r--   3 hadoop supergroup         153 2017-05-29 16:58 /druid/wikipedia_data.csv
-rw-r--r--   3 hadoop supergroup    17106256 2017-05-29 10:54 /druid/wikiticker-2015-09-12-sampled.json
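This listing assumes the sample data was uploaded to HDFS beforehand; a sketch of how the wikiticker sample that ships with the Druid quickstart could get there (filename as in the listing above):

hadoop fs -mkdir -p /druid
hadoop fs -put quickstart/wikiticker-2015-09-12-sampled.json /druid/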

Submit the indexing task to the overlord:

curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/wikiticker-index.json 10.20.23.38:8090/druid/indexer/v1/task
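The POST returns a task ID. Besides the web console, the task can also be polled through the overlord API (replace <taskId> with the ID returned by the command above):

curl http://10.20.23.38:8090/druid/indexer/v1/task/<taskId>/status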

When the task shows SUCCESS in the overlord console, the ingestion has completed successfully.
(Screenshot of the overlord console omitted.)
Querying

[hadoop@SZB-L0038787 druid-0.9.2]$ curl -L -H'Content-Type: application/json' -XPOST --data-binary @quickstart/wikiticker-top-pages.json http://10.20.23.42:8082/druid/v2/?pretty
[ {
  "timestamp" : "2015-09-12T00:46:58.771Z",
  "result" : [ {
    "page" : "Wikipedia:Vandalismusmeldung",
    "edits" : 20
  }, {
    "page" : "Jeremy Corbyn",
    "edits" : 18
  }, {
    "page" : "User talk:Dudeperson176123",
    "edits" : 17
  }, {
    "page" : "Utente:Giulio Mainardi/Sandbox",
    "edits" : 16
  }, {
    "page" : "User:Cyde/List of candidates for speedy deletion/Subpage",
    "edits" : 15
  }, {
    "page" : "Wikipédia:Le Bistro/12 septembre 2015",
    "edits" : 14
  }, {
    "page" : "Wikipedia:Administrators' noticeboard/Incidents",
    "edits" : 12
  }, {
    "page" : "Kim Davis (county clerk)",
    "edits" : 11
  }, {
    "page" : "The Naked Brothers Band (TV series)",
    "edits" : 10
  }, {
    "page" : "Гомосексуальный образ жизни",
    "edits" : 10
  }, {
    "page" : "Wikipedia:Administrator intervention against vandalism",
    "edits" : 9
  }, {
    "page" : "Wikipedia:De kroeg",
    "edits" : 9
  }, {
    "page" : "Wikipedia:Files for deletion/2015 September 12",
    "edits" : 9
  }, {
    "page" : "التهاب السحايا",
    "edits" : 9
  }, {
    "page" : "Chess World Cup 2015",
    "edits" : 8
  }, {
    "page" : "The Book of Souls",
    "edits" : 8
  }, {
    "page" : "Wikipedia:Requests for page protection",
    "edits" : 8
  }, {
    "page" : "328-я стрелковая дивизия (2-го формирования)",
    "edits" : 7
  }, {
    "page" : "Campanya dels Balcans (1914-1918)",
    "edits" : 7
  }, {
    "page" : "Homo naledi",
    "edits" : 7
  }, {
    "page" : "List of shipwrecks in August 1944",
    "edits" : 7
  }, {
    "page" : "User:Tokyogirl79/sandbox4",
    "edits" : 7
  }, {
    "page" : "Via Lliure",
    "edits" : 7
  }, {
    "page" : "Vorlage:Revert-Statistik",
    "edits" : 7
  }, {
    "page" : "Wikipedia:Löschkandidaten/12. September 2015",
    "edits" : 7
  } ]
} ]

Contents of the index spec. Note in particular that the jobProperties section must be included; without it the job fails (the mapreduce.job.classloader settings keep Hadoop's classpath from clashing with Druid's).

The index spec:

[hadoop@SZB-L0038787 druid-0.9.2]$ cat quickstart/wikiticker-index.json
{
  "type" : "index_hadoop",
  "spec" : {
    "ioConfig" : {
      "type" : "hadoop",
      "inputSpec" : {
        "type" : "static",
        "paths" : "/druid/wikiticker-2015-09-12-sampled.json"
      }
    },
    "dataSchema" : {
      "dataSource" : "wikiticker",
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "day",
        "queryGranularity" : "none",
        "intervals" : ["2015-09-12/2015-09-13"]
      },
      "parser" : {
        "type" : "hadoopyString",
        "parseSpec" : {
          "format" : "json",
          "dimensionsSpec" : {
            "dimensions" : [
              "channel",
              "cityName",
              "comment",
              "countryIsoCode",
              "countryName",
              "isAnonymous",
              "isMinor",
              "isNew",
              "isRobot",
              "isUnpatrolled",
              "metroCode",
              "namespace",
              "page",
              "regionIsoCode",
              "regionName",
              "user"
            ]
          },
          "timestampSpec" : {
            "format" : "auto",
            "column" : "time"
          }
        }
      },
      "metricsSpec" : [
        {
          "name" : "count",
          "type" : "count"
        },
        {
          "name" : "added",
          "type" : "longSum",
          "fieldName" : "added"
        },
        {
          "name" : "deleted",
          "type" : "longSum",
          "fieldName" : "deleted"
        },
        {
          "name" : "delta",
          "type" : "longSum",
          "fieldName" : "delta"
        },
        {
          "name" : "user_unique",
          "type" : "hyperUnique",
          "fieldName" : "user"
        }
      ]
    },
    "tuningConfig" : {
      "type" : "hadoop",
      "partitionsSpec" : {
        "type" : "hashed",
        "targetPartitionSize" : 5000000
      },
       "jobProperties" : {
        "mapreduce.job.classloader": "true",
        "mapreduce.job.classloader.system.classes": "-javax.validation.,java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop."
      }
    }
  }
}

The query JSON file:

[hadoop@SZB-L0038787 druid-0.9.2]$ cat quickstart/wikiticker-top-pages.json 
{
  "queryType" : "topN",
  "dataSource" : "wikiticker",
  "intervals" : ["2015-09-12/2015-09-13"],
  "granularity" : "all",
  "dimension" : "page",
  "metric" : "edits",
  "threshold" : 25,
  "aggregations" : [
    {
      "type" : "longSum",
      "name" : "edits",
      "fieldName" : "count"
    }
  ]
}

5. Overlord ingestion: CSV file
First, prepare some CSV data:

$ cat test
2017-08-01T01:02:33Z,10202111900173056925,30202111900037998891,2020211,20202000434,2,1,B18,3,4,J,B,2020003088,,,,,,01,,00000655,,,,,0.00,OLAPMAN,2017-01-0421:16:08+08:00,OLAPMAN,2017-01-0421:16:08+08:00,2015-06-0910:56:03+08:00,
2017-07-16T01:02:33Z,10202111900164385197,30202111900034745280,2020211,20202000434,2,1,B18,3,4,J,B,2020003454,,,,,,01,,00000655,,,,,-2000.00,OLAPMAN,2017-01-0421:16:08+08:00,OLAPMAN,2017-01-0421:16:08+08:00,2015-04-1510:42:26+08:00,
2017-05-15T01:02:33Z,13024011900164473005,33024011900035728305,2302401,2302401,2,1,A01,2,1,G,H,2300000212,,,,30240061,,01,309,,,,,,59.25,OLAPMAN,2017-01-0421:16:08+08:00,OLAPMAN,2017-01-0421:16:08+08:00,2015-04-1517:23:31+08:00,
2017-08-01T01:02:33Z,10202111900173999588,30202111900038540746,2020211,20202000434,2,1,B18,3,4,J,B,2020003155,,,,,,01,,00000655,,,,,0.00,OLAPMAN,2017-01-0421:16:08+08:00,OLAPMAN,2017-01-0421:16:08+08:00,2015-06-1515:41:34+08:00,
2017-08-01T01:02:33Z,10202111900174309914,30202111900038542126,2020211,20202000434,2,1,B18,3,4,J,B,2020003155,,,,,,01,,00000655,,,,,0.00,OLAPMAN,2017-01-0421:16:08+08:00,OLAPMAN,2017-01-0421:16:08+08:00,2015-06-1710:36:16+08:00,
2017-08-01T01:02:33Z,10202111900176540667,30202111900038893351,2020211,20202000434,2,1,B18,3,4,J,B,2020003155,,,,,,01,,00000655,,,,,0.00,OLAPMAN,2017-01-0421:16:08+08:00,OLAPMAN,2017-01-0421:16:08+08:00,2015-06-2913:54:09+08:00,
2017-06-18T01:02:33Z,12078001900174397522,32078001900038476523,22078,22078002835,2,1,A56,2,2,C,A,2200041441,,,,20760002,,01,999,,,,,,0.00,OLAPMAN,2017-01-0421:16:08+08:00,OLAPMAN,2017-01-0421:16:08+08:00,2015-06-1717:36:41+08:00,
2017-12-24T01:02:33Z,11414021900149429403,31414021900036312816,2141402,21414020238,2,1,A01,2,2,8,9,2141400018,,,,14140018,,01,402,,,,,,0.00,OLAPMAN,2017-01-0421:16:08+08:00,OLAPMAN,2017-01-0421:16:08+08:00,2014-12-2612:15:31+08:00,
2017-06-01T01:02:33Z,10202111900165839017,30202111900035354013,2020211,20202000434,2,1,B18,3,4,J,B,2020003088,,,,,,01,,00000655,,,,,0.00,OLAPMAN,2017-01-0421:16:08+08:00,OLAPMAN,2017-01-0421:16:08+08:00,2015-04-2314:32:53+08:00,
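Upload the file to the HDFS path referenced by the ioConfig in the spec below (/druid/test matches the "paths" value in test-index.json):

hadoop fs -put test /druid/test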

Prepare the index spec JSON for the CSV data:

[hadoop@SZB-L0038787 quickstart]$ cat test-index.json
{
  "type": "index_hadoop",
  "spec": {
    "dataSchema": {
      "dataSource": "test",
      "parser": {
        "type": "string",

 "parseSpec":
 {
       "format" : "csv",
       "timestampSpec" : 
   {
         "column" : "stat_date"
       },
       "columns" : [
        "stat_date",
                "policy_no",
                "endorse_no",
                "department_code",
                "sale_group_code",
                "business_type",
                "business_mode",
                "plan_code",
                "business_source_code",
                "business_source_detail_code",
                "channel_source_code",
                "channel_source_detail_code",
                "sale_agent_code",
                "primary_introducer_code",
                "renewal_type",
                "purchase_year",
                "agent_code",
                "partner_id",
                "currency_code",
                "parent_company_code",
                "broker_code",
                "dealer_code",
                "auto_series_id",
                "usage_attribute_code",
                "new_channel_ground_mark",
                "ply_prem_day",
                "created_by",
                "date_created",
                "updated_by",
                "date_updated",
                "underwrite_time",
                "partner_worknet_code"             

             ],
      "dimensionsSpec" : 
   {
        "dimensions" : [
           "department_code",
                "sale_group_code",
                "business_type",
                "business_mode",
                "plan_code",
                "business_source_code",
                "business_source_detail_code",
                "channel_source_code",
                "channel_source_detail_code",
                "sale_agent_code"
            ]
       } 
        }
      },
      "metricsSpec": [
        {
          "type": "count",
          "name": "count"
        },
        {
          "type": "doubleSum",
          "name": "ply_prem_day",
          "fieldName": "ply_prem_day"
        }
      ],
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "DAY",
        "queryGranularity": "NONE",
        "intervals": ["2017-05-15/2017-12-25"]
      }
    },
    "ioConfig" : {
      "type" : "hadoop",
      "inputSpec" : {
        "type" : "static",
        "paths" : "/druid/test"
      }
    },
    "tuningConfig" : {
      "type": "hadoop",
     "jobProperties" : {
        "mapreduce.job.classloader": "true",
        "mapreduce.job.classloader.system.classes": "-javax.validation.,java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop."
      }
    }
  }
}
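Submitting the CSV indexing task works exactly like the JSON one earlier, just with the new spec file:

curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/test-index.json 10.20.23.38:8090/druid/indexer/v1/task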

Prepare the query JSON for the CSV data:

[hadoop@SZB-
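A query against the new datasource can be written the same way as the wikiticker one. An illustrative sketch only, written as a here-doc plus the same broker call used earlier; the choice of dimension (department_code) and metric (ply_prem_day) here is an assumption, adjust it to whatever you want to analyse:

cat > quickstart/test-top.json <<'EOF'
{
  "queryType" : "topN",
  "dataSource" : "test",
  "intervals" : ["2017-05-15/2017-12-25"],
  "granularity" : "all",
  "dimension" : "department_code",
  "metric" : "ply_prem_day",
  "threshold" : 10,
  "aggregations" : [
    { "type" : "doubleSum", "name" : "ply_prem_day", "fieldName" : "ply_prem_day" }
  ]
}
EOF

curl -L -H'Content-Type: application/json' -XPOST --data-binary @quickstart/test-top.json http://10.20.23.42:8082/druid/v2/?pretty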
            
           
