【ClickHouse】7: Multi-instance ClickHouse installation

Background:

ClickHouse is installed on three CentOS 7 servers:

HostName                 IP              Installed packages                     Instance 1 port   Instance 2 port
centf8118.sharding1.db   192.168.81.18   clickhouse-server, clickhouse-client   9000              9002
centf8119.sharding2.db   192.168.81.19   clickhouse-server, clickhouse-client   9000              9002
centf8120.sharding3.db   192.168.81.20   clickhouse-server, clickhouse-client   9000              9002

Multiple instances are installed so that a 3-shard, 2-replica cluster can be tested with only three servers. The final layout is shown below:

           Replica 1             Replica 2
Shard 1    192.168.81.18:9000    192.168.81.19:9002
Shard 2    192.168.81.19:9000    192.168.81.20:9002
Shard 3    192.168.81.20:9000    192.168.81.18:9002

1: Install ClickHouse

See the earlier post 【ClickHouse】1: ClickHouse installation (CentOS 7).

2: Add a second clickhouse instance service

2.1: Copy /etc/clickhouse-server/config.xml to a new file config9002.xml

[root@centf8118 clickhouse-server]# cp /etc/clickhouse-server/config.xml /etc/clickhouse-server/config9002.xml

2.2: Edit /etc/clickhouse-server/config9002.xml and change the following settings so the two services do not conflict

Original values in config9002.xml:

<log>/var/log/clickhouse-server/clickhouse-server.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
<http_port>8123</http_port>
<tcp_port>9000</tcp_port>
<mysql_port>9004</mysql_port>
<interserver_http_port>9009</interserver_http_port>
<path>/data/clickhouse/</path>
<tmp_path>/data/clickhouse/tmp/</tmp_path>
<user_files_path>/data/clickhouse/user_files/</user_files_path>
<access_control_path>/data/clickhouse/access/</access_control_path>
<include_from>/etc/clickhouse-server/metrika.xml</include_from>  <!-- cluster configuration file -->

Adjusted values in config9002.xml:

<log>/var/log/clickhouse-server/clickhouse-server-9002.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server-9002.err.log</errorlog>
<http_port>8124</http_port>
<tcp_port>9002</tcp_port>
<mysql_port>9005</mysql_port>
<interserver_http_port>9010</interserver_http_port>
<path>/data/clickhouse9002/</path>
<tmp_path>/data/clickhouse9002/tmp/</tmp_path>
<user_files_path>/data/clickhouse9002/user_files/</user_files_path>
<access_control_path>/data/clickhouse9002/access/</access_control_path>
<include_from>/etc/clickhouse-server/metrika9002.xml</include_from>
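
A quick diff between the two files makes it easy to confirm that only the intended settings were changed:

[root@centf8118 clickhouse-server]# diff /etc/clickhouse-server/config.xml /etc/clickhouse-server/config9002.xml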

2.3: Create the matching data directory

[root@centf8118 data]# mkdir -p /data/clickhouse9002
[root@centf8118 data]# chown -R clickhouse:clickhouse /data/clickhouse9002

PS: Remember to change the owner and group of the directory to clickhouse.

2.4: Add a startup script for the new instance

[root@centf8118 init.d]# cp /etc/init.d/clickhouse-server /etc/init.d/clickhouse-server9002
[root@centf8118 init.d]# vim /etc/init.d/clickhouse-server9002 
Make the following changes:

Before (original):
CLICKHOUSE_CONFIG=$CLICKHOUSE_CONFDIR/config.xml
CLICKHOUSE_PIDFILE="$CLICKHOUSE_PIDDIR/$PROGRAM.pid"

After (adjusted):
CLICKHOUSE_CONFIG=$CLICKHOUSE_CONFDIR/config9002.xml
CLICKHOUSE_PIDFILE="$CLICKHOUSE_PIDDIR/$PROGRAM-9002.pid"

2.5: After the steps above are done on centf81.18, perform exactly the same operations on the other two servers.
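
Rather than redoing every edit by hand, the prepared files can be pushed from centf81.18 to the other nodes (a sketch, assuming root SSH access between the servers):

for host in 192.168.81.19 192.168.81.20; do
    # copy the second-instance config and init script
    scp /etc/clickhouse-server/config9002.xml $host:/etc/clickhouse-server/
    scp /etc/init.d/clickhouse-server9002 $host:/etc/init.d/
    # create the data directory and fix ownership on the remote node
    ssh $host "mkdir -p /data/clickhouse9002 && chown -R clickhouse:clickhouse /data/clickhouse9002"
done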

3: Cluster configuration (3 shards, 2 replicas)

3.1: The part shared by all six metrika*.xml files:

<yandex>
    <!-- cluster configuration -->
    <clickhouse_remote_servers>
        <!-- 3 shards, 2 replicas -->
        <xinchen_3shards_2replicas>
            <shard>
                <weight>1</weight>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.81.18</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>192.168.81.19</host>
                    <port>9002</port>
                </replica>
            </shard>
            <shard>
                <weight>1</weight>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.81.19</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>192.168.81.20</host>
                    <port>9002</port>
                </replica>
            </shard>
            <shard>
                <weight>1</weight>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.81.20</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>192.168.81.18</host>
                    <port>9002</port>
                </replica>
            </shard>
        </xinchen_3shards_2replicas> 
    </clickhouse_remote_servers>
    
    <!-- ZooKeeper configuration -->
    <zookeeper-servers>
        <node index="1">
            <host>192.168.81.18</host>
            <port>4181</port>
        </node>
        <node index="2">
            <host>192.168.81.19</host>
            <port>4181</port>
        </node>
        <node index="3">
            <host>192.168.81.20</host>
            <port>4181</port>
        </node>
    </zookeeper-servers>
    
    <!-- macros configuration -->
    <macros>
        <!-- <replica>192.168.81.18</replica> -->
        <layer>01</layer>
        <shard>01</shard>
        <replica>cluster01-01-1</replica>
    </macros>
    
    <networks>
        <ip>::/0</ip>
    </networks>
    
    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
</yandex>
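
Before restarting anything it is worth confirming that each metrika file is well-formed XML; xmllint from libxml2 (normally present on CentOS 7) prints nothing when a file parses cleanly:

xmllint --noout /etc/clickhouse-server/metrika.xml
xmllint --noout /etc/clickhouse-server/metrika9002.xml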

3.2: The parts of metrika*.xml that differ per instance:

metrika.xml for instance 1 on centf81.18 (port 9000):
<macros>
    <!-- <replica>centf81.18</replica> -->
    <layer>01</layer>
    <shard>01</shard>
    <replica>cluster01-01-1</replica>
</macros>


metrika9002.xml for instance 2 on centf81.18 (port 9002):
<macros>
    <!-- <replica>centf81.18</replica> -->
    <layer>01</layer>
    <shard>03</shard>
    <replica>cluster01-03-2</replica>
</macros>

metrika.xml for instance 1 on centf81.19 (port 9000):
<macros>
    <!-- <replica>centf81.19</replica> -->
    <layer>01</layer>
    <shard>02</shard>
    <replica>cluster01-02-1</replica>
</macros>


metrika9002.xml for instance 2 on centf81.19 (port 9002):
<macros>
    <!-- <replica>centf81.19</replica> -->
    <layer>01</layer>
    <shard>01</shard>
    <replica>cluster01-01-2</replica>
</macros>





metrika.xml for instance 1 on centf81.20 (port 9000):
<macros>
    <!-- <replica>centf81.20</replica> -->
    <layer>01</layer>
    <shard>03</shard>
    <replica>cluster01-03-1</replica>
</macros>


metrika9002.xml for instance 2 on centf81.20 (port 9002):
<macros>
    <!-- <replica>centf81.20</replica> -->
    <layer>01</layer>
    <shard>02</shard>
    <replica>cluster01-02-2</replica>
</macros>

Note: the pattern here should be obvious; if not, compare these values against the two tables at the top of this post.
layer is the two-level sharding identifier (01 here), shard is the shard number, and replica is the replica identifier.
The naming scheme cluster{layer}-{shard}-{replica} is used, so cluster01-02-1 means replica 1 of shard 02 in cluster 01; this is both easy to read and uniquely identifies every replica.
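
Once the instances are started (section 3.3), the macro values each one actually loaded can be verified from the client; system.macros lists what will be substituted for {layer}, {shard} and {replica}:

SELECT * FROM system.macros;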

One extra warning: if you have been following the earlier posts in this series, drop the replicated tables created there before carrying out these steps, otherwise the service will fail to start.

The reason is that the replica name in macros has changed: it used to be 01/02/03 and is now of the form cluster01-01-1, so tables created with the old value can no longer find their replica data.

    <macros>
        <!-- <replica>192.168.81.18</replica> -->
        <layer>01</layer>
        <shard>01</shard>
        <replica>cluster01-01-1</replica>
    </macros>

3.3: Start the high-availability ClickHouse cluster

[root@centf8118 clickhouse-server]# /etc/init.d/clickhouse-server start
[root@centf8118 clickhouse-server]# /etc/init.d/clickhouse-server9002 start

Run the two commands above on all three nodes.
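
A quick way to confirm that both instances on a node are up is to hit their HTTP ping endpoints (8123 for the default instance, 8124 for the second one, as configured earlier); each should respond with "Ok.":

curl http://127.0.0.1:8123/ping
curl http://127.0.0.1:8124/ping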

3.4: Log in and check the cluster information

centf8119.sharding2.db :) select * from system.clusters;

SELECT *
FROM system.clusters

┌─cluster───────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─────┬─host_address──┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ xinchen_3shards_2replicas │         1 │            1 │           1 │ 192.168.81.18 │ 192.168.81.18 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ xinchen_3shards_2replicas │         1 │            1 │           2 │ 192.168.81.19 │ 192.168.81.19 │ 9002 │        0 │ default │                  │            0 │                       0 │
│ xinchen_3shards_2replicas │         2 │            1 │           1 │ 192.168.81.19 │ 192.168.81.19 │ 9000 │        1 │ default │                  │            0 │                       0 │
│ xinchen_3shards_2replicas │         2 │            1 │           2 │ 192.168.81.20 │ 192.168.81.20 │ 9002 │        0 │ default │                  │            0 │                       0 │
│ xinchen_3shards_2replicas │         3 │            1 │           1 │ 192.168.81.20 │ 192.168.81.20 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ xinchen_3shards_2replicas │         3 │            1 │           2 │ 192.168.81.18 │ 192.168.81.18 │ 9002 │        0 │ default │                  │            0 │                       0 │
└───────────────────────────┴───────────┴──────────────┴─────────────┴───────────────┴───────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

Log in to every instance and check the cluster information to verify the configuration. If everything matches the plan laid out in the tables at the top, the cluster has been configured successfully.

clickhouse-client --host 192.168.81.18 --port 9000
clickhouse-client --host 192.168.81.18 --port 9002
clickhouse-client --host 192.168.81.19 --port 9000
clickhouse-client --host 192.168.81.19 --port 9002
clickhouse-client --host 192.168.81.20 --port 9000
clickhouse-client --host 192.168.81.20 --port 9002
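
The same check can also be scripted so that all six instances are queried in one pass (a sketch using clickhouse-client in non-interactive mode):

for host in 192.168.81.18 192.168.81.19 192.168.81.20; do
    for port in 9000 9002; do
        echo "== $host:$port =="
        clickhouse-client --host $host --port $port --query \
            "SELECT cluster, shard_num, replica_num, host_address, port FROM system.clusters WHERE cluster = 'xinchen_3shards_2replicas'"
    done
done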

4: Cluster high-availability verification

4.1: How high availability works

ZooKeeper + ReplicatedMergeTree (replicated tables) + Distributed (distributed tables)

4.2: Create the ReplicatedMergeTree tables

The table has to be created on all six instances across the three nodes, with the following SQL:

CREATE TABLE test_clusters_ha
(
    dt Date,
    path String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/test_clusters_ha', '{replica}', dt, dt, 8192);

Explanation:

ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/test_clusters_ha','{replica}',dt, dt, 8192);

The first argument is the path of the table in ZooKeeper.

The second argument is the replica name of the table in ZooKeeper.

{layer}, {shard} and {replica} are expanded from the values configured in the macros section of each metrika*.xml.
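
For reference, the statement above uses the legacy MergeTree argument form (date column, primary key, index granularity). With the newer syntax the equivalent table would look roughly like this (a sketch, not what the rest of this walkthrough uses):

CREATE TABLE test_clusters_ha
(
    dt Date,
    path String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/test_clusters_ha', '{replica}')
PARTITION BY toYYYYMM(dt)
ORDER BY dt;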

4.3: Create the Distributed table

The distributed table only needs to be created on one node; here it is created on the port-9000 instance of sharding1.

CREATE TABLE test_clusters_ha_all AS test_clusters_ha ENGINE = Distributed(xinchen_3shards_2replicas, default, test_clusters_ha, rand());
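
Optionally, if you want to be able to query the distributed table from any instance rather than only sharding1:9000, it can be created everywhere in one statement with ON CLUSTER (assuming distributed DDL is enabled in config.xml, which it is by default):

CREATE TABLE test_clusters_ha_all ON CLUSTER xinchen_3shards_2replicas
AS test_clusters_ha
ENGINE = Distributed(xinchen_3shards_2replicas, default, test_clusters_ha, rand());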

4.4: Insert and query data

insert into test_clusters_ha_all values('2020-09-01','path1');
insert into test_clusters_ha_all values('2020-09-02','path2');
insert into test_clusters_ha_all values('2020-09-03','path3');
insert into test_clusters_ha_all values('2020-09-04','path4');
insert into test_clusters_ha_all values('2020-09-05','path5');
insert into test_clusters_ha_all values('2020-09-06','path6');
insert into test_clusters_ha_all values('2020-09-07','path7');
insert into test_clusters_ha_all values('2020-09-08','path8');
insert into test_clusters_ha_all values('2020-09-09','path9');

The query results are not shown here. There are 9 rows in total, spread across the shards on the three port-9000 instances, while the port-9002 instances hold the replica copies of those three shards.
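
To see how the rows landed without logging into every instance, one replica per shard can be read through the cluster() table function (a sketch; hostName() reports which server answered):

SELECT hostName() AS host, count() AS rows
FROM cluster('xinchen_3shards_2replicas', default, test_clusters_ha)
GROUP BY host;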

4.5: Verify that existing data stays queryable when a node goes down

a. Stop both instance services on sharding2 to simulate the sharding2 node going down.

[root@centf8119 ~]# service clickhouse-server stop
Stop clickhouse-server service: DONE
[root@centf8119 ~]# service clickhouse-server9002 stop
Stop clickhouse-server service: DONE

b. First check the total row count through the distributed table.

centf8118.sharding1.db :) select count(*) from test_clusters_ha_all;

SELECT count(*)
FROM test_clusters_ha_all

┌─count()─┐
│       9 │
└─────────┘

1 rows in set. Elapsed: 0.010 sec. 

So with a single node down, data consistency is preserved and everything remains queryable. Stopping only sharding3 instead gives the same result.

c. What happens if sharding2 and sharding3 go down at the same time?

[root@centf8120 ~]# service clickhouse-server stop
Stop clickhouse-server service: DONE
[root@centf8120 ~]# service clickhouse-server9002 stop
Stop clickhouse-server service: DONE

d. After stopping both instances on sharding3 as well, query the distributed table again:

centf8118.sharding1.db :) select count(*) from test_clusters_ha_all;

SELECT count(*)
FROM test_clusters_ha_all

↗ Progress: 2.00 rows, 8.21 KB (17.20 rows/s., 70.58 KB/s.) 
Received exception from server (version 20.6.4):
Code: 279. DB::Exception: Received from 192.168.81.18:9000. DB::Exception: All connection tries failed. Log: 

Code: 32, e.displayText() = DB::Exception: Attempt to read after eof (version 20.6.4.44 (official build))
Code: 210, e.displayText() = DB::NetException: Connection refused (192.168.81.19:9000) (version 20.6.4.44 (official build))
Code: 210, e.displayText() = DB::NetException: Connection refused (192.168.81.20:9002) (version 20.6.4.44 (official build))
Code: 210, e.displayText() = DB::NetException: Connection refused (192.168.81.19:9000) (version 20.6.4.44 (official build))
Code: 210, e.displayText() = DB::NetException: Connection refused (192.168.81.20:9002) (version 20.6.4.44 (official build))
Code: 210, e.displayText() = DB::NetException: Connection refused (192.168.81.19:9000) (version 20.6.4.44 (official build))

: While executing Remote. 

0 rows in set. Elapsed: 0.119 sec. 

The query now fails outright: with sharding2 and sharding3 both down, both replicas of shard 2 (192.168.81.19:9000 and 192.168.81.20:9002) are unreachable, which is exactly what the connection errors show. In this layout sharding1, which hosts the distributed table being queried, must stay up, and only one of sharding2/sharding3 may be down at a time.

e. Then start both instances on sharding3 again and check that the distributed table can be queried. (It can; data is returned.)

f. Insert 9 new rows through the distributed table on sharding1, then check whether the local tables on sharding1 and sharding3 received the new data. (They did.)

Querying the distributed table from sharding1:9000 now returns 18 rows in total.

Local tables (sharding1:9000 + sharding3:9000 + sharding3:9002) = 18 rows = the distributed table's total.
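
The per-instance counts behind that arithmetic can be collected while sharding2 is still down (a sketch; each command reads the local table of one surviving replica):

clickhouse-client --host 192.168.81.18 --port 9000 --query "SELECT count() FROM test_clusters_ha"    # shard 1
clickhouse-client --host 192.168.81.20 --port 9002 --query "SELECT count() FROM test_clusters_ha"    # shard 2 (surviving replica)
clickhouse-client --host 192.168.81.20 --port 9000 --query "SELECT count() FROM test_clusters_ha"    # shard 3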

g. Then restart the sharding2:9000 service and check whether the local table on sharding2:9000 catches up and becomes consistent with the local table on sharding3:9002.