Deploying a TiDB Cluster with TiUP
By 阿新 · Published 2020-07-31
I. Check the software and hardware environment

1. Supported operating system versions

| Linux OS platform | Version |
| --- | --- |
| Red Hat Enterprise Linux | 7.3 and above |
| CentOS | 7.3 and above |
| Oracle Enterprise Linux | 7.3 and above |
| Ubuntu LTS | 16.04 and above |

2. Mount the data disk on the TiKV deployment machines (skip this step if there is no separate data disk)
1. Identify the data disk
fdisk -l
2. Create the partition
parted -s -a optimal /dev/nvme0n1 mklabel gpt -- mkpart primary ext4 1 -1
3. Format the partition
mkfs.ext4 /dev/nvme0n1p1
4. Check the partition UUID
lsblk -f
nvme0n1
└─nvme0n1p1 ext4 c51eb23b-195c-4061-92a9-3fad812cc12f
5. Edit /etc/fstab and add the nodelalloc mount option
vi /etc/fstab
UUID=c51eb23b-195c-4061-92a9-3fad812cc12f /data1 ext4 defaults,nodelalloc,noatime 0 2
6. Mount the data disk
mkdir /data1 && \
mount -a
7. Verify that the mount options took effect
mount -t ext4
# If the options include nodelalloc, the setting is in effect
/dev/nvme0n1p1 on /data1 type ext4 (rw,noatime,nodelalloc,data=ordered)
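Steps 1–7 above can be collected into a single script. A minimal sketch, reusing this section's device name and mount point (both assumptions to adjust for your hardware); it only prints the destructive commands so they can be reviewed before being run as root:

```shell
#!/bin/sh
# Dry run: print the data-disk preparation commands instead of executing them.
# DEV and MNT follow the example in this section; change them for your machine.
DEV=/dev/nvme0n1
PART="${DEV}p1"      # first partition on an NVMe device
MNT=/data1

cat <<EOF
parted -s -a optimal $DEV mklabel gpt -- mkpart primary ext4 1 -1
mkfs.ext4 $PART
mkdir -p $MNT
mount -a
EOF
```

Note that the /etc/fstab entry (with the UUID reported by lsblk -f) still has to be added by hand before `mount -a` will pick up the new disk.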
3. Disable the swap partition
```bash
echo "vm.swappiness = 0" >> /etc/sysctl.conf
swapoff -a && swapon -a
sysctl -p
```
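After running the commands above, the swappiness value can be read back from /proc to confirm the sysctl change took effect (plain Linux interfaces, nothing TiDB-specific):

```shell
# Reads the live swappiness value; it should print 0 once sysctl -p has
# applied the setting from /etc/sysctl.conf.
swappiness=$(cat /proc/sys/vm/swappiness)
echo "vm.swappiness=$swappiness"
```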
4. Disable the firewall on the target machines
1. Check the firewall status (on CentOS 7.6)
sudo firewall-cmd --state
sudo systemctl status firewalld.service
2. Stop the firewall
sudo systemctl stop firewalld.service
3. Disable the firewall service from starting at boot
sudo systemctl disable firewalld.service
4. Check the firewall status again
sudo systemctl status firewalld.service
5. Install the NTP service
1. Check the NTP service status
sudo systemctl status ntpd.service
# "running" means the service is active
ntpstat
# "synchronised to NTP server" means time is being synchronized correctly
2. If NTP is not running, install and enable it:
sudo yum install ntp ntpdate && \
sudo systemctl start ntpd.service && \
sudo systemctl enable ntpd.service
6. Manually configure SSH mutual trust and passwordless sudo
1. Log in to each target machine as root and create the tidb user
useradd tidb && \
passwd tidb
2. Run visudo and append `tidb ALL=(ALL) NOPASSWD: ALL` at the end to grant tidb passwordless sudo
visudo
tidb ALL=(ALL) NOPASSWD: ALL
3. Log in to the control machine as the tidb user and run the following to set up mutual trust (generate a key pair with `ssh-keygen -t rsa` first if ~/.ssh/id_rsa.pub does not exist)
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.5.52
4. SSH to the target machine and switch users with sudo to verify that SSH mutual trust and passwordless sudo work
ssh 172.16.5.52
sudo su - tidb
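With six target machines in this deployment, the ssh-copy-id step is easier as a loop. A sketch using the host list from this post; it echoes each command instead of running it, so nothing is pushed until the echo is removed:

```shell
#!/bin/sh
# Target hosts from this post's topology; replace with your own machines.
HOSTS="172.16.5.52 172.16.4.29 172.16.4.56 172.16.4.30 172.16.4.224 172.16.5.208"

for h in $HOSTS; do
    # Remove the echo to actually distribute the tidb user's public key.
    echo ssh-copy-id -i ~/.ssh/id_rsa.pub "$h"
done
```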
7. Install the numactl tool (used to isolate CPU resources when running multiple instances on a single host)
1. Install it manually on each target machine:
sudo yum -y install numactl
2. Or, after the TiUP cluster component is installed, install it on all target machines in batch:
tiup cluster exec tidb-test --sudo --command "yum -y install numactl"
II. Install the TiUP component on the control machine
1. Install the TiUP component:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
2. Reload the environment variables:
source .bash_profile
3. Verify that TiUP is installed, install the TiUP cluster component, and check the cluster component version:
which tiup
tiup cluster
tiup --binary cluster
III. Edit the initialization configuration file (we use the minimal topology here; the other topologies we installed follow basically the same process)
1. Edit the configuration file topology.yaml (we customized the ports and paths):
[tidb@CentOS76_VM ~]$ vim topology.yaml

```yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 19100
  blackbox_exporter_port: 19115
  deploy_dir: "/tidb-deploy/test/monitored-9100"
  data_dir: "/tidb-data/test/monitored-9100"
  log_dir: "/tidb-deploy/test/monitored-9100/log"

# # Server configs are used to specify the runtime configuration of TiDB components.
# # All configuration items can be found in TiDB docs:
# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/
# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/
# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/
# # All configuration items use points to represent the hierarchy, e.g:
# #   readpool.storage.use-unified-pool
# #
# # You can overwrite this configuration via the instance-level `config` field.
server_configs:
  tidb:
    log.slow-threshold: 300
    binlog.enable: false
    binlog.ignore-error: false
  tikv:
    # server.grpc-concurrency: 4
    # raftstore.apply-pool-size: 2
    # raftstore.store-pool-size: 2
    # rocksdb.max-sub-compactions: 1
    # storage.block-cache.capacity: "16GB"
    # readpool.unified.max-thread-count: 12
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: 64

pd_servers:
  - host: 172.16.5.52
    ssh_port: 22
    name: "pd-1"
    client_port: 23794
    peer_port: 23804
    deploy_dir: "/tidb-deploy/test/pd-2379"
    data_dir: "/tidb-data/test/pd-2379"
    log_dir: "/tidb-deploy/test/pd-2379/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.pd` values.
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  - host: 172.16.4.29
    ssh_port: 22
    name: "pd-2"
    client_port: 23794
    peer_port: 23804
    deploy_dir: "/tidb-deploy/test/pd-2379"
    data_dir: "/tidb-data/test/pd-2379"
    log_dir: "/tidb-deploy/test/pd-2379/log"
  - host: 172.16.4.56
    ssh_port: 22
    name: "pd-3"
    client_port: 23794
    peer_port: 23804
    deploy_dir: "/tidb-deploy/test/pd-2379"
    data_dir: "/tidb-data/test/pd-2379"
    log_dir: "/tidb-deploy/test/pd-2379/log"

tidb_servers:
  - host: 172.16.5.52
    ssh_port: 22
    port: 4004
    status_port: 10084
    deploy_dir: "/tidb-deploy/test/tidb-4000"
    log_dir: "/tidb-deploy/test/tidb-4000/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.tidb` values.
    # config:
    #   log.slow-query-file: tidb-slow-overwrited.log
  - host: 172.16.4.29
    ssh_port: 22
    port: 4004
    status_port: 10084
    deploy_dir: "/tidb-deploy/test/tidb-4000"
    log_dir: "/tidb-deploy/test/tidb-4000/log"
  - host: 172.16.4.56
    ssh_port: 22
    port: 4004
    status_port: 10084
    deploy_dir: "/tidb-deploy/test/tidb-4000"
    log_dir: "/tidb-deploy/test/tidb-4000/log"

tikv_servers:
  - host: 172.16.4.30
    ssh_port: 22
    port: 20164
    status_port: 20184
    deploy_dir: "/tidb-deploy/test/tikv-20160"
    data_dir: "/tidb-data/test/tikv-20160"
    log_dir: "/tidb-deploy/test/tikv-20160/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.tikv` values.
    # config:
    #   server.grpc-concurrency: 4
    #   server.labels: { zone: "zone1", dc: "dc1", host: "host1" }
  - host: 172.16.4.224
    ssh_port: 22
    port: 20164
    status_port: 20184
    deploy_dir: "/tidb-deploy/test/tikv-20160"
    data_dir: "/tidb-data/test/tikv-20160"
    log_dir: "/tidb-deploy/test/tikv-20160/log"
  - host: 172.16.5.208
    ssh_port: 22
    port: 20164
    status_port: 20184
    deploy_dir: "/tidb-deploy/test/tikv-20160"
    data_dir: "/tidb-data/test/tikv-20160"
    log_dir: "/tidb-deploy/test/tikv-20160/log"

monitoring_servers:
  - host: 172.16.5.52
    ssh_port: 22
    port: 9490
    deploy_dir: "/tidb-deploy/test/prometheus-8249"
    data_dir: "/tidb-data/test/prometheus-8249"
    log_dir: "/tidb-deploy/test/prometheus-8249/log"

grafana_servers:
  - host: 172.16.5.52
    port: 3004
    deploy_dir: /tidb-deploy/test/grafana-3000

alertmanager_servers:
  - host: 172.16.5.52
    ssh_port: 22
    web_port: 9493
    cluster_port: 9494
    deploy_dir: "/tidb-deploy/test/alertmanager-9093"
    data_dir: "/tidb-data/test/alertmanager-9093"
    log_dir: "/tidb-deploy/test/alertmanager-9093/log"
```

IV. Deploy, check, start, and manage the cluster

1. Run the deploy command
tiup cluster deploy tidb-test v4.0.0 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
- The cluster deployed via TiUP cluster is named tidb-test.
- The deployed version is v4.0.0; run tiup list tidb to see the versions TiUP supports.
- The initialization configuration file is topology.yaml.
- --user root: log in to the target hosts as root to complete the deployment. This user needs SSH access to the target machines and sudo privileges on them; any other user with SSH and sudo privileges can also be used.
- [-i] and [-p]: optional. If passwordless login to the target machines is already configured, neither is needed; otherwise choose one of the two. [-i] points to the private key of the root user (or the user given by --user) that can log in to the targets; [-p] prompts interactively for that user's password.
- A successful deployment prints: Deployed cluster `tidb-test` successfully
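As an extra safeguard, newer TiUP cluster releases ship a check subcommand that audits the targets (OS version, swap, firewall, disk mount options) against a topology file before deployment. A hedged sketch that only prints the command, since whether the subcommand is available depends on your TiUP version:

```shell
#!/bin/sh
# Assumes a TiUP cluster version that provides `tiup cluster check`.
TOPO=./topology.yaml
CHECK="tiup cluster check $TOPO --user root"
echo "$CHECK"   # review, then run it on the control machine
```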
2. List the clusters managed by TiUP
tiup cluster list
[tidb@CentOS76_VM ~]$ tiup cluster list
Starting component `cluster`: list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
tidb-binlog tidb v4.0.0 /home/tidb/.tiup/storage/cluster/clusters/tidb-binlog /home/tidb/.tiup/storage/cluster/clusters/tidb-binlog/ssh/id_rsa
tidb-test tidb v4.0.0 /home/tidb/.tiup/storage/cluster/clusters/tidb-test /home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa
tidb-ticdc tidb v4.0.0 /home/tidb/.tiup/storage/cluster/clusters/tidb-ticdc /home/tidb/.tiup/storage/cluster/clusters/tidb-ticdc/ssh/id_rsa
tidb-tiflash tidb v4.0.0 /home/tidb/.tiup/storage/cluster/clusters/tidb-tiflash /home/tidb/.tiup/storage/cluster/clusters/tidb-tiflash/ssh/id_rsa
3. Check the cluster status
tiup cluster display tidb-test
[tidb@CentOS76_VM ~]$ tiup cluster display tidb-test
Starting component `cluster`: display tidb-test
TiDB Cluster: tidb-test
TiDB Version: v4.0.0
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
172.16.5.52:9493 alertmanager 172.16.5.52 9493/9494 linux/x86_64 inactive /tidb-data/test/alertmanager-9093 /tidb-deploy/test/alertmanager-9093
172.16.5.52:3004 grafana 172.16.5.52 3004 linux/x86_64 inactive - /tidb-deploy/test/grafana-3000
172.16.4.29:23794 pd 172.16.4.29 23794/23804 linux/x86_64 Down /tidb-data/test/pd-2379 /tidb-deploy/test/pd-2379
172.16.4.56:23794 pd 172.16.4.56 23794/23804 linux/x86_64 Down /tidb-data/test/pd-2379 /tidb-deploy/test/pd-2379
172.16.5.52:23794 pd 172.16.5.52 23794/23804 linux/x86_64 Down /tidb-data/test/pd-2379 /tidb-deploy/test/pd-2379
172.16.5.52:9490 prometheus 172.16.5.52 9490 linux/x86_64 inactive /tidb-data/test/prometheus-8249 /tidb-deploy/test/prometheus-8249
172.16.4.29:4004 tidb 172.16.4.29 4004/10084 linux/x86_64 Down - /tidb-deploy/test/tidb-4000
172.16.4.56:4004 tidb 172.16.4.56 4004/10084 linux/x86_64 Down - /tidb-deploy/test/tidb-4000
172.16.5.52:4004 tidb 172.16.5.52 4004/10084 linux/x86_64 Down - /tidb-deploy/test/tidb-4000
172.16.4.224:20164 tikv 172.16.4.224 20164/20184 linux/x86_64 Down /tidb-data/test/tikv-20160 /tidb-deploy/test/tikv-20160
172.16.4.30:20164 tikv 172.16.4.30 20164/20184 linux/x86_64 Down /tidb-data/test/tikv-20160 /tidb-deploy/test/tikv-20160
172.16.5.208:20164 tikv 172.16.5.208 20164/20184 linux/x86_64 Down /tidb-data/test/tikv-20160 /tidb-deploy/test/tikv-20160
4. Start the cluster
tiup cluster start tidb-test
5. After starting, verify the cluster status
1. tiup cluster display tidb-test
# If every status is Up, the cluster started successfully
2. Connect to the database to confirm it accepts connections (the host should be one of the tidb_servers from the topology; the port is the customized 4004)
mysql -u root -h 172.16.5.52 -P 4004
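Once connected, a quick sanity query confirms the server really is TiDB: tidb_version() is a built-in TiDB function that reports the component versions. The sketch below only builds and prints the command (host and port taken from this post's topology; adjust as needed):

```shell
#!/bin/sh
# Remove the echo to run the query for real; requires the mysql client.
HOST=172.16.5.52
PORT=4004
CMD="mysql -u root -h $HOST -P $PORT -e 'SELECT tidb_version()\G'"
echo "$CMD"
```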