TiDB Installation and Deployment (Offline Method)
Environment
OS: CentOS 7
DB: TiDB v6.0.0
The machines and their role assignments are as follows:
192.168.1.118 pd,tidb,tikv,tiflash,monitoring,grafana,alertmanager
192.168.1.85 pd,tidb,tikv,tiflash
192.168.1.134 pd,tidb,tikv
1. Download the installation packages
I downloaded the Community Edition here:
https://pingcap.com/zh/product-community/
The downloaded packages are listed below; I placed them on the 192.168.1.118 machine.
[root@localhost tidb]# ls -1
tidb-community-server-v6.0.0-linux-amd64.tar.gz
tidb-community-toolkit-v6.0.0-linux-amd64.tar.gz
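Optionally, verify the integrity of the downloads before going any further. A small precautionary check (compare the output against the checksums published on the download page, if the page provides them; this is not a required step):

[root@localhost tidb]# sha256sum tidb-community-server-v6.0.0-linux-amd64.tar.gz
[root@localhost tidb]# sha256sum tidb-community-toolkit-v6.0.0-linux-amd64.tar.gz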
2. Extract the package
Run on 192.168.1.118:
[root@localhost tidb]# tar -xvf tidb-community-server-v6.0.0-linux-amd64.tar.gz
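The toolkit package is not used in the steps below, but if you later need tools such as dumpling or tidb-lightning offline, my understanding is that it can be merged into the same local mirror once the TiUP environment from step 3 is in place. Roughly (treat this as a pointer to the official offline deployment guide rather than a tested step):

[root@localhost tidb]# tar -xvf tidb-community-toolkit-v6.0.0-linux-amd64.tar.gz
[root@localhost tidb]# cd tidb-community-server-v6.0.0-linux-amd64
[root@localhost tidb-community-server-v6.0.0-linux-amd64]# tiup mirror merge ../tidb-community-toolkit-v6.0.0-linux-amd64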
3. Deploy the TiUP environment
Run on 192.168.1.118:
[root@localhost tidb-community-server-v6.0.0-linux-amd64]# sh local_install.sh
Disable telemetry success
Successfully set mirror to /soft/tidb/tidb-community-server-v6.0.0-linux-amd64
Detected shell: bash
Shell profile: /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
1. source /root/.bash_profile
2. Have a try: tiup playground
===============================================
As prompted, open a new terminal:
[root@localhost ~]# which tiup
/root/.tiup/bin/tiup
Or run the following in the current terminal:
[root@localhost tidb-community-server-v6.0.0-linux-amd64]# source /root/.bash_profile
[root@localhost tidb-community-server-v6.0.0-linux-amd64]# which tiup
/root/.tiup/bin/tiup
local_install.sh automatically switches the mirror to the local directory, so the installation does not need external network access; tiup mirror show displays the mirror address:
[root@localhost tidb-community-server-v6.0.0-linux-amd64]# tiup mirror show
/soft/tidb/tidb-community-server-v6.0.0-linux-amd64
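To double-check what the local mirror can serve offline, tiup can list the components and versions it knows about. For example (a quick sanity check; the exact output depends on the package contents):

[root@localhost tidb-community-server-v6.0.0-linux-amd64]# tiup list tidb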
4. Edit the configuration file
Run on 192.168.1.118.
Generate the initial topology template:
[root@localhost tidb]# tiup cluster template > topology.yaml
tiup is checking updates for component cluster ...
A new version of cluster is available:
The latest version: v1.9.4
Local installed version:
Update current component: tiup update cluster
Update all components: tiup update --all
The component `cluster` version is not installed; downloading from repository.
Starting component `cluster`: /root/.tiup/components/cluster/v1.9.4/tiup-cluster /root/.tiup/components/cluster/v1.9.4/tiup-cluster template
[root@localhost tidb]# pwd
/soft/tidb
Edit the template and change the IP addresses to those of your own deployment.
The configuration file is as follows:
[root@localhost tidb]# more topology.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  # # The user who runs the tidb cluster.
  user: "tidb"
  # # group is used to specify the group name the user belong to if it's not the same as user.
  # group: "tidb"
  # # SSH port of servers in the managed cluster.
  ssh_port: 22
  # # Storage directory for cluster deployment files, startup scripts, and configuration files.
  deploy_dir: "/tidb-deploy"
  # # TiDB Cluster data storage directory
  data_dir: "/tidb-data"
  # # Supported values: "amd64", "arm64" (default: "amd64")
  arch: "amd64"
  # # Resource Control is used to limit the resource of an instance.
  # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html
  # # Supports using instance-level `resource_control` to override global `resource_control`.
  # resource_control:
  #   # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#MemoryLimit=bytes
  #   memory_limit: "2G"
  #   # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#CPUQuota=
  #   # # The percentage specifies how much CPU time the unit shall get at maximum, relative to the total CPU time available on one CPU. Use values > 100% for allotting CPU time on more than one CPU.
  #   # # Example: CPUQuota=200% ensures that the executed processes will never get more than two CPU time.
  #   cpu_quota: "200%"
  #   # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#IOReadBandwidthMax=device%20bytes
  #   io_read_bandwidth_max: "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"
  #   io_write_bandwidth_max: "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"

# # Monitored variables are applied to all the machines.
monitored:
  # # The communication port for reporting system information of each node in the TiDB cluster.
  node_exporter_port: 9100
  # # Blackbox_exporter communication port, used for TiDB cluster port monitoring.
  blackbox_exporter_port: 9115
  # # Storage directory for deployment files, startup scripts, and configuration files of monitoring components.
  # deploy_dir: "/tidb-deploy/monitored-9100"
  # # Data storage directory of monitoring components.
  # data_dir: "/tidb-data/monitored-9100"
  # # Log storage directory of the monitoring component.
  # log_dir: "/tidb-deploy/monitored-9100/log"

# # Server configs are used to specify the runtime configuration of TiDB components.
# # All configuration items can be found in TiDB docs:
# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/
# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/
# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/
# # - TiFlash: https://docs.pingcap.com/tidb/stable/tiflash-configuration
# #
# # All configuration items use points to represent the hierarchy, e.g:
# #   readpool.storage.use-unified-pool
# #           ^       ^
# # - example: https://github.com/pingcap/tiup/blob/master/examples/topology.example.yaml.
# # You can overwrite this configuration via the instance-level `config` field.
# server_configs:
#   tidb:
#   tikv:
#   pd:
#   tiflash:
#   tiflash-learner:

# # Server configs are used to specify the configuration of PD Servers.
pd_servers:
  # # The ip address of the PD Server.
  - host: 192.168.1.118
    # # SSH port of the server.
    # ssh_port: 22
    # # PD Server name
    # name: "pd-1"
    # # communication port for TiDB Servers to connect.
    # client_port: 2379
    # # Communication port among PD Server nodes.
    # peer_port: 2380
    # # PD Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/pd-2379"
    # # PD Server data storage directory.
    # data_dir: "/tidb-data/pd-2379"
    # # PD Server log file storage directory.
    # log_dir: "/tidb-deploy/pd-2379/log"
    # # numa node bindings.
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.pd` values.
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  - host: 192.168.1.85
    # ssh_port: 22
    # name: "pd-1"
    # client_port: 2379
    # peer_port: 2380
    # deploy_dir: "/tidb-deploy/pd-2379"
    # data_dir: "/tidb-data/pd-2379"
    # log_dir: "/tidb-deploy/pd-2379/log"
    # numa_node: "0,1"
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  - host: 192.168.1.134
    # ssh_port: 22
    # name: "pd-1"
    # client_port: 2379
    # peer_port: 2380
    # deploy_dir: "/tidb-deploy/pd-2379"
    # data_dir: "/tidb-data/pd-2379"
    # log_dir: "/tidb-deploy/pd-2379/log"
    # numa_node: "0,1"
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000

# # Server configs are used to specify the configuration of TiDB Servers.
tidb_servers:
  # # The ip address of the TiDB Server.
  - host: 192.168.1.118
    # # SSH port of the server.
    # ssh_port: 22
    # # The port for clients to access the TiDB cluster.
    # port: 4000
    # # TiDB Server status API port.
    # status_port: 10080
    # # TiDB Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/tidb-4000"
    # # TiDB Server log file storage directory.
    # log_dir: "/tidb-deploy/tidb-4000/log"
  # # The ip address of the TiDB Server.
  - host: 192.168.1.85
    # ssh_port: 22
    # port: 4000
    # status_port: 10080
    # deploy_dir: "/tidb-deploy/tidb-4000"
    # log_dir: "/tidb-deploy/tidb-4000/log"
  - host: 192.168.1.134
    # ssh_port: 22
    # port: 4000
    # status_port: 10080
    # deploy_dir: "/tidb-deploy/tidb-4000"
    # log_dir: "/tidb-deploy/tidb-4000/log"

# # Server configs are used to specify the configuration of TiKV Servers.
tikv_servers:
  # # The ip address of the TiKV Server.
  - host: 192.168.1.118
    # # SSH port of the server.
    # ssh_port: 22
    # # TiKV Server communication port.
    # port: 20160
    # # TiKV Server status API port.
    # status_port: 20180
    # # TiKV Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # # TiKV Server data storage directory.
    # data_dir: "/tidb-data/tikv-20160"
    # # TiKV Server log file storage directory.
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # # The following configs are used to overwrite the `server_configs.tikv` values.
    # config:
    #   log.level: warn
  # # The ip address of the TiKV Server.
  - host: 192.168.1.85
    # ssh_port: 22
    # port: 20160
    # status_port: 20180
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # data_dir: "/tidb-data/tikv-20160"
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # config:
    #   log.level: warn
  - host: 192.168.1.134
    # ssh_port: 22
    # port: 20160
    # status_port: 20180
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # data_dir: "/tidb-data/tikv-20160"
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # config:
    #   log.level: warn

# # Server configs are used to specify the configuration of TiFlash Servers.
tiflash_servers:
  # # The ip address of the TiFlash Server.
  - host: 192.168.1.118
    # # SSH port of the server.
    # ssh_port: 22
    # # TiFlash TCP Service port.
    # tcp_port: 9000
    # # TiFlash HTTP Service port.
    # http_port: 8123
    # # TiFlash raft service and coprocessor service listening address.
    # flash_service_port: 3930
    # # TiFlash Proxy service port.
    # flash_proxy_port: 20170
    # # TiFlash Proxy metrics port.
    # flash_proxy_status_port: 20292
    # # TiFlash metrics port.
    # metrics_port: 8234
    # # TiFlash Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: /tidb-deploy/tiflash-9000
    ## With cluster version >= v4.0.9 and you want to deploy a multi-disk TiFlash node, it is recommended to
    ## check config.storage.* for details. The data_dir will be ignored if you defined those configurations.
    ## Setting data_dir to a ','-joined string is still supported but deprecated.
    ## Check https://docs.pingcap.com/tidb/stable/tiflash-configuration#multi-disk-deployment for more details.
    # # TiFlash Server data storage directory.
    # data_dir: /tidb-data/tiflash-9000
    # # TiFlash Server log file storage directory.
    # log_dir: /tidb-deploy/tiflash-9000/log
  # # The ip address of the TiKV Server.
  - host: 192.168.1.85
    # ssh_port: 22
    # tcp_port: 9000
    # http_port: 8123
    # flash_service_port: 3930
    # flash_proxy_port: 20170
    # flash_proxy_status_port: 20292
    # metrics_port: 8234
    # deploy_dir: /tidb-deploy/tiflash-9000
    # data_dir: /tidb-data/tiflash-9000
    # log_dir: /tidb-deploy/tiflash-9000/log

# # Server configs are used to specify the configuration of Prometheus Server.
monitoring_servers:
  # # The ip address of the Monitoring Server.
  - host: 192.168.1.118
    # # SSH port of the server.
    # ssh_port: 22
    # # Prometheus Service communication port.
    # port: 9090
    # # ng-monitoring servive communication port
    # ng_port: 12020
    # # Prometheus deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/prometheus-8249"
    # # Prometheus data storage directory.
    # data_dir: "/tidb-data/prometheus-8249"
    # # Prometheus log file storage directory.
    # log_dir: "/tidb-deploy/prometheus-8249/log"

# # Server configs are used to specify the configuration of Grafana Servers.
grafana_servers:
  # # The ip address of the Grafana Server.
  - host: 192.168.1.118
    # # Grafana web port (browser access)
    # port: 3000
    # # Grafana deployment file, startup script, configuration file storage directory.
    # deploy_dir: /tidb-deploy/grafana-3000

# # Server configs are used to specify the configuration of Alertmanager Servers.
alertmanager_servers:
  # # The ip address of the Alertmanager Server.
  - host: 192.168.1.118
    # # SSH port of the server.
    # ssh_port: 22
    # # Alertmanager web service port.
    # web_port: 9093
    # # Alertmanager communication port.
    # cluster_port: 9094
    # # Alertmanager deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/alertmanager-9093"
    # # Alertmanager data storage directory.
    # data_dir: "/tidb-data/alertmanager-9093"
    # # Alertmanager log file storage directory.
    # log_dir: "/tidb-deploy/alertmanager-9093/log"
5. Create the tidb user (run on every node)
[root@pxc04 /]# groupadd tidb
[root@pxc04 /]# useradd -g tidb -G tidb -s /bin/bash tidb
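If root can already SSH to the other hosts (passwordless root SSH is set up in the next step anyway), a small loop saves repeating the two commands above on each node; this is just a convenience sketch, running them manually on every host works equally well:

for h in 192.168.1.118 192.168.1.85 192.168.1.134; do
  ssh root@$h 'groupadd tidb; useradd -g tidb -G tidb -s /bin/bash tidb'
done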
6. Check whether the systems meet the TiDB installation requirements
Run on 192.168.1.118.
After editing the configuration, use tiup cluster check ./topology.yaml to check whether the systems meet the TiDB installation requirements.
Note: the TiUP machine needs passwordless SSH login to the machines hosting the other roles; Pass and Warn results are acceptable, but Fail items must be fixed.
[root@localhost tidb]# tiup cluster check ./topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.9.4/tiup-cluster /root/.tiup/components/cluster/v1.9.4/tiup-cluster check ./topology.yaml
+ Detect CPU Arch
  - Detecting node 192.168.1.118 ... Done
  - Detecting node 192.168.1.85 ... Done
  - Detecting node 192.168.1.134 ... Error

Error: failed to fetch cpu arch: executor.ssh.execute_failed: Failed to execute command over SSH for 'root@192.168.1.134:22' {ssh_stderr: , ssh_stdout: , ssh_command: export LANG=C; PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin uname -m}, cause: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2022-05-18-10-26-15.log.
The check is executed under the root account here, so passwordless SSH (equivalent connections) for root must be set up to every node; you can refer to the following document:
http://blog.chinaunix.net/uid-77311-id-5757281.html
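A minimal sketch of that setup, assuming the standard ssh-keygen/ssh-copy-id tools are available on 192.168.1.118 (you will be prompted once for each node's root password):

[root@localhost ~]# ssh-keygen -t rsa              # accept the defaults, empty passphrase
[root@localhost ~]# ssh-copy-id root@192.168.1.118
[root@localhost ~]# ssh-copy-id root@192.168.1.85
[root@localhost ~]# ssh-copy-id root@192.168.1.134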
Continue with the pre-installation check:
[root@localhost tidb]# tiup cluster check ./topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.9.4/tiup-cluster /root/.tiup/components/cluster/v1.9.4/tiup-cluster check ./topology.yaml
+ Detect CPU Arch
  - Detecting node 192.168.1.118 ... Done
  - Detecting node 192.168.1.85 ... Done
  - Detecting node 192.168.1.134 ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information
+ Collect basic system information
+ Collect basic system information
  - Getting system info of 192.168.1.118:22 ... Done
  - Getting system info of 192.168.1.85:22 ... Done
  - Getting system info of 192.168.1.134:22 ... Done
+ Check system requirements
  - Checking node 192.168.1.118 ... ? CheckSys: host=192.168.1.118 type=exist
+ Check system requirements
+ Check system requirements
  - Checking node 192.168.1.118 ... ? Shell: host=192.168.1.118, sudo=false, command=`cat /etc/security/limits.conf`
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
  - Checking node 192.168.1.118 ... Done
  - Checking node 192.168.1.85 ... Done
  - Checking node 192.168.1.118 ... Done
  - Checking node 192.168.1.85 ... Done
  - Checking node 192.168.1.134 ... Done
  - Checking node 192.168.1.118 ... Done
  - Checking node 192.168.1.85 ... Done
  - Checking node 192.168.1.134 ... Done
  - Checking node 192.168.1.118 ... Done
  - Checking node 192.168.1.85 ... Done
  - Checking node 192.168.1.134 ... Done
  - Checking node 192.168.1.118 ... Done
  - Checking node 192.168.1.118 ... Done
  - Checking node 192.168.1.118 ... Done
+ Cleanup check files
  - Cleanup check files on 192.168.1.118:22 ... Done
  - Cleanup check files on 192.168.1.85:22 ... Done
  - Cleanup check files on 192.168.1.118:22 ... Done
  - Cleanup check files on 192.168.1.85:22 ... Done
  - Cleanup check files on 192.168.1.134:22 ... Done
  - Cleanup check files on 192.168.1.118:22 ... Done
  - Cleanup check files on 192.168.1.85:22 ... Done
  - Cleanup check files on 192.168.1.134:22 ... Done
  - Cleanup check files on 192.168.1.118:22 ... Done
  - Cleanup check files on 192.168.1.85:22 ... Done
  - Cleanup check files on 192.168.1.134:22 ... Done
  - Cleanup check files on 192.168.1.118:22 ... Done
  - Cleanup check files on 192.168.1.118:22 ... Done
  - Cleanup check files on 192.168.1.118:22 ... Done
Node           Check           Result  Message
----           -----           ------  -------
192.168.1.118  swap            Fail    swap is enabled, please disable it for best performance
192.168.1.118  disk            Warn    mount point / does not have 'noatime' option set
192.168.1.118  selinux         Pass    SELinux is disabled
192.168.1.118  limits          Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
192.168.1.118  limits          Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
192.168.1.118  limits          Fail    soft limit of 'stack' for user 'tidb' is not set or too low
192.168.1.118  listening-port  Fail    port 8123 is already in use
192.168.1.118  listening-port  Fail    port 9000 is already in use
192.168.1.118  os-version      Pass    OS is CentOS Linux 7 (Core) 7.5.1804
192.168.1.118  memory          Pass    memory size is 12288MB
192.168.1.118  disk            Fail    multiple components tikv:/tidb-data/tikv-20160,tiflash:/tidb-data/tiflash-9000 are using the same partition 192.168.1.118:/ as data dir
192.168.1.118  sysctl          Fail    net.core.somaxconn = 128, should be greater than 32768
192.168.1.118  sysctl          Fail    net.ipv4.tcp_syncookies = 1, should be 0
192.168.1.118  sysctl          Fail    vm.swappiness = 60, should be 0
192.168.1.118  thp             Fail    THP is enabled, please disable it for best performance
192.168.1.118  command         Pass    numactl: policy: default
192.168.1.118  cpu-cores       Pass    number of CPU cores / threads: 2
192.168.1.118  cpu-governor    Warn    Unable to determine current CPU frequency governor policy
192.168.1.118  network         Fail    network speed of ens3 is 100MB too low, needs 1GB or more
192.168.1.85   os-version      Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.1.85   memory          Pass    memory size is 8192MB
192.168.1.85   disk            Warn    mount point / does not have 'noatime' option set
192.168.1.85   disk            Fail    multiple components tikv:/tidb-data/tikv-20160,tiflash:/tidb-data/tiflash-9000 are using the same partition 192.168.1.85:/ as data dir
192.168.1.85   command         Pass    numactl: policy: default
192.168.1.85   swap            Fail    swap is enabled, please disable it for best performance
192.168.1.85   selinux         Fail    SELinux is not disabled
192.168.1.85   listening-port  Fail    port 8123 is already in use
192.168.1.85   listening-port  Fail    port 9000 is already in use
192.168.1.85   cpu-cores       Pass    number of CPU cores / threads: 4
192.168.1.85   sysctl          Fail    net.core.somaxconn = 128, should be greater than 32768
192.168.1.85   sysctl          Fail    net.ipv4.tcp_syncookies = 1, should be 0
192.168.1.85   sysctl          Fail    vm.swappiness = 30, should be 0
192.168.1.85   thp             Fail    THP is enabled, please disable it for best performance
192.168.1.85   cpu-governor    Warn    Unable to determine current CPU frequency governor policy
192.168.1.85   network         Fail    network speed of ens3 is 100MB too low, needs 1GB or more
192.168.1.85   limits          Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
192.168.1.85   limits          Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
192.168.1.85   limits          Fail    soft limit of 'stack' for user 'tidb' is not set or too low
192.168.1.134  sysctl          Fail    net.core.somaxconn = 128, should be greater than 32768
192.168.1.134  sysctl          Fail    net.ipv4.tcp_syncookies = 1, should be 0
192.168.1.134  sysctl          Fail    vm.swappiness = 60, should be 0
192.168.1.134  selinux         Pass    SELinux is disabled
192.168.1.134  thp             Fail    THP is enabled, please disable it for best performance
192.168.1.134  os-version      Pass    OS is CentOS Linux 7 (Core) 7.5.1804
192.168.1.134  network         Fail    network speed of ens3 is 100MB too low, needs 1GB or more
192.168.1.134  disk            Warn    mount point / does not have 'noatime' option set
192.168.1.134  memory          Pass    memory size is 12288MB
192.168.1.134  limits          Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
192.168.1.134  limits          Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
192.168.1.134  limits          Fail    soft limit of 'stack' for user 'tidb' is not set or too low
192.168.1.134  command         Fail    numactl not usable, bash: numactl: command not found
192.168.1.134  cpu-cores       Pass    number of CPU cores / threads: 2
192.168.1.134  cpu-governor    Warn    Unable to determine current CPU frequency governor policy
192.168.1.134  swap            Fail    swap is enabled, please disable it for best performance
Quite a few warnings and failures are reported here. Try the following command to fix them automatically:
tiup cluster check ./topology.yaml --apply
The --apply option attempts to repair the Fail items automatically.
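Not every Fail item can be fixed this way; items such as the ports already in use, TiKV and TiFlash sharing the same data partition, and the 100Mb NIC have to be resolved by hand (or accepted in a test environment). A rough sketch of the usual manual fixes for the remaining items, run as root on each node and based on the check messages above; adjust the values to your environment:

# disable swap (also comment out the swap entry in /etc/fstab)
swapoff -a
echo "vm.swappiness = 0" >> /etc/sysctl.conf

# kernel parameters flagged by the check
echo "net.core.somaxconn = 32768"  >> /etc/sysctl.conf
echo "net.ipv4.tcp_syncookies = 0" >> /etc/sysctl.conf
sysctl -p

# disable Transparent Huge Pages until the next reboot
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# raise resource limits for the tidb user
cat >> /etc/security/limits.conf <<EOF
tidb  soft  nofile  1000000
tidb  hard  nofile  1000000
tidb  soft  stack   32768
EOF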
7. Install and deploy
I run this on the 192.168.1.118 machine:
tiup cluster deploy <cluster-name> <version> <topology.yaml>
<cluster-name> is the name of the new cluster; it must not be the same as an existing cluster.
<version> is the TiDB cluster version to deploy, for example v4.0.9.
<topology.yaml> is the topology file prepared earlier.
[root@localhost tidb]# tiup cluster deploy mytidb_cluster v6.0.0 ./topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.9.4/tiup-cluster /root/.tiup/components/cluster/v1.9.4/tiup-cluster deploy mytidb_cluster v6.0.0 ./topology.yaml
+ Detect CPU Arch
  - Detecting node 192.168.1.118 ... Done
  - Detecting node 192.168.1.85 ... Done
  - Detecting node 192.168.1.134 ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    mytidb_cluster
Cluster version: v6.0.0
Role          Host           Ports                            OS/Arch       Directories
----          ----           -----                            -------       -----------
pd            192.168.1.118  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            192.168.1.85   2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            192.168.1.134  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          192.168.1.118  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          192.168.1.85   20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          192.168.1.134  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb          192.168.1.118  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tidb          192.168.1.85   4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tidb          192.168.1.134  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash       192.168.1.118  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
tiflash       192.168.1.85   9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus    192.168.1.118  9090/12020                       linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       192.168.1.118  3000                             linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  192.168.1.118  9093/9094                        linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v6.0.0 (linux/amd64) ... Done
  - Download tikv:v6.0.0 (linux/amd64) ... Done
  - Download tidb:v6.0.0 (linux/amd64) ... Done
  - Download tiflash:v6.0.0 (linux/amd64) ... Done
  - Download prometheus:v6.0.0 (linux/amd64) ... Done
  - Download grafana:v6.0.0 (linux/amd64) ... Done
  - Download alertmanager: (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.1.118:22 ... Done
  - Prepare 192.168.1.85:22 ... Done
  - Prepare 192.168.1.134:22 ... Done
+ Deploy TiDB instance
  - Copy pd -> 192.168.1.118 ... Done
  - Copy pd -> 192.168.1.85 ... Done
  - Copy pd -> 192.168.1.134 ... Done
  - Copy tikv -> 192.168.1.118 ... Done
  - Copy tikv -> 192.168.1.85 ... Done
  - Copy tikv -> 192.168.1.134 ... Done
  - Copy tidb -> 192.168.1.118 ... Done
  - Copy tidb -> 192.168.1.85 ... Done
  - Copy tidb -> 192.168.1.134 ... Done
  - Copy tiflash -> 192.168.1.118 ... Done
  - Copy tiflash -> 192.168.1.85 ... Done
  - Copy prometheus -> 192.168.1.118 ... Done
  - Copy grafana -> 192.168.1.118 ... Done
  - Copy alertmanager -> 192.168.1.118 ... Done
  - Deploy node_exporter -> 192.168.1.118 ... Done
  - Deploy node_exporter -> 192.168.1.85 ... Done
  - Deploy node_exporter -> 192.168.1.134 ... Done
  - Deploy blackbox_exporter -> 192.168.1.118 ... Done
  - Deploy blackbox_exporter -> 192.168.1.85 ... Done
  - Deploy blackbox_exporter -> 192.168.1.134 ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> 192.168.1.118:2379 ... Done
  - Generate config pd -> 192.168.1.85:2379 ... Done
  - Generate config pd -> 192.168.1.134:2379 ... Done
  - Generate config tikv -> 192.168.1.118:20160 ... Done
  - Generate config tikv -> 192.168.1.85:20160 ... Done
  - Generate config tikv -> 192.168.1.134:20160 ... Done
  - Generate config tidb -> 192.168.1.118:4000 ... Done
  - Generate config tidb -> 192.168.1.85:4000 ... Done
  - Generate config tidb -> 192.168.1.134:4000 ... Done
  - Generate config tiflash -> 192.168.1.118:9000 ... Done
  - Generate config tiflash -> 192.168.1.85:9000 ... Done
  - Generate config prometheus -> 192.168.1.118:9090 ... Done
  - Generate config grafana -> 192.168.1.118:3000 ... Done
  - Generate config alertmanager -> 192.168.1.118:9093 ... Done
+ Init monitor configs
  - Generate config node_exporter -> 192.168.1.118 ... Done
  - Generate config node_exporter -> 192.168.1.85 ... Done
  - Generate config node_exporter -> 192.168.1.134 ... Done
  - Generate config blackbox_exporter -> 192.168.1.85 ... Done
  - Generate config blackbox_exporter -> 192.168.1.134 ... Done
  - Generate config blackbox_exporter -> 192.168.1.118 ... Done
Enabling component pd
    Enabling instance 192.168.1.134:2379
    Enabling instance 192.168.1.118:2379
    Enabling instance 192.168.1.85:2379
    Enable instance 192.168.1.118:2379 success
    Enable instance 192.168.1.85:2379 success
    Enable instance 192.168.1.134:2379 success
Enabling component tikv
    Enabling instance 192.168.1.118:20160
    Enabling instance 192.168.1.85:20160
    Enabling instance 192.168.1.134:20160
    Enable instance 192.168.1.85:20160 success
    Enable instance 192.168.1.118:20160 success
    Enable instance 192.168.1.134:20160 success
Enabling component tidb
    Enabling instance 192.168.1.134:4000
    Enabling instance 192.168.1.118:4000
    Enabling instance 192.168.1.85:4000
    Enable instance 192.168.1.118:4000 success
    Enable instance 192.168.1.85:4000 success
    Enable instance 192.168.1.134:4000 success
Enabling component tiflash
    Enabling instance 192.168.1.85:9000
    Enabling instance 192.168.1.118:9000
    Enable instance 192.168.1.118:9000 success
    Enable instance 192.168.1.85:9000 success
Enabling component prometheus
    Enabling instance 192.168.1.118:9090
    Enable instance 192.168.1.118:9090 success
Enabling component grafana
    Enabling instance 192.168.1.118:3000
    Enable instance 192.168.1.118:3000 success
Enabling component alertmanager
    Enabling instance 192.168.1.118:9093
    Enable instance 192.168.1.118:9093 success
Enabling component node_exporter
    Enabling instance 192.168.1.85
    Enabling instance 192.168.1.134
    Enabling instance 192.168.1.118
    Enable 192.168.1.85 success
    Enable 192.168.1.118 success
    Enable 192.168.1.134 success
Enabling component blackbox_exporter
    Enabling instance 192.168.1.85
    Enabling instance 192.168.1.134
    Enabling instance 192.168.1.118
    Enable 192.168.1.118 success
    Enable 192.168.1.85 success
    Enable 192.168.1.134 success
Cluster `mytidb_cluster` deployed successfully, you can start it with command: `tiup cluster start mytidb_cluster --init`
8. Start the cluster
[root@localhost tidb]# tiup cluster start mytidb_cluster --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.9.4/tiup-cluster /root/.tiup/components/cluster/v1.9.4/tiup-cluster start mytidb_cluster --init
Starting cluster mytidb_cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.85
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.118
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.85
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.118
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.85
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.118
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.118
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.118
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.118
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.85
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.118
+ [ Serial ] - StartCluster
Starting component pd
    Starting instance 192.168.1.85:2379
    Starting instance 192.168.1.118:2379
    Starting instance 192.168.1.134:2379
    Start instance 192.168.1.118:2379 success
    Start instance 192.168.1.85:2379 success
    Start instance 192.168.1.134:2379 success
Starting component tikv
    Starting instance 192.168.1.134:20160
    Starting instance 192.168.1.118:20160
    Starting instance 192.168.1.85:20160
    Start instance 192.168.1.85:20160 success
    Start instance 192.168.1.118:20160 success
    Start instance 192.168.1.134:20160 success
Starting component tidb
    Starting instance 192.168.1.134:4000
    Starting instance 192.168.1.118:4000
    Starting instance 192.168.1.85:4000
    Start instance 192.168.1.134:4000 success
    Start instance 192.168.1.85:4000 success
    Start instance 192.168.1.118:4000 success
Starting component tiflash
    Starting instance 192.168.1.85:9000
    Starting instance 192.168.1.118:9000
    Start instance 192.168.1.118:9000 success
    Start instance 192.168.1.85:9000 success
Starting component prometheus
    Starting instance 192.168.1.118:9090
    Start instance 192.168.1.118:9090 success
Starting component grafana
    Starting instance 192.168.1.118:3000
    Start instance 192.168.1.118:3000 success
Starting component alertmanager
    Starting instance 192.168.1.118:9093
    Start instance 192.168.1.118:9093 success
Starting component node_exporter
    Starting instance 192.168.1.85
    Starting instance 192.168.1.134
    Starting instance 192.168.1.118
    Start 192.168.1.118 success
    Start 192.168.1.85 success
    Start 192.168.1.134 success
Starting component blackbox_exporter
    Starting instance 192.168.1.85
    Starting instance 192.168.1.134
    Starting instance 192.168.1.118
    Start 192.168.1.85 success
    Start 192.168.1.134 success
    Start 192.168.1.118 success
+ [ Serial ] - UpdateTopology: cluster=mytidb_cluster
Started cluster `mytidb_cluster` successfully
The root password of TiDB database has been changed.
The new password is: '56tf*ZbYQ93_dB^0@4'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
Note that a random initial root password is generated here; you can change it yourself after logging in to the database.
9. Connect
The password is the one generated above.
[root@localhost bin]# /opt/mysql5730/bin/mysql -h 192.168.1.118 -P4000 -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 407
Server version: 5.7.25-TiDB-v6.0.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.00 sec)
You can also log in through the other nodes:
[root@localhost bin]# /opt/mysql5730/bin/mysql -h 192.168.1.85 -P4000 -uroot -p
[root@localhost bin]# /opt/mysql5730/bin/mysql -h 192.168.1.134 -P4000 -uroot -p
10. View the cluster status
[root@localhost tidb]# tiup cluster display mytidb_cluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.9.4/tiup-cluster /root/.tiup/components/cluster/v1.9.4/tiup-cluster display mytidb_cluster
Cluster type:       tidb
Cluster name:       mytidb_cluster
Cluster version:    v6.0.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.1.118:2379/dashboard
ID                   Role          Host           Ports                            OS/Arch       Status  Data Dir                      Deploy Dir
--                   ----          ----           -----                            -------       ------  --------                      ----------
192.168.1.118:9093   alertmanager  192.168.1.118  9093/9094                        linux/x86_64  Up      /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
192.168.1.118:3000   grafana       192.168.1.118  3000                             linux/x86_64  Up      -                             /tidb-deploy/grafana-3000
192.168.1.118:2379   pd            192.168.1.118  2379/2380                        linux/x86_64  Up|UI   /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.1.134:2379   pd            192.168.1.134  2379/2380                        linux/x86_64  Up      /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.1.85:2379    pd            192.168.1.85   2379/2380                        linux/x86_64  Up|L    /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.1.118:9090   prometheus    192.168.1.118  9090/12020                       linux/x86_64  Up      /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
192.168.1.118:4000   tidb          192.168.1.118  4000/10080                       linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.1.134:4000   tidb          192.168.1.134  4000/10080                       linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.1.85:4000    tidb          192.168.1.85   4000/10080                       linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.1.118:9000   tiflash       192.168.1.118  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /tidb-data/tiflash-9000       /tidb-deploy/tiflash-9000
192.168.1.85:9000    tiflash       192.168.1.85   9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /tidb-data/tiflash-9000       /tidb-deploy/tiflash-9000
192.168.1.118:20160  tikv          192.168.1.118  20160/20180                      linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.1.134:20160  tikv          192.168.1.134  20160/20180                      linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.1.85:20160   tikv          192.168.1.85   20160/20180                      linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 14
11. Stop and start the cluster
I run these on one of the nodes, 192.168.1.118. Stop the cluster:
[root@localhost tidb]#tiup cluster stop mytidb_cluster
Start the cluster:
[root@localhost tidb]#tiup cluster start mytidb_cluster
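TiUP can also operate on part of the cluster. Two examples I find handy, assuming the -R (role) and -N (node) filters available in recent tiup cluster versions:

tiup cluster restart mytidb_cluster -R tidb                # restart only the tidb servers
tiup cluster stop mytidb_cluster -N 192.168.1.85:20160     # stop a single tikv instance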
12. Change the root account password
View the existing accounts first:
select user,host,authentication_string from mysql.user;
Change the password to mysql:
mysql> set password for 'root'@'%' = 'mysql';
Query OK, 0 rows affected (0.12 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.04 sec)
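An equivalent statement, if you prefer the ALTER USER syntax (also accepted by TiDB's MySQL-compatible layer, as far as I know):

mysql> alter user 'root'@'%' identified by 'mysql';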
13. Create a database
create database db_test DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
Create an account and grant it privileges on the database:
mysql> create user hxl@'%' identified by 'mysql';
mysql> grant all privileges on db_test.* TO hxl@'%';
mysql> flush privileges;
Now you can connect through client tools such as Navicat.
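A quick sanity check from the command line with the new account, using the db_test database and hxl user created above:

[root@localhost bin]# /opt/mysql5730/bin/mysql -h 192.168.1.118 -P4000 -uhxl -p db_test
mysql> create table t_demo (id int primary key, name varchar(32));
mysql> insert into t_demo values (1, 'hello tidb');
mysql> select * from t_demo;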
14. Everyday commands
List the clusters:
tiup cluster list
View the cluster status:
tiup cluster display mytidb_cluster
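Two more commands I use routinely; they exist in the tiup cluster versions I have worked with, but check tiup cluster --help on your installation:

tiup cluster edit-config mytidb_cluster     # edit the cluster configuration
tiup cluster reload mytidb_cluster          # push the edited configuration and restart the affected components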
15. Database monitoring interface
http://192.168.1.118:2379/dashboard/#/overview
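The Grafana monitoring UI deployed in the topology above should also be reachable in a browser, on port 3000 of 192.168.1.118 as configured in the grafana_servers section:

http://192.168.1.118:3000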