
TiDB Database Cluster Deployment

1. TiDB Introduction

1.1 TiDB Overview

TiDB is an open-source distributed HTAP (Hybrid Transactional and Analytical Processing) database designed by PingCAP, combining the best features of traditional RDBMSs and NoSQL stores. TiDB is MySQL-compatible, supports effectively unlimited horizontal scaling, and provides strong consistency and high availability. Its goal is to be a one-stop solution for both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) workloads.

TiDB has the following characteristics:

Highly MySQL-compatible

In most cases you can migrate from MySQL to TiDB without changing any code, and sharded MySQL clusters can be migrated in real time using TiDB's tooling.

Horizontal elastic scalability

TiDB scales out simply by adding new nodes: grow throughput or storage on demand and handle high-concurrency, high-volume workloads with ease.

Distributed transactions

TiDB fully supports standard ACID transactions.

True financial-grade high availability

Compared with traditional master-slave (M-S) replication, the Raft-based majority-election protocol provides financial-grade strong data consistency, and as long as a majority of replicas survive, failure recovery is automatic (auto-failover) with no manual intervention.

One-stop HTAP solution

TiDB is a typical row-store OLTP database that also delivers strong OLAP performance. Together with TiSpark it provides a one-stop HTAP solution: a single copy of the data serves both OLTP and OLAP, with no cumbersome traditional ETL process.

Cloud-native SQL database

TiDB is a database designed for the cloud. It supports public, private, and hybrid clouds, making deployment, configuration, and maintenance straightforward.

TiDB Server

TiDB Server receives SQL requests and handles the SQL logic: it locates, via PD, the TiKV addresses holding the data a query needs, exchanges data with TiKV, and returns the result. TiDB Server is stateless: it stores no data itself and only performs computation, so it can scale out without limit and be exposed behind a unified endpoint via a load-balancing component such as LVS, HAProxy, or F5.
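Because tidb-server is stateless, a plain TCP load balancer can front several instances. A minimal HAProxy sketch, assuming the two tidb-server hosts used later in this deployment (172.16.5.50/51, default port 4000) and an arbitrary front-end port; tune timeouts and health checks for production:

```text
listen tidb-cluster
    bind *:3390
    mode tcp
    balance leastconn
    server tidb1 172.16.5.50:4000 check inter 2000 rise 2 fall 3
    server tidb2 172.16.5.51:4000 check inter 2000 rise 2 fall 3
```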

PD Server

Placement Driver (PD) is the cluster's management module. It has three main jobs: first, it stores the cluster's metadata (which TiKV node holds a given Key); second, it schedules and load-balances the TiKV cluster (data migration, Raft group leader transfer, and so on); third, it allocates globally unique, monotonically increasing transaction IDs.
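The transaction IDs come from PD's timestamp oracle (TSO). As a rough illustration of how a single allocator can hand out globally unique, increasing IDs, here is a hypothetical Python sketch (not PD's actual implementation): a physical millisecond goes in the high bits of the timestamp and a per-millisecond logical counter in the low 18 bits.

```python
# Hypothetical TSO-style allocator: pack physical milliseconds (high bits)
# with a logical counter (low 18 bits) so every ID is unique and increasing.
LOGICAL_BITS = 18

class TSO:
    def __init__(self):
        self.physical = 0   # last physical millisecond seen
        self.logical = 0    # counter within that millisecond

    def allocate(self, now_ms):
        if now_ms > self.physical:
            self.physical, self.logical = now_ms, 0
        else:
            self.logical += 1   # same millisecond: bump the counter
        return (self.physical << LOGICAL_BITS) | self.logical

tso = TSO()
ts1 = tso.allocate(1000)
ts2 = tso.allocate(1000)   # same millisecond -> logical counter increments
ts3 = tso.allocate(1001)
assert ts1 < ts2 < ts3
```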

PD itself runs as a cluster and should be deployed with an odd number of nodes; at least 3 nodes are recommended in production.
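The odd-node recommendation follows from Raft's majority rule: a quorum is ⌊n/2⌋ + 1, so an even node count adds no extra failure tolerance. A quick check:

```python
# Raft quorum: a majority of n replicas must agree.
def quorum(n):
    return n // 2 + 1

# Failures tolerated while a quorum can still form.
def tolerated_failures(n):
    return n - quorum(n)

# 3 and 4 nodes both tolerate exactly 1 failure, so the 4th node buys nothing.
assert tolerated_failures(3) == tolerated_failures(4) == 1
assert tolerated_failures(5) == 2
```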

TiKV Server

TiKV Server stores the data. Externally, TiKV is a distributed, transactional Key-Value storage engine. The basic unit of storage is the Region: each Region holds the data for one Key Range (a left-closed, right-open interval from StartKey to EndKey), and each TiKV node serves multiple Regions. TiKV replicates via the Raft protocol for consistency and disaster recovery. Replicas are managed per Region: the copies of a Region on different nodes form a Raft Group. Load balancing of data across TiKV nodes is scheduled by PD, also at Region granularity.
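Because Regions partition the key space into left-closed, right-open ranges sorted by StartKey, finding the Region that owns a key is a binary search over StartKeys. A simplified sketch (illustrative only, not TiKV's routing code):

```python
import bisect

# Region i covers [start_keys[i], start_keys[i+1]); the last Region is
# unbounded above. Boundaries are kept sorted by StartKey.
start_keys = [b"", b"k100", b"k200", b"k300"]

def region_for(key):
    # Rightmost Region whose StartKey <= key (StartKey is inclusive).
    return bisect.bisect_right(start_keys, key) - 1

assert region_for(b"a") == 0
assert region_for(b"k100") == 1   # StartKey belongs to this Region
assert region_for(b"k199") == 1
assert region_for(b"k300") == 3
```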

TiSpark

TiSpark is the main component in TiDB for complex OLAP workloads. It runs Spark SQL directly on the TiDB storage layer, combining the strengths of the distributed TiKV cluster with the big-data ecosystem. With it, a single TiDB system supports both OLTP and OLAP, sparing users the burden of data synchronization.

1.2 Basic TiDB Operations

Creating, viewing, and dropping databases, indexes, and users

-- Databases and tables
CREATE DATABASE db_name [options];
CREATE DATABASE IF NOT EXISTS samp_db;
DROP DATABASE samp_db;
DROP TABLE IF EXISTS person;
-- Indexes
CREATE INDEX person_num ON person (number);
ALTER TABLE person ADD INDEX person_num (number);
CREATE UNIQUE INDEX person_num ON person (number);
-- Users and privileges
CREATE USER 'tiuser'@'localhost' IDENTIFIED BY '123456';
GRANT SELECT ON samp_db.* TO 'tiuser'@'localhost';
SHOW GRANTS FOR 'tiuser'@'localhost';
DROP USER 'tiuser'@'localhost';
GRANT ALL PRIVILEGES ON test.* TO 'xxxx'@'%' IDENTIFIED BY 'yyyyy';
REVOKE ALL PRIVILEGES ON `test`.* FROM 'genius'@'localhost';
SHOW GRANTS FOR 'root'@'%';
SELECT Insert_priv FROM mysql.user WHERE user='test' AND host='%';
FLUSH PRIVILEGES;

2. TiDB Ansible Deployment

2.1 Setting Up the Cluster Environment

The TiDB cluster is built on three physical machines: 172.16.5.50, 172.16.5.51, and 172.16.5.52, with 172.16.5.51 also serving as the control machine.

Software layout (matching the inventory below):

172.16.5.50 TiDB, PD, TiKV

172.16.5.51 TiDB, PD, TiKV

172.16.5.52 PD, TiKV

Install software on the control machine

Create the tidb user on the control machine and generate an ssh key:

# Create the tidb user
useradd -m -d /home/tidb tidb && passwd tidb
# Grant the tidb user passwordless sudo
visudo
tidb ALL=(ALL) NOPASSWD: ALL
# Generate an ssh key as the tidb user (the -C comment string is optional)
su - tidb
ssh-keygen -t rsa

Download TiDB-Ansible on the control machine

# Download the TiDB-Ansible release branch
cd /home/tidb && git clone -b release-2.0 https://github.com/pingcap/tidb-ansible.git
# Install Ansible and its dependencies
cd /home/tidb/tidb-ansible/ && pip install -r ./requirements.txt

On the control machine, configure ssh mutual trust and sudo rules for the target machines

# Edit hosts.ini
su - tidb
cd /home/tidb/tidb-ansible
vim hosts.ini
[servers]
172.16.5.50
172.16.5.51
172.16.5.52
[all:vars]
username = tidb
ntp_server = pool.ntp.org
# Set up ssh mutual trust
ansible-playbook -i hosts.ini create_users.yml -u root -k

Install the ntp service on the target machines

# Install ntp on the target hosts from the control machine
cd /home/tidb/tidb-ansible
ansible-playbook -i hosts.ini deploy_ntp.yml -u tidb -b

Adjust the cpufreq governor on the target machines

# Check the available cpupower governors (virtual machines may not support this)
cpupower frequency-info --governors
analyzing CPU 0:
available cpufreq governors: Not Available
# Set the cpufreq governor to performance
cpupower frequency-set --governor performance

Mount the data disk with an ext4 filesystem on the target machines

# Create the partition table and partition in one step
parted -s -a optimal /dev/nvme0n1 mklabel gpt -- mkpart primary ext4 1 -1
# Or partition manually
parted /dev/sdb
mklabel gpt
mkpart primary 0KB 210GB
# Format the data disk (if you created a partition above, format that
# partition, e.g. /dev/sdb1; here the whole device is formatted)
mkfs.ext4 /dev/sdb
# Look up the data disk's UUID
lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 xfs f41c3b1b-125f-407c-81fa-5197367feb39 /boot
├─sda2 xfs 8119193b-c774-467f-a057-98329c66b3b3 /
├─sda3
└─sda5 xfs 42356bb3-911a-4dc4-b56e-815bafd08db2 /home
sdb ext4 532697e9-970e-49d4-bdba-df386cac34d2
# On each of the three machines, edit /etc/fstab and add the nodelalloc mount option
vim /etc/fstab
UUID=8119193b-c774-467f-a057-98329c66b3b3 / xfs defaults 0 0
UUID=f41c3b1b-125f-407c-81fa-5197367feb39 /boot xfs defaults 0 0
UUID=42356bb3-911a-4dc4-b56e-815bafd08db2 /home xfs defaults 0 0
UUID=532697e9-970e-49d4-bdba-df386cac34d2 /data ext4 defaults,nodelalloc,noatime 0 2
# Mount the data disk and verify the options
mkdir /data
mount -a
mount -t ext4
/dev/sdb on /data type ext4 (rw,noatime,seclabel,nodelalloc,data=ordered)

Allocate machine resources and edit the inventory.ini file

# Single-instance TiKV layout
Name        HostIP       Services
tidb-tikv1  172.16.5.50  PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51  PD2, TiDB2, TiKV2
tidb-tikv3  172.16.5.52  PD3, TiKV3
# Edit the inventory.ini file
cd /home/tidb/tidb-ansible
vim inventory.ini
## TiDB Cluster Part
[tidb_servers]
172.16.5.50
172.16.5.51

[tikv_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[pd_servers]
172.16.5.50
172.16.5.51
172.16.5.52

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.16.5.50

# node_exporter and blackbox_exporter servers
[monitored_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[all:vars]
#deploy_dir = /home/tidb/deploy
deploy_dir = /data/deploy
# Verify ssh mutual trust
$ ansible -i inventory.ini all -m shell -a 'whoami'
172.16.5.51 | SUCCESS | rc=0 >>
tidb
172.16.5.52 | SUCCESS | rc=0 >>
tidb
172.16.5.50 | SUCCESS | rc=0 >>
tidb
# Verify passwordless sudo for the tidb user
$ ansible -i inventory.ini all -m shell -a 'whoami' -b
172.16.5.52 | SUCCESS | rc=0 >>
root
172.16.5.51 | SUCCESS | rc=0 >>
root
172.16.5.50 | SUCCESS | rc=0 >>
root
# Run the local_prepare.yml playbook to download the TiDB binaries to the control machine
ansible-playbook local_prepare.yml
# Initialize the system environment and tune kernel parameters
ansible-playbook bootstrap.yml

2.2 Installing the TiDB Cluster

ansible-playbook deploy.yml

2.3 Starting the TiDB Cluster

ansible-playbook start.yml

2.4 Testing the Cluster

# Connect with a MySQL client; TCP port 4000 is the TiDB service's default port
mysql -u root -h 172.16.5.50 -P 4000
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.00 sec)
# Access the monitoring dashboard in a browser
URL: http://172.16.5.51:3000  (default credentials: admin/admin)

3. Scaling Out the TiDB Cluster

3.1 Adding TiDB/TiKV Nodes

# Current single-instance TiKV layout
Name        HostIP       Services
tidb-tikv1  172.16.5.50  PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51  PD2, TiDB2, TiKV2
tidb-tikv3  172.16.5.52  PD3, TiKV3
# Add a TiDB node
Add one TiDB node (tidb-tikv4) with IP address 172.16.5.53.
# Edit the inventory.ini file
cd /home/tidb/tidb-ansible
vim inventory.ini
------------------start---------------------------
## TiDB Cluster Part
[tidb_servers]
172.16.5.50
172.16.5.51
172.16.5.53

[tikv_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[pd_servers]
172.16.5.50
172.16.5.51
172.16.5.52

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.16.5.50

# node_exporter and blackbox_exporter servers
[monitored_servers]
172.16.5.50
172.16.5.51
172.16.5.52
172.16.5.53
----------------------end-------------------
# Topology after the change
Name        HostIP       Services
tidb-tikv1  172.16.5.50  PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51  PD2, TiDB2, TiKV2
tidb-tikv3  172.16.5.52  PD3, TiKV3
tidb-tikv4  172.16.5.53  TiDB3
# Initialize the new node
ansible-playbook bootstrap.yml -l 172.16.5.53
# Deploy the new node
ansible-playbook deploy.yml -l 172.16.5.53
# Start services on the new node
ansible-playbook start.yml -l 172.16.5.53
# Update the Prometheus configuration and restart it
ansible-playbook rolling_update_monitor.yml --tags=prometheus

3.2 Adding PD Nodes

# Current topology (single-instance TiKV layout)
Name        HostIP       Services
tidb-tikv1  172.16.5.50  PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51  PD2, TiDB2, TiKV2
tidb-tikv3  172.16.5.52  PD3, TiKV3
# Add a PD node
Add one PD node (tidb-pd1) with IP address 172.16.5.54.
# Edit the inventory.ini file
cd /home/tidb/tidb-ansible
vim inventory.ini
## TiDB Cluster Part
[tidb_servers]
172.16.5.50
172.16.5.51

[tikv_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[pd_servers]
172.16.5.50
172.16.5.51
172.16.5.52
172.16.5.54

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.16.5.50

# node_exporter and blackbox_exporter servers
[monitored_servers]
172.16.5.50
172.16.5.51
172.16.5.52
172.16.5.54
# Topology after the change
Name        HostIP       Services
tidb-tikv1  172.16.5.50  PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51  PD2, TiDB2, TiKV2
tidb-tikv3  172.16.5.52  PD3, TiKV3
tidb-pd1    172.16.5.54  PD4
# Initialize the new node
ansible-playbook bootstrap.yml -l 172.16.5.54
# Deploy the new node
ansible-playbook deploy.yml -l 172.16.5.54
# Log in to the new PD node and edit its start script: {deploy_dir}/scripts/run_pd.sh
1. Remove the --initial-cluster="xxxx" \ line.
2. Add --join="http://172.16.5.50:2379" \ ; the IP address can be any of the cluster's existing PD members.
3. Start the PD service manually on the new PD node:
{deploy_dir}/scripts/start_pd.sh
4. Use pd-ctl to check that the new node joined successfully:
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d member
# Rolling-update the whole cluster
ansible-playbook rolling_update.yml
# Update the Prometheus configuration and restart it
ansible-playbook rolling_update_monitor.yml --tags=prometheus

4. TiDB Cluster Testing

4.1 sysbench Benchmarks

Installing sysbench

# Binary installation
curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | sudo bash
sudo yum -y install sysbench

Performance tests

# CPU benchmark
sysbench --test=cpu --cpu-max-prime=20000 run
----------------------------------start----------------------------------------
Number of threads: 1
Initializing random number generator from current time
Prime numbers limit: 20000
Initializing worker threads...
Threads started!
CPU speed:
events per second: 286.71
General statistics:
total time: 10.0004s
total number of events: 2868
Latency (ms):
min: 3.46
avg: 3.49
max: 4.49
95th percentile: 3.55
sum: 9997.23
Threads fairness:
events (avg/stddev): 2868.0000/0.00
execution time (avg/stddev): 9.9972/0.00
-----------------------------------end-------------------------------------------
# Threads benchmark
sysbench --test=threads --num-threads=64 --thread-yields=100 --thread-locks=2 run
------------------------------------start-----------------------------------------
Number of threads: 64
Initializing random number generator from current time
Initializing worker threads...
Threads started!
General statistics:
total time: 10.0048s
total number of events: 108883
Latency (ms):
min: 0.05
avg: 5.88
max: 49.15
95th percentile: 17.32
sum: 640073.32
Threads fairness:
events (avg/stddev): 1701.2969/36.36
execution time (avg/stddev): 10.0011/0.00
-----------------------------------end-----------------------------------------
# Disk I/O benchmark (prepare phase)
sysbench --test=fileio --num-threads=16 --file-total-size=3G --file-test-mode=rndrw prepare
----------------------------------start-----------------------------------------
128 files, 24576Kb each, 3072Mb total
Creating files for the test...
Extra file open flags: (none)
Creating file test_file.0
Creating file test_file.1
... (test_file.2 through test_file.126 created the same way) ...
Creating file test_file.127
3221225472 bytes written in 339.76 seconds (9.04 MiB/sec)
----------------------------------end------------------------------------------
sysbench --test=fileio --num-threads=16 --file-total-size=3G --file-test-mode=rndrw run
----------------------------------start-----------------------------------------
Number of threads: 16
Initializing random number generator from current time
Extra file open flags: (none)
128 files, 24MiB each
3GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...
Threads started!
File operations:
reads/s: 299.19
writes/s: 199.46
fsyncs/s: 816.03
Throughput:
read, MiB/s: 4.67
written, MiB/s: 3.12
General statistics:
total time: 10.8270s
total number of events: 12189
Latency (ms):
min: 0.00
avg: 13.14
max: 340.58
95th percentile: 92.42
sum: 160186.15
Threads fairness:
events (avg/stddev): 761.8125/216.01
execution time (avg/stddev): 10.0116/0.01
--------------------------------------end---------------------------------------
sysbench --test=fileio --num-threads=16 --file-total-size=3G --file-test-mode=rndrw cleanup
# Memory benchmark
sysbench --test=memory --memory-block-size=8k --memory-total-size=4G run
------------------------------------start-----------------------------------------
Number of threads: 1
Initializing random number generator from current time
Running memory speed test with the following options:
block size: 8KiB
total size: 4096MiB
operation: write
scope: global
Initializing worker threads...
Threads started!
Total operations: 524288 (1111310.93 per second)
4096.00 MiB transferred (8682.12 MiB/sec)
General statistics:
total time: 0.4692s
total number of events: 524288
Latency (ms):
min: 0.00
avg: 0.00
max: 0.03
95th percentile: 0.00
sum: 381.39

Threads fairness:
events (avg/stddev): 524288.0000/0.00
execution time (avg/stddev): 0.3814/0.00
-------------------------------------end---------------------------------------

4.2 OLTP Tests

# Log in to TiDB and create the test database
mysql -u root -P 4000 -h 172.16.5.50
create database sbtest;
# Prepare the test data
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=20 --table_size=20000000 --threads=100 --max-requests=0 prepare
--tables=20             # create 20 tables
--table_size=20000000   # 20 million rows per table
--threads=100           # use 100 threads
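For scale, these parameters create 20 × 20,000,000 = 400 million rows. A rough sizing sketch, assuming about 200 bytes per row for sysbench's oltp schema (an assumption) and TiKV's default 3 Raft replicas:

```python
# Rough sizing for the prepare step above (row size is an assumption).
tables, rows_per_table = 20, 20_000_000
bytes_per_row = 200      # assumed approximate sysbench oltp row size
replicas = 3             # TiKV keeps 3 Raft replicas by default

total_rows = tables * rows_per_table
raw_gib = total_rows * bytes_per_row / 1024**3
print(f"{total_rows} rows, ~{raw_gib:.0f} GiB raw, ~{raw_gib * replicas:.0f} GiB with replication")
```

That volume, pushed with 100 threads, explains the load-induced errors reported below.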
---------------------------------errors reported------------------------------------------
FATAL: mysql_drv_query() returned error 9001 (PD server timeout[try again later]
2018/11/23 11:23:19.236 log.go:82: [warning] etcdserver: [timed out waiting for read index response]
2018/11/23 14:15:17.329 heartbeat_streams.go:97: [error] [store 1] send keepalive message fail: EOF
2018/11/23 14:14:04.603 leader.go:312: [info] leader is deleted
2018/11/23 14:14:04.603 leader.go:103: [info] pd2 is not etcd leader, skip campaign leader and check later
2018/11/23 14:21:10.071 coordinator.go:570: [info] [region 1093] send schedule command: transfer leader from store 7 to store 2
FATAL: mysql_drv_query() returned error 1105 (Information schema is out of date)
------------------------------------end-----------------------------------------------
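The 9001 ("PD server timeout [try again later]") and 1105 errors above are transient, load-induced timeouts, so besides reducing concurrency, a client-side retry with exponential backoff is a common mitigation. A generic sketch (hypothetical helper, not part of sysbench or TiDB):

```python
import time

# Retry a transient-failure-prone operation with exponential backoff.
def retry(op, attempts=5, base_delay=0.01, transient=(TimeoutError,)):
    for i in range(attempts):
        try:
            return op()
        except transient:
            if i == attempts - 1:
                raise                     # out of attempts: re-raise
            time.sleep(base_delay * 2 ** i)  # back off: 0.01s, 0.02s, 0.04s, ...

# Demo: an operation that fails twice, then succeeds.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise TimeoutError("PD server timeout [try again later]")
    return "ok"

assert retry(flaky) == "ok" and len(calls) == 3
```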
# Retry with 10 threads, 10 tables, and 2,000,000 rows per table
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=10 --table_size=2000000 --threads=10 --max-requests=0 prepare
--------------------------------------start--------------------------------------------
FATAL: mysql_drv_query() returned error 1105 (Information schema is out of date)  # timeout error
Only 2 tables were fully written; the remaining 8 tables were not filled, though their indexes were built.
# Read/write test against the TiDB cluster
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=1 --table_size=2000000 --threads=10 --max-requests=0 run
----------------------------------------start--------------------------------------
Number of threads: 10
Initializing random number generator from current time
Initializing worker threads...
Threads started!
SQL statistics:
queries performed:
read: 868
write: 62
other: 310
total: 1240
transactions: 62 (5.60 per sec.)
queries: 1240 (112.10 per sec.)
ignored errors: 0 (0.00 per sec.)
reconnects: 0 (0.00 per sec.)
General statistics:
total time: 11.0594s
total number of events: 62
Latency (ms):
min: 944.55
avg: 1757.78
max: 2535.05
95th percentile: 2320.55
sum: 108982.56
Threads fairness:
events (avg/stddev): 6.2000/0.40
execution time (avg/stddev): 10.8983/0.31
------------------------------------end----------------------------------------
# Comparison test against standalone MySQL
mysql -uroot -P 3306 -h 172.16.5.154
create database sbtest;
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.154 --mysql-port=3306 --mysql-user=root --mysql-password=root --tables=20 --table_size=20000000 --threads=10 --max-requests=0 prepare
No errors were observed when running the same test against MySQL.

4.3 Business Data Test

sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=20 --table_size=2000000 --threads=10 --max-requests=0 run