030: Cetus Middleware and MHA Read/Write Splitting
I. Host Environment
- VM configuration

CPU | RAM | Disk | OS version | MySQL version | MHA version | Cetus version |
---|---|---|---|---|---|---|
2-core | 4G | 500G | CentOS 7.5.1804 | 5.7.18 | 0.57 | v1.0.0-44 |

- Host information

Hostname | IP address | Server_ID | MHA Manager | MHA Node | Cetus | Notes |
---|---|---|---|---|---|---|
node05 | 192.168.222.175 | - | deployed | deployed | deployed | monitoring / failover of the MySQL master |
node02 | 192.168.222.172 | 172 | - | deployed | - | |
node03 | 192.168.222.173 | 173 | - | deployed | - | |
node04 | 192.168.222.174 | 174 | - | deployed | - | |
II. Set Up Replication
1. Configure replication
- Build a GTID-based master/slave replication environment
- If slave-delay detection is enabled, create the database proxy_heart_beat and the table tb_heartbeat
- Create the user and password (by default the user has read/write access to tb_heartbeat)
- Make sure Cetus can log in to MySQL remotely

0. Create the replication account on the master:
   grant RELOAD, REPLICATION SLAVE, REPLICATION CLIENT on *.* to repl@'%' identified by 'xxxxxx';
   flush privileges;
1. mysqldump the data from the master, adding --master-data=2
2. Import the dump into the slaves
3. change master to master_host='192.168.222.172', master_port=3306, master_user='repl', master_password='repl', master_auto_position=1;
4. start slave;
5. Create the Cetus test account:
   grant all on *.* to gcdb@'%' identified by 'xxxxxx';
   flush privileges;
6. On the slaves: set global read_only=1;
7. Create the delay-check database proxy_heart_beat and table tb_heartbeat:
   CREATE DATABASE proxy_heart_beat;
   USE proxy_heart_beat;
   CREATE TABLE tb_heartbeat (
       p_id varchar(128) NOT NULL,
       p_ts timestamp(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3),
       PRIMARY KEY (p_id)
   ) ENGINE = InnoDB DEFAULT CHARSET = utf8;
8. Create the test database ttt:
   create database if not exists ttt;
   create table ttt.t1(id int(4) primary key not null auto_increment, nums int(20) not null);
   insert into ttt.t1(nums) values(1),(2),(3),(4),(5);
   update ttt.t1 set nums=100 where id=3;
   delete from ttt.t1 where id=4;
   select * from ttt.t1;
9. Install the sendmail service: yum install sendmail -y
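Once the slaves are running, replication health can be spot-checked from the shell. A minimal sketch, assuming the mysql client and accounts created above; the `slave_ok` helper name is ours:

```shell
# Parse `SHOW SLAVE STATUS\G` output on stdin and report whether both
# replication threads are running.
slave_ok() {
  awk -F': *' '
    /Slave_IO_Running:/  { io  = $2 }
    /Slave_SQL_Running:/ { sql = $2 }
    END {
      if (io == "Yes" && sql == "Yes") print "replication OK"
      else                             print "replication BROKEN"
    }'
}

# Hypothetical invocation on a slave:
#   mysql -ugcdb -pxxxxxx -e 'SHOW SLAVE STATUS\G' | slave_ok
```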
2. Configure hosts
- Add hosts entries

cat <<EOF >/etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.222.245 node01.mysql.com node01
192.168.222.246 mycat01.mysql.com mycat01
192.168.222.247 node02.mysql.com node02
192.168.222.248 node03.mysql.com node03
192.168.222.249 node04.mysql.com node04
192.168.222.59  node05.mysql.com node05
192.168.222.251 redis01.mysql.com redis01
192.168.222.252 redis02.mysql.com redis02
EOF
3. Configure passwordless SSH
- On the MHA manager node:
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub "[email protected]"
ssh-copy-id -i /root/.ssh/id_rsa.pub "[email protected]"
ssh-copy-id -i /root/.ssh/id_rsa.pub "[email protected]"
- Each MySQL node generates its own key pair and copies its public key to the other MySQL nodes
# 192.168.222.172
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub "[email protected]"
ssh-copy-id -i /root/.ssh/id_rsa.pub "[email protected]"
ssh-copy-id -i /root/.ssh/id_rsa.pub "[email protected]"
# 192.168.222.173
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub "[email protected]"
ssh-copy-id -i /root/.ssh/id_rsa.pub "[email protected]"
ssh-copy-id -i /root/.ssh/id_rsa.pub "[email protected]"
# 192.168.222.174
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub "[email protected]"
ssh-copy-id -i /root/.ssh/id_rsa.pub "[email protected]"
ssh-copy-id -i /root/.ssh/id_rsa.pub "[email protected]"
- Verify that SSH login works without a password
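The check can be scripted. A sketch under the assumption that all nodes use root and the IPs above; `BatchMode=yes` makes ssh fail instead of prompting when key authentication is missing:

```shell
# Print one status line per host given as an argument.
check_ssh() {
  for h in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=3 "root@$h" true 2>/dev/null; then
      echo "$h ok"
    else
      echo "$h FAILED"
    fi
  done
}

# Hypothetical usage on the manager node:
#   check_ssh 192.168.222.172 192.168.222.173 192.168.222.174
```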
III. Install MHA and Cetus
1. Download packages and install dependencies
mha4mysql-manager-0.57.tar.gz and mha4mysql-node-0.57.tar.gz
Download link
Download the Cetus source:
git clone https://github.com/Lede-Inc/cetus.git
# Install Cetus dependencies
yum -y install cmake gcc glib2-devel flex libevent-devel mysql-devel gperftools-libs
# Install MHA dependencies
yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-CPAN perl-ExtUtils-Manifest
2. Install MHA Node
tar zxvf mha4mysql-node-0.57.tar.gz
cd mha4mysql-node-0.57
perl Makefile.PL
make && make install
-- After the Node package is installed, these files are generated under /usr/local/bin/
#save_binary_logs : saves and copies the master's binary logs
#apply_diff_relay_logs : identifies differential relay-log events and applies them to the other slaves
#filter_mysqlbinlog : strips unnecessary ROLLBACK events (MHA no longer uses this tool)
#purge_relay_logs : purges relay logs (without blocking the SQL thread)
3. Install MHA Manager
- Install the Node package on all MySQL servers; install the Manager package on the MHA management node.
tar zvxf mha4mysql-manager-0.57.tar.gz
cd mha4mysql-manager-0.57
perl Makefile.PL
make && make install
-- After the Manager package is installed, these files are generated under /usr/local/bin/
#masterha_check_ssh : checks the MHA SSH configuration
#masterha_check_repl : checks MySQL replication
#masterha_manager : starts MHA
#masterha_check_status : checks the current MHA running status
#masterha_master_monitor : monitors whether the master is down
#masterha_master_switch : controls failover (automatic or manual)
#masterha_conf_host : adds or removes configured server entries
cp -r /software/mha4mysql-manager-0.57/samples/scripts/* /usr/local/bin/
-- Copy the scripts under /software/mha4mysql-manager-0.57/samples/scripts/ to /usr/local/bin/
#master_ip_failover : manages the VIP during automatic failover. Not mandatory; if keepalived is used you can write your own script to manage the VIP, e.g. monitor MySQL and stop keepalived when MySQL fails so the VIP floats away automatically
#master_ip_online_change : manages the VIP during online switchover. Not mandatory; a simple shell script will do
#power_manager : shuts down the host after a failure. Not mandatory
#send_report : sends an alert after a failover. Not mandatory; a simple shell script will do
4. Install Cetus
1. Installation notes
Cetus is built and installed with CMake; the build files (CMakeLists.txt) are already present in the main directory and subdirectories of the source tree. After downloading and unpacking the source, the steps are as follows:
- Create a build directory: create a separate directory build under the source root and change into it
mkdir build/
cd build/
- Build: compile with cmake, as follows
Read/write-splitting version:
cmake ../ -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/home/user/cetus_install -DSIMPLE_PARSER=ON
Sharding version:
cmake ../ -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/home/user/cetus_install -DSIMPLE_PARSER=OFF
CMAKE_BUILD_TYPE selects whether a debug or release build is generated; CMAKE_INSTALL_PREFIX is the absolute path of the actual install directory (a name of the form /home/user/date.build_type.branch.commit_id is recommended); SIMPLE_PARSER selects the build flavor: ON builds the read/write-splitting version, OFF builds the sharding version.
This step checks whether your system is missing any required libraries or tools; install the missing dependencies according to the error messages.
- Install: run make install
make install
- Configure: edit the configuration files before running Cetus
cd /home/user/cetus_install/conf/
cp XXX.json.example XXX.json
cp XXX.conf.example XXX.conf
vi XXX.json
vi XXX.conf
After make install, example configuration files ending in .example are placed under /home/user/cetus_install/conf/: the user settings file (users.json), the variable-handling file (variables.json), the sharding-rule file for the sharding version (sharding.json), the startup file for the read/write-splitting version (proxy.conf), and the startup file for the sharding version (shard.conf).
Edit the files for the version you built: the read/write-splitting version needs users.json and proxy.conf; the sharding version needs users.json, sharding.json and shard.conf. variables.json is optional in both versions.
For details, see the Cetus read/write-splitting configuration guide and the Cetus sharding configuration guide.
- Start: Cetus is started with bin/cetus
Read/write-splitting version:
bin/cetus --defaults-file=conf/proxy.conf [--conf-dir=/home/user/cetus_install/conf/]
Sharding version:
bin/cetus --defaults-file=conf/shard.conf [--conf-dir=/home/user/cetus_install/conf/]
Cetus accepts command-line options at startup: --defaults-file loads the startup configuration file (proxy.conf or shard.conf); make sure its permissions are 660 before starting. --conf-dir is optional and loads the remaining (.json) configuration files, defaulting to the conf folder under the current directory.
Cetus can run as a background daemon and can automatically start a new process when one terminates unexpectedly; both behaviors are enabled via startup options.
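Since Cetus rejects a startup file whose mode is not 660, a pre-flight check can save a failed start. A small sketch (the `conf_perm_ok` helper name is ours; `stat -c` is GNU coreutils):

```shell
# Return success only when the file exists and its mode is exactly 660.
conf_perm_ok() {
  [ "$(stat -c '%a' "$1" 2>/dev/null)" = "660" ]
}

# Hypothetical usage before starting Cetus:
#   conf_perm_ok /home/user/cetus_install/conf/proxy.conf \
#     || chmod 660 /home/user/cetus_install/conf/proxy.conf
```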
2. Installation steps
# enter the source directory
[root@node05 software]# git clone https://github.com/Lede-Inc/cetus.git
[root@node05 software]# cd cetus/
[root@node05 cetus]# mkdir build/ && cd build
[root@node05 build]# cmake ../ -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/usr/local/cetus -DSIMPLE_PARSER=ON
[root@node05 build]# make install
[root@node05 build]# ll /usr/local/cetus/
total 0
drwxr-xr-x 2 root root 19 Aug 31 09:29 bin
drwxr-xr-x 2 root root 143 Aug 31 09:29 conf
drwxr-xr-x 4 root root 210 Aug 31 09:29 lib
drwxr-xr-x 2 root root 19 Aug 31 09:29 libexec
drwxr-xr-x 2 root root 23 Aug 31 09:29 logs
[root@node05 build]# ll /usr/local/share/perl5/MHA/
total 388
-r--r--r-- 1 root root 12997 May 31 2015 BinlogHeaderParser.pm
-r--r--r-- 1 root root 18783 May 31 2015 BinlogManager.pm
-r--r--r-- 1 root root 2251 May 31 2015 BinlogPosFinderElp.pm
-r--r--r-- 1 root root 1648 May 31 2015 BinlogPosFinder.pm
-r--r--r-- 1 root root 3130 May 31 2015 BinlogPosFinderXid.pm
-r--r--r-- 1 root root 5684 May 31 2015 BinlogPosFindManager.pm
-r--r--r-- 1 root root 17480 May 31 2015 Config.pm
-r--r--r-- 1 root root 27019 May 31 2015 DBHelper.pm
-r--r--r-- 1 root root 3075 May 31 2015 FileStatus.pm
-r--r--r-- 1 root root 20370 May 31 2015 HealthCheck.pm
-r--r--r-- 1 root root 10560 May 31 2015 ManagerAdmin.pm
-r--r--r-- 1 root root 3679 May 31 2015 ManagerAdminWrapper.pm
-r--r--r-- 1 root root 3508 May 31 2015 ManagerConst.pm
-r--r--r-- 1 root root 4612 May 31 2015 ManagerUtil.pm
-r--r--r-- 1 root root 74140 May 31 2015 MasterFailover.pm
-r--r--r-- 1 root root 23654 May 31 2015 MasterMonitor.pm
-r--r--r-- 1 root root 23466 May 31 2015 MasterRotate.pm
-r--r--r-- 1 root root 1308 May 31 2015 NodeConst.pm
-r--r--r-- 1 root root 6689 May 31 2015 NodeUtil.pm
-r--r--r-- 1 root root 44325 May 31 2015 ServerManager.pm
-r--r--r-- 1 root root 33213 May 31 2015 Server.pm
-r--r--r-- 1 root root 6531 May 31 2015 SlaveUtil.pm
-r--r--r-- 1 root root 4874 May 31 2015 SSHCheck.pm
[root@node05 build]#
5. Replace some MHA files with the Cetus versions
# Replace all same-named files under /usr/local/share/perl5/MHA/ with the files from mha_ld/src
[root@node05 build]# cd ../mha_ld/src/
[root@node05 src]# ls
BinlogHeaderParser.pm BinlogPosFindManager.pm HealthCheck.pm MasterFailover.pm NodeUtil.pm ServerManager.pm
BinlogManager.pm cetus.cnf ManagerAdmin.pm masterha_secondary_check ProxyManager.pm Server.pm
BinlogPosFinderElp.pm Config.pm ManagerAdminWrapper.pm MasterMonitor.pm sample.cnf SlaveUtil.pm
BinlogPosFinder.pm DBHelper.pm ManagerConst.pm MasterRotate.pm sendMail.sh SSHCheck.pm
BinlogPosFinderXid.pm FileStatus.pm ManagerUtil.pm NodeConst.pm sendmail.txt
[root@node05 src]# ls /usr/local/share/perl5/MHA/
BinlogHeaderParser.pm BinlogPosFinderXid.pm FileStatus.pm ManagerConst.pm MasterRotate.pm Server.pm
BinlogManager.pm BinlogPosFindManager.pm HealthCheck.pm ManagerUtil.pm NodeConst.pm SlaveUtil.pm
BinlogPosFinderElp.pm Config.pm ManagerAdmin.pm MasterFailover.pm NodeUtil.pm SSHCheck.pm
BinlogPosFinder.pm DBHelper.pm ManagerAdminWrapper.pm MasterMonitor.pm ServerManager.pm
[root@node05 src]# rsync /software/cetus/mha_ld/src/* /usr/local/share/perl5/MHA/
[root@node05 src]# ls /usr/local/share/perl5/MHA/
BinlogHeaderParser.pm BinlogPosFindManager.pm HealthCheck.pm MasterFailover.pm NodeUtil.pm ServerManager.pm
BinlogManager.pm cetus.cnf ManagerAdmin.pm masterha_secondary_check ProxyManager.pm Server.pm
BinlogPosFinderElp.pm Config.pm ManagerAdminWrapper.pm MasterMonitor.pm sample.cnf SlaveUtil.pm
BinlogPosFinder.pm DBHelper.pm ManagerConst.pm MasterRotate.pm sendMail.sh SSHCheck.pm
BinlogPosFinderXid.pm FileStatus.pm ManagerUtil.pm NodeConst.pm sendmail.txt
# Replace the masterha_secondary_check command with mha_ld/masterha_secondary_check (locate it with `which masterha_secondary_check`)
[root@node05 src]# which masterha_secondary_check
/usr/local/bin/masterha_secondary_check
[root@node05 src]# rsync /software/cetus/mha_ld/src/masterha_secondary_check /usr/local/bin/
[root@node05 src]# chmod +x /usr/local/bin/masterha_secondary_check
[root@node05 src]# ll /usr/local/bin/masterha_secondary_check
-r-xr-xr-x 1 root root 5186 Aug 31 11:48 /usr/local/bin/masterha_secondary_check
IV. MHA and Cetus Configuration
1. Create and modify the configuration files
- On the manager node, create the Cetus configuration file (cetus.cnf)
[root@node05 src]# cp /software/cetus/mha_ld/cetus.cnf /etc/cetus.cnf
[root@node05 src]# vim /etc/cetus.cnf
[root@node05 src]# cat /etc/cetus.cnf
middle_ipport=192.168.222.175:23306
middle_user=admin
middle_pass=admin
- On the manager node, create the Cetus user and proxy configuration files (users.json and proxy.conf)
[root@node05 src]# cp /usr/local/cetus/conf/proxy.conf.example /usr/local/cetus/conf/proxy.conf
[root@node05 src]# cp /usr/local/cetus/conf/users.json.example /usr/local/cetus/conf/users.json
[root@node05 src]# vim /usr/local/cetus/conf/users.json
[root@node05 src]# cat /usr/local/cetus/conf/users.json
{
"users": [{
"user": "gcdb",
"client_pwd": "iforgot",
"server_pwd": "iforgot"
}, {
"user": "cetus_app1",
"client_pwd": "cetus_app1",
"server_pwd": "cetus_app1"
}]
}
[root@node05 src]# vim /usr/local/cetus/conf/proxy.conf
[root@node05 src]# cat /usr/local/cetus/conf/proxy.conf
[cetus]
# For mode-switch
daemon = true
# Loaded Plugins
plugins=proxy,admin
# Proxy Configuration, For example: MySQL master and slave host ip are both 192.0.0.1
proxy-address=192.168.222.175:13306
proxy-backend-addresses=192.168.222.172:3306
proxy-read-only-backend-addresses=192.168.222.173:3306,192.168.222.174:3306
# Admin Configuration
admin-address=192.168.222.175:23306
admin-username=admin
admin-password=admin
# Backend Configuration, use test db and username created
default-db=ttt
default-username=gcdb
default-pool-size=100
max-resp-size=10485760
long-query-time=100
# File and Log Configuration; logs go in /data, named by proxy port. /data/cetus must be created manually with read/write permission for the cetus OS user
max-open-files = 65536
pid-file = cetus6001.pid
plugin-dir=lib/cetus/plugins
log-file=/var/log/cetus.log
log-level=debug
# Check slave delay
disable-threads=false
check-slave-delay=true
slave-delay-down=5
slave-delay-recover=1
# For trouble
keepalive=true
verbose-shutdown=true
log-backtrace-on-crash=true
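Before starting, it is easy to verify that the options Cetus needs are present in proxy.conf. A sketch; the key list shown in the usage comment is an assumption based on the file above, not an authoritative list:

```shell
# Print any key from the list that is missing from the given config file.
check_conf_keys() {
  conf=$1; shift
  for k in "$@"; do
    grep -q "^$k" "$conf" || echo "missing: $k"
  done
}

# Hypothetical usage:
#   check_conf_keys /usr/local/cetus/conf/proxy.conf \
#     proxy-address proxy-backend-addresses admin-address
```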
- On the manager node, create the global configuration file (masterha_default.cnf)
[root@node05 software]# cat /etc/masterha_default.cnf
[server default]
# full path of the directory where the MHA manager writes its state files
manager_workdir=/var/log/masterha
# absolute path of the MHA manager log file
manager_log=/var/log/masterha/mha.log
# absolute path of the Cetus proxy file
proxy_conf=/etc/cetus.cnf
# MySQL account and password used to log in to the databases
user=gcdb
password=iforgot
# SSH user
ssh_user=root
ssh_port=22
# location of the binlogs on the MySQL master node; required because when the master dies MHA reads the binlog events sequentially over SSH, and the binlog position can no longer be obtained through replication
master_binlog_dir=/r2/mysqldata
#master_binlog_dir= /r2/mysqldata
# used to check connectivity between the nodes; see the MHA parameters documentation for details
secondary_check_script= masterha_secondary_check -s 192.168.222.172 -s 192.168.222.173 -s 192.168.222.174
ping_interval=3
ping_type=select
remote_workdir=/tmp
# script that performs the VIP failover; the shutdown and report scripts follow
master_ip_failover_script=/usr/local/bin/master_ip_failover
#shutdown_script=/usr/local/bin/power_manager
shutdown_script=
report_script="/usr/local/share/perl5/MHA/sendMail.sh"
# MySQL replication account
repl_user=repl
repl_password=repl
- On the manager node, create the application configuration file (mha.cnf)
[root@node05 software]# cat /etc/mha.cnf
[server default]
manager_log=/var/log/masterha/mha.log
manager_workdir=/var/log/masterha
[server1]
hostname=192.168.222.172
master_binlog_dir=/r2/mysqldata
port=3306
[server2]
hostname=192.168.222.173
master_binlog_dir=/r2/mysqldata
port=3306
[server3]
hostname=192.168.222.174
master_binlog_dir=/r2/mysqldata
port=3306
- On the manager node, modify the failover script (master_ip_failover)
[root@node05 mha4mysql-manager-0.57]# vim /usr/local/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
use MHA::DBHelper;
my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
# VIP switchover settings
my $vip = '192.168.222.99/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens224:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens224:$key down";
$ssh_user = "root";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
exit 0;
}
else {
&usage();
exit 1;
}
}
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
sub stop_vip() {
return 0 unless ($ssh_user);
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
2. Check the SSH / replication / MHA Manager configuration
- Check SSH: masterha_check_ssh --conf=/etc/mha.cnf
[root@node05 src]# masterha_check_ssh --conf=/etc/mha.cnf
Fri Aug 31 13:55:51 2018 - [info] Reading default configuration from /etc/masterha_default.cnf..
Fri Aug 31 13:55:51 2018 - [info] Reading application default configuration from /etc/mha.cnf..
Fri Aug 31 13:55:51 2018 - [info] Reading server configuration from /etc/mha.cnf..
Fri Aug 31 13:55:51 2018 - [info] Starting SSH connection tests..
Fri Aug 31 13:55:52 2018 - [debug]
Fri Aug 31 13:55:51 2018 - [debug] Connecting via SSH from [email protected](192.168.222.172:22) to [email protected](192.168.222.173:22)..
Fri Aug 31 13:55:51 2018 - [debug] ok.
Fri Aug 31 13:55:51 2018 - [debug] Connecting via SSH from [email protected](192.168.222.172:22) to [email protected](192.168.222.174:22)..
Fri Aug 31 13:55:51 2018 - [debug] ok.
Fri Aug 31 13:55:52 2018 - [debug]
Fri Aug 31 13:55:51 2018 - [debug] Connecting via SSH from [email protected](192.168.222.173:22) to [email protected](192.168.222.172:22)..
Fri Aug 31 13:55:51 2018 - [debug] ok.
Fri Aug 31 13:55:51 2018 - [debug] Connecting via SSH from [email protected](192.168.222.173:22) to [email protected](192.168.222.174:22)..
Fri Aug 31 13:55:52 2018 - [debug] ok.
Fri Aug 31 13:55:53 2018 - [debug]
Fri Aug 31 13:55:52 2018 - [debug] Connecting via SSH from [email protected](192.168.222.174:22) to [email protected](192.168.222.172:22)..
Fri Aug 31 13:55:52 2018 - [debug] ok.
Fri Aug 31 13:55:52 2018 - [debug] Connecting via SSH from [email protected](192.168.222.174:22) to [email protected](192.168.222.173:22)..
Fri Aug 31 13:55:52 2018 - [debug] ok.
Fri Aug 31 13:55:53 2018 - [info] All SSH connection tests passed successfully.
[root@node05 src]#
- Check replication: masterha_check_repl --conf=/etc/mha.cnf
[root@node05 src]# masterha_check_repl --conf=/etc/mha.cnf
Fri Aug 31 13:59:17 2018 - [info] Reading default configuration from /etc/masterha_default.cnf..
Fri Aug 31 13:59:17 2018 - [info] Reading application default configuration from /etc/mha.cnf..
Fri Aug 31 13:59:17 2018 - [info] Reading server configuration from /etc/mha.cnf..
Fri Aug 31 13:59:17 2018 - [info] MHA::MasterMonitor version 0.56.
Fri Aug 31 13:59:17 2018 - [info] g_workdir: /var/log/masterha
Fri Aug 31 13:59:17 2018 - [info] proxy_conf: /etc/cetus.cnf
Fri Aug 31 13:59:17 2018 - [info] -------------ManagerUtil::check_node_version-----------
Fri Aug 31 13:59:18 2018 - [info] GTID failover mode = 1
Fri Aug 31 13:59:18 2018 - [info] Dead Servers:
Fri Aug 31 13:59:18 2018 - [info] Alive Servers:
Fri Aug 31 13:59:18 2018 - [info] 192.168.222.172(192.168.222.172:3306)
Fri Aug 31 13:59:18 2018 - [info] 192.168.222.173(192.168.222.173:3306)
Fri Aug 31 13:59:18 2018 - [info] 192.168.222.174(192.168.222.174:3306)
Fri Aug 31 13:59:18 2018 - [info] Alive Slaves:
Fri Aug 31 13:59:18 2018 - [info] 192.168.222.173(192.168.222.173:3306) Version=5.7.18-log (oldest major version between slaves) log-bin:enabled
Fri Aug 31 13:59:18 2018 - [info] GTID ON
Fri Aug 31 13:59:18 2018 - [info] Replicating from 192.168.222.172(192.168.222.172:3306)
Fri Aug 31 13:59:18 2018 - [info] 192.168.222.174(192.168.222.174:3306) Version=5.7.18-log (oldest major version between slaves) log-bin:enabled
Fri Aug 31 13:59:18 2018 - [info] GTID ON
Fri Aug 31 13:59:18 2018 - [info] Replicating from 192.168.222.172(192.168.222.172:3306)
Fri Aug 31 13:59:18 2018 - [info] Current Alive Master: 192.168.222.172(192.168.222.172:3306)
Fri Aug 31 13:59:18 2018 - [info] Checking slave configurations..
Fri Aug 31 13:59:18 2018 - [info] Checking replication filtering settings..
Fri Aug 31 13:59:18 2018 - [info] binlog_do_db= , binlog_ignore_db=
Fri Aug 31 13:59:18 2018 - [info] Replication filtering check ok.
Fri Aug 31 13:59:18 2018 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Fri Aug 31 13:59:18 2018 - [info] Checking SSH publickey authentication settings on the current master..
Fri Aug 31 13:59:18 2018 - [info] HealthCheck: SSH to 192.168.222.172 is reachable.
Fri Aug 31 13:59:18 2018 - [info]
192.168.222.172(192.168.222.172:3306) (current master)
+--192.168.222.173(192.168.222.173:3306)
+--192.168.222.174(192.168.222.174:3306)
Fri Aug 31 13:59:18 2018 - [info] Checking replication health on 192.168.222.173..
Fri Aug 31 13:59:18 2018 - [info] ok.
Fri Aug 31 13:59:18 2018 - [info] Checking replication health on 192.168.222.174..
Fri Aug 31 13:59:18 2018 - [info] ok.
Fri Aug 31 13:59:18 2018 - [info] Checking master_ip_failover_script status:
Fri Aug 31 13:59:18 2018 - [info] /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.222.172 --orig_master_ip=192.168.222.172 --orig_master_port=3306
IN SCRIPT TEST====/sbin/ifconfig ens224:1 down==/sbin/ifconfig ens224:1 192.168.222.99/24===
Checking the Status of the script.. OK
Fri Aug 31 13:59:18 2018 - [info] OK.
Fri Aug 31 13:59:18 2018 - [warning] shutdown_script is not defined.
Fri Aug 31 13:59:18 2018 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
[root@node05 src]#
- Check MHA status: masterha_check_status --conf=/etc/mha.cnf
[root@node05 src]# masterha_check_status --conf=/etc/mha.cnf
mha is stopped(2:NOT_RUNNING).
- Check where the mysql client used by the masterha_secondary_check script is installed
[root@node05 software]# which mysql
/usr/local/mysql/bin/mysql
[root@node05 software]# grep -w /usr/bin/mysql /usr/local/bin/*
/usr/local/bin/masterha_secondary_check: . "/usr/bin/mysql -u$master_user -p$master_password -h$master_host -P$master_port "
[root@node05 software]# ln -s /usr/local/mysql/bin/mysql /usr/bin/mysql
[root@node05 software]# which mysql
/usr/bin/mysql
[root@node05 software]#
3. Start the MHA manager
- session A
[root@node05 src]# mkdir -p /var/log/masterha/
- session B
[root@node05 mha4mysql-manager-0.57]# nohup masterha_manager --conf=/etc/mha.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/mha.log 2>&1 &
[1] 19246
- session A
[root@node05 ~]# tail -f /var/log/masterha/mha.log
IN SCRIPT TEST====/sbin/ifconfig ens224:1 down==/sbin/ifconfig ens224:1 192.168.222.99/24===
Checking the Status of the script.. OK
Fri Aug 31 14:06:45 2018 - [info] OK.
Fri Aug 31 14:06:45 2018 - [warning] shutdown_script is not defined.
Fri Aug 31 14:06:45 2018 - [info] Set master ping interval 3 seconds.
Fri Aug 31 14:06:45 2018 - [info] Set secondary check script: masterha_secondary_check -s 192.168.222.172 -s 192.168.222.173 -s 192.168.222.174
Fri Aug 31 14:06:45 2018 - [info] Starting ping health check on 192.168.222.172(192.168.222.172:3306)..
Fri Aug 31 14:06:45 2018 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
[root@node05 src]# cat /var/log/masterha/mha.log
Fri Aug 31 14:06:44 2018 - [info] Reading default configuration from /etc/masterha_default.cnf..
Fri Aug 31 14:06:44 2018 - [info] Reading application default configuration from /etc/mha.cnf..
Fri Aug 31 14:06:44 2018 - [info] Reading server configuration from /etc/mha.cnf..
Fri Aug 31 14:06:44 2018 - [info] MHA::MasterMonitor version 0.56.
Fri Aug 31 14:06:44 2018 - [info] g_workdir: /var/log/masterha
Fri Aug 31 14:06:44 2018 - [info] proxy_conf: /etc/cetus.cnf
Fri Aug 31 14:06:44 2018 - [info] -------------ManagerUtil::check_node_version-----------
Fri Aug 31 14:06:45 2018 - [info] GTID failover mode = 1
Fri Aug 31 14:06:45 2018 - [info] Dead Servers:
Fri Aug 31 14:06:45 2018 - [info] Alive Servers:
Fri Aug 31 14:06:45 2018 - [info] 192.168.222.172(192.168.222.172:3306)
Fri Aug 31 14:06:45 2018 - [info] 192.168.222.173(192.168.222.173:3306)
Fri Aug 31 14:06:45 2018 - [info] 192.168.222.174(192.168.222.174:3306)
Fri Aug 31 14:06:45 2018 - [info] Alive Slaves:
Fri Aug 31 14:06:45 2018 - [info] 192.168.222.173(192.168.222.173:3306) Version=5.7.18-log (oldest major version between slaves) log-bin:enabled
Fri Aug 31 14:06:45 2018 - [info] GTID ON
Fri Aug 31 14:06:45 2018 - [info] Replicating from 192.168.222.172(192.168.222.172:3306)
Fri Aug 31 14:06:45 2018 - [info] 192.168.222.174(192.168.222.174:3306) Version=5.7.18-log (oldest major version between slaves) log-bin:enabled
Fri Aug 31 14:06:45 2018 - [info] GTID ON
Fri Aug 31 14:06:45 2018 - [info] Replicating from 192.168.222.172(192.168.222.172:3306)
Fri Aug 31 14:06:45 2018 - [info] Current Alive Master: 192.168.222.172(192.168.222.172:3306)
Fri Aug 31 14:06:45 2018 - [info] Checking slave configurations..
Fri Aug 31 14:06:45 2018 - [info] Checking replication filtering settings..
Fri Aug 31 14:06:45 2018 - [info] binlog_do_db= , binlog_ignore_db=
Fri Aug 31 14:06:45 2018 - [info] Replication filtering check ok.
Fri Aug 31 14:06:45 2018 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Fri Aug 31 14:06:45 2018 - [info] Checking SSH publickey authentication settings on the current master..
Fri Aug 31 14:06:45 2018 - [info] HealthCheck: SSH to 192.168.222.172 is reachable.
Fri Aug 31 14:06:45 2018 - [info]
192.168.222.172(192.168.222.172:3306) (current master)
+--192.168.222.173(192.168.222.173:3306)
+--192.168.222.174(192.168.222.174:3306)
Fri Aug 31 14:06:45 2018 - [info] Checking master_ip_failover_script status:
Fri Aug 31 14:06:45 2018 - [info] /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.222.172 --orig_master_ip=192.168.222.172 --orig_master_port=3306
IN SCRIPT TEST====/sbin/ifconfig ens224:1 down==/sbin/ifconfig ens224:1 192.168.222.99/24===
Checking the Status of the script.. OK
Fri Aug 31 14:06:45 2018 - [info] OK.
Fri Aug 31 14:06:45 2018 - [warning] shutdown_script is not defined.
Fri Aug 31 14:06:45 2018 - [info] Set master ping interval 3 seconds.
Fri Aug 31 14:06:45 2018 - [info] Set secondary check script: masterha_secondary_check -s 192.168.222.172 -s 192.168.222.173 -s 192.168.222.174
Fri Aug 31 14:06:45 2018 - [info] Starting ping health check on 192.168.222.172(192.168.222.172:3306)..
Fri Aug 31 14:06:45 2018 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond.. -- indicates monitoring has started
- session B
[root@node05 software]# masterha_check_status --conf=/etc/mha.cnf
mha (pid:25386) is running(0:PING_OK), master:192.168.222.172
[root@node05 software]#
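For monitoring scripts, the one-line output of masterha_check_status is easy to parse. A sketch (the `mha_master` helper name is ours; the two output formats are the ones shown above):

```shell
# Read masterha_check_status output on stdin; succeed only on PING_OK
# and print the current master address.
mha_master() {
  awk '/PING_OK/ { sub(/.*master:/, ""); print; found = 1 } END { exit !found }'
}

# Hypothetical usage:
#   masterha_check_status --conf=/etc/mha.cnf | mha_master
```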
4. Start Cetus
- Before starting, make sure the startup configuration files have permissions 660
[root@node05 src]# chmod 660 /usr/local/cetus/conf/*
[root@node05 src]# ll /usr/local/cetus/conf/
total 28
-rw-rw---- 1 root root 1053 Aug 31 12:12 proxy.conf
-rw-rw---- 1 root root 1011 Aug 31 09:21 proxy.conf.example
-rw-rw---- 1 root root 1199 Aug 31 09:21 shard.conf.example
-rw-rw---- 1 root root 1030 Aug 31 09:21 sharding.json.example
-rw-rw---- 1 root root 189 Aug 31 12:04 users.json
-rw-rw---- 1 root root 198 Aug 31 09:21 users.json.example
- Start Cetus in daemon mode
- session A
[root@node05 src]# /usr/local/cetus/bin/cetus --defaults-file=/usr/local/cetus/conf/proxy.conf
[root@node05 src]# ps -ef |grep cetus
root 26580 1 0 14:23 ? 00:00:00 /usr/local/cetus/libexec/cetus --defaults-file=/usr/local/cetus/conf/proxy.conf
root 26581 26580 0 14:23 ? 00:00:00 /usr/local/cetus/libexec/cetus --defaults-file=/usr/local/cetus/conf/proxy.conf
root 26624 2272 0 14:23 pts/0 00:00:00 grep --color=auto cetus
[root@node05 src]#
- Check Cetus status
- session B
[root@node05 src]# mysql -ugcdb -piforgot -h192.168.222.175 -P13306
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.7.18-log (cetus) MySQL Community Server (GPL)
Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
([email protected]) 14:26:55 [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| proxy_heart_beat |
| sys |
| ttt |
+--------------------+
6 rows in set (0.00 sec)
([email protected]) 14:27:02 [(none)]> exit
Bye
[root@node05 src]# mysql -uadmin -padmin -h192.168.222.175 -P23306
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.7 admin
Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
([email protected]) 14:27:41 [(none)]> select conn_details from backends;select * from backends;
+-------------+----------+------------+------------+-------------+
| backend_ndx | username | idle_conns | used_conns | total_conns |
+-------------+----------+------------+------------+-------------+
| 0 | gcdb | 100 | 0 | 100 |
| 1 | gcdb | 100 | 0 | 100 |
| 2 | gcdb | 100 | 0 | 100 |
+-------------+----------+------------+------------+-------------+
3 rows in set (0.00 sec)
+-------------+----------------------+-------+------+-------------+------+------------+------------+-------------+
| backend_ndx | address | state | type | slave delay | uuid | idle_conns | used_conns | total_conns |
+-------------+----------------------+-------+------+-------------+------+------------+------------+-------------+
| 1 | 192.168.222.172:3306 | up | rw | NULL | NULL | 100 | 0 | 100 |
| 2 | 192.168.222.173:3306 | up | ro | 426 | NULL | 100 | 0 | 100 |
| 3 | 192.168.222.174:3306 | up | ro | 427 | NULL | 100 | 0 | 100 |
+-------------+----------------------+-------+------+-------------+------+------------+------------+-------------+
3 rows in set (0.00 sec)
([email protected]) 14:58:37 [(none)]> cetus;
+--------------------------+---------------------+
| Status | Value |
+--------------------------+---------------------+
| Cetus version | v1.0.0-44-g5b1bd43 |
| Startup time | 2018-08-31 14:23:07 |
| Loaded modules | proxy admin |
| Idle backend connections | 300 |
| Used backend connections | 0 |
| Client connections | 4 |
| Query count | 5 |
| QPS (1min, 5min, 15min) | 0.00, 0.01, 0.00 |
| TPS (1min, 5min, 15min) | 0.00, 0.00, 0.00 |
+--------------------------+---------------------+
9 rows in set (0.00 sec)
([email protected]) 14:58:39 [(none)]>
5. Verify Cetus read/write splitting
- session A
[root@node05 software]# mysql -ugcdb -piforgot -h192.168.222.175 -P13306
([email protected]) 15:18:37 [(none)]> select sleep(10) from ttt.t1;
- session B
([email protected]) 15:18:44 [(none)]> select conn_details from backends;select * from backends;
+-------------+----------+------------+------------+-------------+
| backend_ndx | username | idle_conns | used_conns | total_conns |
+-------------+----------+------------+------------+-------------+
| 0 | gcdb | 100 | 0 | 100 |
| 1 | gcdb | 100 | 0 | 100 |
| 2 | gcdb | 99 | 1 | 100 | -- read load on the backend_ndx=3 host
+-------------+----------+------------+------------+-------------+
3 rows in set (0.00 sec)
+-------------+----------------------+-------+------+-------------+------+------------+------------+-------------+
| backend_ndx | address | state | type | slave delay | uuid | idle_conns | used_conns | total_conns |
+-------------+----------------------+-------+------+-------------+------+------------+------------+-------------+
| 1 | 192.168.222.172:3306 | up | rw | NULL | NULL | 100 | 0 | 100 |
| 2 | 192.168.222.173:3306 | up | ro | 403 | NULL | 100 | 0 | 100 |
| 3 | 192.168.222.174:3306 | up | ro | 404 | NULL | 99 | 1 | 100 | -- read load on 192.168.222.174
+-------------+----------------------+-------+------+-------------+------+------------+------------+-------------+
3 rows in set (0.00 sec)
([email protected]) 15:18:46 [(none)]>
- session A
([email protected]) 15:18:37 [(none)]> select sleep(10) from ttt.t1;
+-----------+
| sleep(10) |
+-----------+
| 0 |
| 0 |
| 0 |
| 0 |
+-----------+
4 rows in set (40.00 sec)
([email protected]) 15:19:20 [(none)]>
#run the query again to see which backend server gets the read
([email protected]) 15:19:20 [(none)]> select sleep(10) from ttt.t1;
+-----------+
| sleep(10) |
+-----------+
| 0 |
| 0 |
| 0 |
| 0 |
+-----------+
4 rows in set (40.01 sec)
- session B
([email protected]) 15:19:22 [(none)]> select conn_details from backends;select * from backends;
+-------------+----------+------------+------------+-------------+
| backend_ndx | username | idle_conns | used_conns | total_conns |
+-------------+----------+------------+------------+-------------+
| 0 | gcdb | 100 | 0 | 100 |
| 1 | gcdb | 99 | 1 | 100 | -- read load on the backend_ndx=2 host
| 2 | gcdb | 100 | 0 | 100 |
+-------------+----------+------------+------------+-------------+
3 rows in set (0.00 sec)
+-------------+----------------------+-------+------+-------------+------+------------+------------+-------------+
| backend_ndx | address | state | type | slave delay | uuid | idle_conns | used_conns | total_conns |
+-------------+----------------------+-------+------+-------------+------+------------+------------+-------------+
| 1 | 192.168.222.172:3306 | up | rw | NULL | NULL | 100 | 0 | 100 |
| 2 | 192.168.222.173:3306 | up | ro | 783 | NULL | 99 | 1 | 100 | -- read load on 192.168.222.173
| 3 | 192.168.222.174:3306 | up | ro | 432 | NULL | 100 | 0 | 100 |
+-------------+----------------------+-------+------+-------------+------+------------+------------+-------------+
3 rows in set (0.00 sec)
- Verify a simple read and write
([email protected]) 15:23:11 [(none)]> select sleep(5);insert into ttt.t1(nums) values(6),(7),(8),(9),(10);
+----------+
| sleep(5) |
+----------+
| 0 |
+----------+
1 row in set (5.00 sec)
Query OK, 5 rows affected (0.05 sec)
Records: 5 Duplicates: 0 Warnings: 0
([email protected]) 15:30:35 [(none)]> select * from ttt.t1;
+----+------+
| id | nums |
+----+------+
| 1 | 1 |
| 2 | 2 |
| 3 | 100 |
| 5 | 5 |
| 6 | 6 |
| 7 | 7 |
| 8 | 8 |
| 9 | 9 |
| 10 | 10 |
+----+------+
9 rows in set (0.01 sec)
([email protected]) 15:31:02 [(none)]>
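The behavior observed above follows the usual split rule: plain SELECTs go to a read-only backend, while writes (and, in proxies like Cetus, statements inside explicit transactions and SELECT ... FOR UPDATE) go to the master. A toy approximation of that rule, not Cetus's actual parser:

```shell
# Classify a single SQL statement as ro (slave) or rw (master).
route() {
  q=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$q" in
    select*"for update"*) echo rw ;;   # locking read: master
    select*)              echo ro ;;   # plain read: a read-only backend
    *)                    echo rw ;;   # writes, DDL, BEGIN...: master
  esac
}

route "select * from ttt.t1"
route "insert into ttt.t1(nums) values(6)"
```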
V. Coordinated MHA and Cetus Switchover
1. Add mail alerting
- Modify the configuration
[root@node05 software]# vim /etc/mail.rc
# append the following at the end
set [email protected] # sender information
set smtp=smtp.163.com # mail server address; the mailbox is a 163 account, hence smtp.163.com
set [email protected] # sender mailbox address
set smtp-auth-password=xxxxxx # note: this is the client authorization code, not the mailbox login password
set smtp-auth=login # mail authentication method
- Start sendmail and test
[root@node05 ~]# systemctl start sendmail
[root@node05 ~]# mail -s "status" [email protected] < /etc/hosts
- Modify the sendmail script
[root@node05 ~]# cat /usr/local/share/perl5/MHA/sendMail.sh
#!/bin/bash
## author : cch
## desc: sendmail
############ variable part ########################
conf=/masterha/app1/sample.cnf
if [ $# -ne 1 ];then
mailsubject="mha--failover--`date +%Y%m%d%H%M`"
else
mailsubject=$1
fi
############# main #########################
find_flag=`cat $conf|grep -v '^#'|grep "manager_workdir"|awk -F= '{print $2}'|wc -l`
if [ ${find_flag} -eq 1 ];then
manager_workdir=`cat $conf|grep -v '^#'|grep "manager_workdir"|awk -F= '{print $2}'|sed 's/ //g'`
fi
#java SendMail service.netease.com ${manager_workdir}/to_list_mha.txt "${mailsubject}" ${manager_workdir}/to_list_filename ${manager_workdir}/sendmail.txt
#change the recipient address
mail -s "${mailsubject}" [email protected] < ${manager_workdir}/sendmail.txt
echo `date` >> ./sendMail.log
[root@node05 ~]#