
Manual Installation of a TiDB Cluster

Installing TiDB

TiDB is an open-source distributed HTAP (Hybrid Transactional and Analytical Processing) database designed by PingCAP, inspired by Google's Spanner/F1 papers, combining the best features of a traditional RDBMS and NoSQL. TiDB is MySQL-compatible, supports unlimited horizontal scaling, and offers strong consistency and high availability. (From the official site.)
You connect to tidb with an ordinary MySQL client tool, and the commands work the same way as in MySQL, so there is no steep learning or development cost.
The official TiDB site currently strongly recommends deploying with Ansible, which gets a cluster up quickly and conveniently. But to understand the whole architecture better, it is worth deploying by hand once.

TiDB architecture (diagram from the official site):

Downloading the TiDB packages via the tidb-ansible repository:
git clone https://github.com/pingcap/tidb-ansible.git
cd tidb-ansible
ansible-playbook local_prepare.yml
cd downloads    (here you can see the downloaded TiDB packages, tools, and so on)

Or download directly:
http://download.pingcap.org/tidb-latest-linux-amd64-unportable.tar.gz
Or from this mirror: https://pan.baidu.com/s/1hkQ_fsbXTZjzFXGX2sRyGA (password: fj16)

After downloading and extracting the TiDB package by hand, you can see its directory layout. Besides the servers, the TiDB project ships a number of tools (checker, dump_region, loader, syncer) for exporting data from MySQL and importing it into TiDB, replicating in real time from the binlog, and so on.

Installing TiDB means installing three kinds of nodes: tipd, tikv, and tidb. The start order is: tipd > tikv > tidb.
tipd node: management node; holds the cluster metadata and schedules/balances data across the tikv nodes.
tikv node: storage node; holds the actual data, and can keep multiple replicas for redundancy.
tidb node: handles client connections and computation; holds no data and no state.

1. Environment:
OS: CentOS 6.6 (the official site recommends CentOS 7; on CentOS 6 glibc must be upgraded to 2.17 or later).
Disk: TiDB is optimized for SSDs only, so SSDs are recommended.
Go environment: go1.10.2 linux/amd64 or later is recommended. (TiDB is written in Go; a Go toolchain is needed only when compiling from source. The binary release just needs to be extracted.)

2. Directory and file conventions:
Following my usual installation habits, I first plan the install directory, data directories, and file names:
Install under /usr/local; after extraction the directory is /usr/local/tidb-2.0.4, then:
ln -s /usr/local/tidb-2.0.4 tidb
conf holds the config files, named tipd_<port>.conf, tikv_<port>.conf, and tidb_<port>.conf.
tools holds TiDB's additional tools.
Data directories: mkdir /data_db3/tidb/, with db, kv, and pd subdirectories, each containing one directory per port.

Node plan: 3 machines (192.168.100.73, 192.168.100.74, 192.168.100.75)
tipd cluster: 192.168.100.73, 192.168.100.74, 192.168.100.75
tikv data nodes: 192.168.100.73, 192.168.100.74, 192.168.100.75
tidb node: 192.168.100.75

3. Configuring the tipd nodes (cluster mode):
/usr/local/tidb-2.0.4/conf/tipd_4205.conf:

client-urls="http://192.168.100.75:4205"
name="pd3"
data-dir="/data_db3/tidb/pd/4205/"
peer-urls="http://192.168.100.75:4206"
initial-cluster="pd1=http://192.168.100.74:4202,pd2=http://192.168.100.73:4204,pd3=http://192.168.100.75:4206"
log-file="/data_db3/tidb/pd/4205_run.log"

Notes: the name must match the corresponding "pd3=" entry in initial-cluster.
A pd node uses two kinds of URLs: peer-urls is the port the tipd cluster members use to talk to each other (health checks and so on), and client-urls is the port that tikv and other clients talk to.
log-file names the log file; one odd rule is that the log directory must not be placed underneath data-dir.

With the config file in place, start the node:
/usr/local/tidb-2.0.4/bin/pd-server --config=/usr/local/tidb-2.0.4/conf/tipd_4205.conf

The first tipd node to start initializes the cluster. If a tipd node fails with errors, delete the contents of its data-dir before re-initializing. (pd1 and pd2 are installed the same way as pd3.)

With the tipd nodes installed, connect pd-ctl to one of the client URLs to inspect the cluster:
./pd-ctl -u http://192.168.100.75:4205
help lists the available commands, and "help <command>" shows a command's options; for example, "help member" shows the member operations (deleting a tipd node, setting leader priority, and so on). "config show" displays the cluster configuration, and "health" checks the health of the nodes.

4. Installing the tikv nodes:
tikv nodes are where the data actually lives, and most parameter tuning happens on them.
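The directory and naming conventions above can be captured in a small shell helper. This is just a sketch of my layout plan; the make_layout function and its arguments are my own illustration, while the paths and ports mirror the node plan above:

```shell
#!/bin/sh
# make_layout BASE DATA: create the install/data layout described above.
# BASE is where tidb-2.0.4 is extracted (e.g. /usr/local),
# DATA is the data root (e.g. /data_db3/tidb).
make_layout() {
    base=$1 data=$2 ver=tidb-2.0.4

    # install dir; conf for <component>_<port>.conf files, tools for extras
    mkdir -p "$base/$ver/conf" "$base/$ver/tools"
    # version-independent symlink: /usr/local/tidb -> /usr/local/tidb-2.0.4
    ln -sfn "$base/$ver" "$base/tidb"

    # one data directory per component and port,
    # matching the node plan: pd on 4205, kv on 4402, db on 4001
    for d in pd/4205 kv/4402 db/4001; do
        mkdir -p "$data/$d"
    done
}

# Example (run as root on a real host):
# make_layout /usr/local /data_db3/tidb
```

On a multi-instance host you would simply list more port directories in the loop, one per planned tipd/tikv/tidb process.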
Config file with the main settings (/usr/local/tidb-2.0.4/conf/tikv_4402.conf):

log-level = "info"
log-file = "/data_db3/tidb/kv/4402/run.log"

[server]
addr = "192.168.100.74:4402"

[storage]
data-dir = "/data_db3/tidb/kv/4402"
scheduler-concurrency = 1024000
scheduler-worker-pool-size = 100
#labels = {zone = "ZONE1", host = "10074"}

[pd]
# the tipd nodes; these addresses are the tipd client-urls
endpoints = ["192.168.100.73:4203","192.168.100.74:4201","192.168.100.75:4205"]

[metric]
interval = "15s"
address = ""
job = "tikv"

[raftstore]
sync-log = false
region-max-size = "384MB"
region-split-size = "256MB"

[rocksdb]
max-background-jobs = 28
max-open-files = 409600
max-manifest-file-size = "20MB"
compaction-readahead-size = "20MB"

[rocksdb.defaultcf]
block-size = "64KB"
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 10
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"

[rocksdb.writecf]
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"

[raftdb]
max-open-files = 409600
compaction-readahead-size = "20MB"

[raftdb.defaultcf]
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"
block-cache-size = "10G"

[import]
import-dir = "/data_db3/tidb/kv/4402/import"
num-threads = 8
stream-channel-window = 128

(These values are my own and have not been tuned against production load.)

Note:
When installing multiple tikv instances on one machine, apply labels so that replicas of the same data are not stored on the same machine:

tikv-server --labels zone=<zone>,rack=<rack>,host=<host>,disk=<disk>

There are four label levels:
zone: data center
rack: rack
host: machine
disk: disk
TiDB works from the largest level down to the smallest, trying not to place replicas of the same data in the same place.

Start tikv:
/usr/local/tidb-2.0.4/bin/tikv-server --config=/usr/local/tidb-2.0.4/conf/tikv_4402.conf

If it starts without errors, use the pd-ctl cluster tool to check that the tikv instance has joined the cluster as a store:

./pd-ctl -u http://192.168.100.75:4205
» store
{
  "store": {
    "id": 30,
    "address": "192.168.100.74:4402",
    "state_name": "Up"
  },
  "status": {
    "capacity": "446 GiB",
    "available": "63 GiB",
    "leader_count": 1301,
    "leader_weight": 1,
    "leader_score": 307618,
    "leader_size": 307618,
    "region_count": 2638,
    "region_weight": 1,
    "region_score": 1073677587.6132812,
    "region_size": 615726,
    "start_ts": "2018-06-26T10:33:17+08:00",
    "last_heartbeat_ts": "2018-07-17T11:27:17.074373767+08:00",
    "uptime": "504h54m0.074373767s"
  }
}

5. Configuring the tidb-server node:
tidb nodes handle client connections and computation. They are normally started after the tipd and tikv nodes; otherwise they cannot start.

Main settings in /usr/local/tidb-2.0.4/conf/tidb_4001.conf:

host = "0.0.0.0"
port = 4001
# storage type: tikv
store = "tikv"
# the tipd nodes; these addresses are the tipd client-urls
path = "192.168.100.74:4201,192.168.100.73:4203,192.168.100.75:4205"
socket = ""
run-ddl = true
lease = "45s"
split-table = true
token-limit = 1000
oom-action = "log"
enable-streaming = false
lower-case-table-names = 2

[log]
level = "info"
format = "text"
disable-timestamp = false
slow-query-file = ""
slow-threshold = 300
expensive-threshold = 10000
query-log-max-len = 2048

[log.file]
filename = "/data_db3/tidb/db/4001/tidb.log"
max-size = 300
max-days = 0
max-backups = 0
log-rotate = true

[security]
ssl-ca = ""
ssl-cert = ""
ssl-key = ""
cluster-ssl-ca = ""
cluster-ssl-cert = ""
cluster-ssl-key = ""

[status]
report-status = true
status-port = 10080    # port for reporting tidb status
metrics-addr = ""
metrics-interval = 15

[performance]
max-procs = 0
stmt-count-limit = 5000
tcp-keep-alive = true
cross-join = true
stats-lease = "3s"
run-auto-analyze = true
feedback-probability = 0.05
query-feedback-limit = 1024
pseudo-estimate-ratio = 0.8
[proxy-protocol]
networks = ""
header-timeout = 5

[plan-cache]
enabled = false
capacity = 2560
shards = 256

[prepared-plan-cache]
enabled = false
capacity = 100

[opentracing]
enable = false
rpc-metrics = false

[opentracing.sampler]
type = "const"
param = 1.0
sampling-server-url = ""
max-operations = 0
sampling-refresh-interval = 0

[opentracing.reporter]
queue-size = 0
buffer-flush-interval = 0
log-spans = false
local-agent-host-port = ""

[tikv-client]
grpc-connection-count = 16
commit-timeout = "41s"

[txn-local-latches]
enabled = false
capacity = 1024000

[binlog]
binlog-socket = ""

Start tidb:
/usr/local/tidb-2.0.4/bin/tidb-server --config=/usr/local/tidb-2.0.4/conf/tidb_4001.conf

One problem turned up here: the log-file setting in the config file does not take effect (I do not yet know why). You can work around it by passing the log file on the command line:
/usr/local/tidb-2.0.4/bin/tidb-server --config=/usr/local/tidb-2.0.4/conf/tidb_4001.conf --log-file=/data_db3/tidb/db/4001/tidb.log

With tidb installed, you can now inspect everything through a MySQL client tool. The commands are basically the same as MySQL's, the built-in views are similar, and the MySQL protocol is supported. In one sentence: use tidb as if it were MySQL.
A freshly initialized tidb has a root account with no password:
mysql -h 192.168.100.75 -uroot -P 4001

At this point the TiDB installation is complete.
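Since every component's binary and config file follow the same <component>_<port> naming scheme used throughout this post, the start commands for all three node types can be generated uniformly. A small sketch — the start_cmd helper is my own illustration; only the paths and file names come from the conventions above — which also documents the required start order:

```shell
#!/bin/sh
# start_cmd COMPONENT PORT: print the start command for a node,
# following the <component>_<port>.conf naming convention.
# Required start order: pd first, then kv, then db.
start_cmd() {
    home=/usr/local/tidb-2.0.4
    case $1 in
        pd) bin=pd-server;   conf=tipd_$2.conf ;;
        kv) bin=tikv-server; conf=tikv_$2.conf ;;
        db) bin=tidb-server; conf=tidb_$2.conf ;;
        *)  echo "usage: start_cmd pd|kv|db PORT" >&2; return 1 ;;
    esac
    echo "$home/bin/$bin --config=$home/conf/$conf"
}

# Example:
# start_cmd pd 4205
# -> /usr/local/tidb-2.0.4/bin/pd-server --config=/usr/local/tidb-2.0.4/conf/tipd_4205.conf
```

Printing the command (rather than eval-ing it directly) makes it easy to paste into an init script or run through a supervisor of your choice.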
How do you keep MySQL data from various places replicating into TiDB? Three tools: mydumper + loader + syncer.
For example, to replicate the databases "test1","test2","test3","mytab1" in real time from MySQL (192.168.100.56:3345):
1. Import the data with mydumper + loader, and note the binlog position recorded by the dump.
2. Write the syncer config file 100.56_3345.toml (basic settings, replication rules, filter rules, and so on):

log-level = "info"
server-id = 101
# points at the file holding the binlog position to replicate from
meta = "/usr/local/tidb-2.0.4/tools/syncer/100.56_3345.meta"
worker-count = 16
batch = 10
status-addr = "127.0.0.1:10097"
skip-ddls = ["^DROP\\s"]
replicate-do-db = ["test1","test2","test3","mytab1"]

# source MySQL connection
[from]
host = "192.168.100.56"
user = "tidbrepl"
password = "xxxxxx"
port = 3345

# TiDB connection
[to]
host = "192.168.100.75"
user = "root"
password = ""
port = 4001

The /usr/local/tidb-2.0.4/tools/syncer/100.56_3345.meta file:

binlog-name = "mysql-bin.000089"
binlog-pos = 1070520171
binlog-gtid = ""
# the GTID can be left empty at first; syncer rewrites this file periodically while replicating and fills the GTID in.

Start the replication:
/usr/local/tidb-2.0.4/bin/syncer -config /usr/local/tidb-2.0.4/tools/syncer/100.56_3345.toml >> /tmp/logfilexxxxx
It is a good idea to keep this log file (/tmp/logfilexxxxx): when replication fails, the binlog position can be recovered from it.

Original post: https://www.cnblogs.com/vansky/p/9328375.html
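The .meta file can be generated mechanically from the metadata file that mydumper writes alongside the dump. A sketch, assuming the classic mydumper metadata layout (a "SHOW MASTER STATUS:" block with tab-indented "Log:" and "Pos:" lines) — verify the format against your mydumper version before relying on it:

```shell
#!/bin/sh
# meta_from_mydumper FILE: read a mydumper `metadata` file and print
# the three lines syncer expects in its .meta file.
meta_from_mydumper() {
    awk '
        /Log:/ { printf "binlog-name = \"%s\"\n", $2 }
        /Pos:/ { printf "binlog-pos = %s\n", $2 }
    ' "$1"
    # syncer fills the GTID in on its own as it replicates
    echo 'binlog-gtid = ""'
}

# Example:
# meta_from_mydumper /backup/metadata > /usr/local/tidb-2.0.4/tools/syncer/100.56_3345.meta
```

One limitation of this naive match: if the dump was taken from a replica, the metadata file also contains a SHOW SLAVE STATUS block with its own Log:/Pos: lines, which this sketch would print as well; trim the input to the block you want first.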