
Reprint: Extra Chapter: Fixing an etcd Service That Fails to Start

Today the mount on a master node in one of our environments went offline. After it was restored, etcd on that node would no longer start.

My guess was that its data had fallen out of sync with the other etcd nodes, so let's reproduce the situation below.

Case study

Check the cluster component status

[root@k8s-master01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
Now log in to the node (192.168.1.20) and delete etcd's data directory to simulate the failure.

The config file tells us where the data directory lives:

[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"   # data directory
ETCD_LISTEN_PEER_URLS="https://192.168.1.20:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.20:2379"

Change into the directory:

cd /var/lib/etcd/default.etcd

Clear the data (or back it up first):

rm -rf *
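
If you would rather keep a copy than delete outright, a minimal sketch (the dated destination name is only an illustration):

# Alternative to rm: move the whole data directory aside with a dated
# suffix so it can be inspected or restored later
mv /var/lib/etcd/default.etcd /var/lib/etcd/default.etcd.bak-$(date +%F)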

Restart the service and check its status:

systemctl restart etcd
systemctl status etcd
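
When the unit fails, the journal usually shows why; a quick way to read the most recent etcd log lines:

# Show the last 50 etcd log lines; with a wiped data directory you will
# typically see errors about cluster membership or a cluster ID mismatch
journalctl -u etcd -n 50 --no-pager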

Solution

If you followed my earlier deployment guide, you may not have put the etcdctl command on the global PATH, so one extra step is needed.

The binary was never copied into place before; add it now:

cp /opt/etcd/bin/etcdctl /usr/bin/
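
Optionally, since etcdctl v3 also reads its connection settings from ETCDCTL_* environment variables, the TLS flags can be exported once per shell session instead of being repeated in every command (shown only as a convenience; the commands below keep the explicit flags):

# etcdctl v3 honours ETCDCTL_* environment variables, so the TLS
# options can be set once per shell instead of on every invocation
export ETCDCTL_CACERT=/opt/etcd/ssl/ca.pem
export ETCDCTL_CERT=/opt/etcd/ssl/server.pem
export ETCDCTL_KEY=/opt/etcd/ssl/server-key.pem
export ETCDCTL_ENDPOINTS='https://192.168.1.21:2379'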
Check the etcd cluster status

Point the command at a surviving etcd node:

etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints='https://192.168.1.21:2379' \
member list

Parameter notes:

--cacert / --cert / --key                  # the CA certificate plus the server certificate and private key
--endpoints='https://192.168.1.21:2379'    # a surviving etcd node to talk to
Output:

22cb69b2fd1bb417, started, etcd-2, https://192.168.1.21:2380, https://192.168.1.21:2379, false
3c3bd4fd7d7e553e, started, etcd-3, https://192.168.1.22:2380, https://192.168.1.22:2379, false
5a224bcd35cc7d02, started, etcd-1, https://192.168.1.20:2380, https://192.168.1.20:2379, false
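
The failed node etcd-1 has ID 5a224bcd35cc7d02. If you would rather not copy the ID by hand, a small illustrative sketch that pulls it out of the listing (same flags as above; the awk pattern simply matches the member name):

# Extract the member ID for etcd-1 from the comma-separated listing
etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints='https://192.168.1.21:2379' \
member list | awk -F', ' '$3 == "etcd-1" {print $1}'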
Evict the node whose service cannot start

etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints='https://192.168.1.21:2379' \
member remove 5a224bcd35cc7d02

Use the ID that corresponds to your own failed node here.

Verify that it has been removed:

etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints='https://192.168.1.21:2379' \
member list

Output:

22cb69b2fd1bb417, started, etcd-2, https://192.168.1.21:2380, https://192.168.1.21:2379, false
3c3bd4fd7d7e553e, started, etcd-3, https://192.168.1.22:2380, https://192.168.1.22:2379, false

Only two members remain.

Re-add the node

etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints='https://192.168.1.21:2379' \
member add etcd-1 --peer-urls=https://192.168.1.20:2380

The name after add is the etcd node's name and must match the name in that node's config file.

Because the node rejoins with the same IP, the certificates do not need to be regenerated. Note that a member joining an existing cluster must also start from an empty data directory, which is already the case here since we wiped it earlier.

Output:

ETCD_NAME="etcd-1"
ETCD_INITIAL_CLUSTER="etcd-2=https://192.168.1.21:2380,etcd-3=https://192.168.1.22:2380,etcd-1=https://192.168.1.20:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.20:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

Check the status:

etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints='https://192.168.1.22:2379' \
member list

Output:

22cb69b2fd1bb417, started, etcd-2, https://192.168.1.21:2380, https://192.168.1.21:2379, false
3c3bd4fd7d7e553e, started, etcd-3, https://192.168.1.22:2380, https://192.168.1.22:2379, false
841bd1ec499f60a2, unstarted, , https://192.168.1.20:2380, , false

The new member shows as unstarted because its service has not been brought up yet.

Restart etcd (on the node that failed to start)

vim /opt/etcd/cfg/etcd.conf

Review the file:

[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.20:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.20:2379"

[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.20:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.20:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.1.20:2380,etcd-2=https://192.168.1.21:2380,etcd-3=https://192.168.1.22:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"   # change this to "existing"
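
The edit can also be made non-interactively; a one-line sketch against the same file path as above:

# Flip the cluster state from "new" to "existing" in place
sed -i 's/ETCD_INITIAL_CLUSTER_STATE="new"/ETCD_INITIAL_CLUSTER_STATE="existing"/' /opt/etcd/cfg/etcd.conf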
Start the service:

systemctl restart etcd

Check the cluster status:

etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints='https://192.168.1.22:2379' \
member list

Output:

22cb69b2fd1bb417, started, etcd-2, https://192.168.1.21:2380, https://192.168.1.21:2379, false
3c3bd4fd7d7e553e, started, etcd-3, https://192.168.1.22:2380, https://192.168.1.22:2379, false
841bd1ec499f60a2, started, etcd-1, https://192.168.1.20:2380, https://192.168.1.20:2379, false
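
member list reports membership, not liveness; to confirm each member actually answers, etcdctl's endpoint health subcommand can be pointed at all three nodes (same flags as above):

# Query every member's health endpoint in one call
etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints='https://192.168.1.20:2379,https://192.168.1.21:2379,https://192.168.1.22:2379' \
endpoint health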

Check the component status:

[root@k8s-master01 cfg]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
————————————————
Copyright notice: This is an original article by CSDN blogger 默子昂, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reprinting.
Original link: https://blog.csdn.net/qq_42883074/article/details/112789206