K8s-scheduler cluster.06-4
阿新 • Published: 2019-01-03
tags: master, kube-scheduler
06-4. Deploying a highly available kube-scheduler cluster
The cluster contains 3 nodes. After startup, a leader node is chosen through a competitive election; the other nodes stay in a blocked (standby) state. When the leader becomes unavailable, the remaining nodes elect a new leader, keeping the service available.
To secure communication, this document first generates an x509 certificate and private key. kube-scheduler uses the certificate in two cases:
- communicating with kube-apiserver's secure port;
- serving Prometheus-format metrics on port 10251 (in the version deployed here this port actually serves plain HTTP; see the --address note below);
Preparation
For downloading the latest binaries and installing/configuring flanneld, refer to: K8s-部署master節點.06
Create the kube-scheduler certificate and private key
```
cat > kube-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "172.27.129.101",
      "172.27.129.102",
      "172.27.129.103"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "4Paradigm"
      }
    ]
}
EOF
```
- The hosts list contains the IPs of all kube-scheduler nodes;
- CN is system:kube-scheduler and O is system:kube-scheduler; the Kubernetes built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs to do its work.
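As a quick sanity check before generating the certificate, you can grep the CSR file to confirm every scheduler node IP made it into the hosts list. A minimal sketch, assuming kube-scheduler-csr.json sits in the current directory; adjust the IP list to match your own masters:

```shell
# Sketch: confirm each scheduler node IP appears in the CSR hosts list.
for ip in 127.0.0.1 172.27.129.101 172.27.129.102 172.27.129.103; do
  grep -q "\"${ip}\"" kube-scheduler-csr.json \
    && echo "${ip}: present" \
    || echo "${ip}: MISSING from hosts"
done
```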
Generate the certificate and private key:
```
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
```
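Optionally, inspect the certificate cfssljson just wrote and confirm the subject really carries the identity that RBAC keys on. A sketch using openssl, assuming the files were generated in the current directory:

```shell
# Sketch: the subject must contain CN/O = system:kube-scheduler, since the
# built-in ClusterRoleBinding matches on that identity.
openssl x509 -in kube-scheduler.pem -noout -subject 2>/dev/null \
  | grep -q 'system:kube-scheduler' \
  && echo "subject OK" \
  || echo "subject check failed (is kube-scheduler.pem present?)"
```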
Create and distribute the kubeconfig file
The kubeconfig file contains everything needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate;
```
source /opt/k8s/bin/environment.sh

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
```
- The certificate and private key created in the previous step, together with the kube-apiserver address, are written into the kubeconfig file;
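Before shipping the file out, a quick grep can confirm that the context was selected and the certificates were embedded rather than referenced by path. A sketch; the field names are standard kubeconfig keys:

```shell
# Sketch: the kubeconfig should select the system:kube-scheduler context and
# carry base64-embedded cert data (the effect of --embed-certs=true).
grep -q 'current-context: system:kube-scheduler' kube-scheduler.kubeconfig \
  && echo "context OK" || echo "current-context not set"
grep -q 'client-certificate-data:' kube-scheduler.kubeconfig \
  && echo "client cert embedded" || echo "client cert not embedded"
```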
Distribute the kubeconfig to all master nodes:
```
source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IP[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-scheduler.kubeconfig k8s@${master_ip}:/etc/kubernetes/
  done
```
Create and distribute the kube-scheduler systemd unit file
```
cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-scheduler \\
  --address=127.0.0.1 \\
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --leader-elect=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --v=2
Restart=on-failure
RestartSec=5
User=k8s

[Install]
WantedBy=multi-user.target
EOF
```
- --address: serve http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support serving https;
- --kubeconfig: path to the kubeconfig file, which kube-scheduler uses to connect to and authenticate with kube-apiserver;
- --leader-elect=true: cluster mode with leader election enabled; the node elected leader does the work while the other nodes stay blocked (standby);
- User=k8s: run as the k8s account;
Distribute the systemd unit file to all master nodes:
```
source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IP[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-scheduler.service root@${master_ip}:/etc/systemd/system/
  done
```
Start the kube-scheduler service
```
source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IP[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
  done
```
- The log directory must be created before the service starts;
Check the service status
```
source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IP[@]}
  do
    echo ">>> ${master_ip}"
    ssh k8s@${master_ip} "systemctl status kube-scheduler | grep Active"
  done
```
Make sure the status is active (running); otherwise inspect the logs to find the cause:
```
journalctl -u kube-scheduler
```
View the exported metrics
Note: run the following commands on a kube-scheduler node.
kube-scheduler listens on port 10251 and accepts http requests:
```
$ sudo netstat -lnpt | grep kube-sche
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      23783/kube-schedule
```
```
$ curl -s http://127.0.0.1:10251/metrics | head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 9.7715e-05
go_gc_duration_seconds{quantile="0.25"} 0.000107676
go_gc_duration_seconds{quantile="0.5"} 0.00017868
go_gc_duration_seconds{quantile="0.75"} 0.000262444
go_gc_duration_seconds{quantile="1"} 0.001205223
```
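If a script needs a single value out of that text format, plain awk is enough. The sketch below inlines the first metric from the output above so the filter can be tried anywhere; on a scheduler node you would pipe `curl -s http://127.0.0.1:10251/metrics` into awk instead:

```shell
# Sketch: Prometheus text exposition format is "name{labels} value";
# comment lines start with '#'.
metrics='# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0'

echo "$metrics" | awk '$1 == "apiserver_audit_event_total" { print $2 }'   # → 0
```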
Test kube-scheduler cluster high availability
Pick one or two master nodes, stop the kube-scheduler service on them, and check whether another node acquires the leader role (watch the systemd logs).
View the current leader
```
$ kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master-0002_61f34593-6cc8-11e8-8af7-5254002f288e","leaseDurationSeconds":15,"acquireTime":"2018-06-10T16:09:56Z","renewTime":"2018-06-10T16:20:54Z","leaderTransitions":1}'
  creationTimestamp: 2018-06-10T16:07:33Z
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "4645"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 62382d98-6cc8-11e8-96fa-525400ba84c6
```
As shown, the current leader is the k8s-master-0002 node.
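To grab just the leader's node name instead of reading the whole YAML, pull the annotation value and strip the holderIdentity suffix. The sketch below inlines the sample annotation from the output above; on a live cluster, feed it in with the commented kubectl line instead:

```shell
# On a live cluster:
# annotation="$(kubectl -n kube-system get endpoints kube-scheduler \
#   -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}')"
annotation='{"holderIdentity":"k8s-master-0002_61f34593-6cc8-11e8-8af7-5254002f288e","leaseDurationSeconds":15,"acquireTime":"2018-06-10T16:09:56Z","renewTime":"2018-06-10T16:20:54Z","leaderTransitions":1}'

# holderIdentity is "<node name>_<uid>": extract the value, then drop the suffix.
holder="$(echo "$annotation" | sed -n 's/.*"holderIdentity":"\([^"]*\)".*/\1/p')"
echo "current leader: ${holder%%_*}"   # → current leader: k8s-master-0002
```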