
Monitoring etcd with kube-prometheus

Some cloud-native applications expose a /metrics endpoint natively, while many older ones do not; for those, an exporter is needed to expose the metrics manually so they can be monitored.

We create an Endpoints object that points at the service exposing the metrics. If the service is already deployed inside the Kubernetes cluster, this Endpoints object most likely exists already.

If we create an Endpoints object by hand, we also need to create a Service with the same name in the same namespace; Kubernetes then links the two automatically.
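
As a quick check (using the names from the etcd manifests later in this post), a hand-made Endpoints object and a Service sharing its name show up linked:

kubectl get endpoints etcd -n kube-system
# Once the manifests below are applied, this should list the etcd member addresses
# (172.16.1.11:2379, 172.16.1.12:2379, 172.16.1.13:2379 in this environment).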

kube-prometheus project repository:

https://github.com/prometheus-operator/kube-prometheus

Lab environment

The Kubernetes cluster was installed from binaries.

[root@master01 ~]# kubectl get node -o wide
NAME       STATUS   ROLES    AGE   VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
master01   Ready    master   34d   v1.19.16   172.16.1.11   <none>        CentOS Linux 7 (Core)   5.15.5-1.el7.elrepo.x86_64   docker://20.10.11
master02   Ready    master   34d   v1.19.16   172.16.1.12   <none>        CentOS Linux 7 (Core)   5.15.5-1.el7.elrepo.x86_64   docker://20.10.11
master03   Ready    master   34d   v1.19.16   172.16.1.13   <none>        CentOS Linux 7 (Core)   5.15.5-1.el7.elrepo.x86_64   docker://20.10.11
node01     Ready    <none>   34d   v1.19.16   172.16.1.14   <none>        CentOS Linux 7 (Core)   5.15.5-1.el7.elrepo.x86_64   docker://20.10.11
node02     Ready    <none>   34d   v1.19.16   172.16.1.15   <none>        CentOS Linux 7 (Core)   5.15.5-1.el7.elrepo.x86_64   docker://20.10.11

How kube-prometheus monitors etcd on the host

How does a ServiceMonitor end up monitoring components that run directly on the host?
1. A Service is created to expose the external service, so that something running outside the Kubernetes cluster becomes reachable through a Service.
2. kube-prometheus then filters out the target Service by its labels and scrapes the data exposed at /metrics through that Service.

In plain terms: the Service is a bridge between the host and kube-prometheus, which lets kube-prometheus collect the data the host exposes.
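
For example, once the manifests below are applied, the label that kube-prometheus filters on can be checked directly (k8s-app=etcd1 is the label used throughout this post):

kubectl get svc -n kube-system -l k8s-app=etcd1
# Should return only the etcd Service; the ServiceMonitor created later selects exactly this label.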

Manually check the data exposed by etcd

curl --cert /etc/etcd/ssl/etcd.pem --key /etc/etcd/ssl/etcd-key.pem https://172.16.1.12:2379/metrics -k | more 
# etcd only serves over HTTPS and client certificate authentication is enabled, so the client certificate and key must be supplied.
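
If you would rather verify the server certificate instead of skipping verification with -k, the etcd CA can be passed explicitly (same certificate paths as above):

curl --cacert /etc/etcd/ssl/ca.pem --cert /etc/etcd/ssl/etcd.pem --key /etc/etcd/ssl/etcd-key.pem https://172.16.1.12:2379/metrics | more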

Approach

Create the etcd Service

# Key details when creating the Service, do not get them wrong: 1. the Endpoints name and the Service name must match; 2. the labels; 3. the etcd node IPs; 4. the port and its name (marked in the manifests below).

# Create the Endpoints and the Service
---
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    k8s-app: etcd1
  name: etcd
  namespace: kube-system
subsets:
- addresses:
  - ip: 172.16.1.11
  - ip: 172.16.1.12
  - ip: 172.16.1.13
  ports:
  - name: etcd  # port name (the ServiceMonitor references this)
    port: 2379  # etcd client port
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: etcd1
  name: etcd
  namespace: kube-system
spec:
  ports:
  - name: etcd
    port: 2379
    protocol: TCP
    targetPort: 2379
  sessionAffinity: None
  type: ClusterIP
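
Assuming the two manifests above are saved together as etcd-service.yaml (the filename is my own choice), apply them with:

kubectl apply -f etcd-service.yaml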

Verify the Service

[root@master01 ~]# kubectl get svc -n kube-system 
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
etcd                                 ClusterIP   10.96.131.104   <none>        2379/TCP                       8m14s
kube-controller-manager-monitoring   ClusterIP   10.96.187.77    <none>        10252/TCP                      20h
kube-dns                             ClusterIP   10.96.0.2       <none>        53/UDP,53/TCP,9153/TCP         34d
kubelet                              ClusterIP   None            <none>        10250/TCP,10255/TCP,4194/TCP   2d6h
ratel                                NodePort    10.96.15.187    <none>        8888:29999/TCP                 8d
scheduler                            ClusterIP   10.96.57.82     <none>        10251/TCP                      20h


## Access etcd through the Service's ClusterIP
curl --cert /etc/etcd/ssl/etcd.pem --key /etc/etcd/ssl/etcd-key.pem https://10.96.131.104:2379/metrics -k | more
# If data comes back, everything is wired up correctly; if not, something went wrong.

……

# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.70909696e+08
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.64147009048e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.1008647168e+10

# Only the tail end of the output is shown here.

Mount the etcd certificates into Prometheus

kubectl -n monitoring create secret generic etcd-certs --from-file=/etc/etcd/ssl/ca.pem  --from-file=/etc/etcd/ssl/etcd.pem  --from-file=/etc/etcd/ssl/etcd-key.pem
# Mount the certificates into Prometheus by adding the secret to the Prometheus custom resource.

# Edit prometheus-prometheus.yaml to mount the secret just created into Prometheus.
# Add the following at the end of spec:
……
spec:
……
  secrets:
  - etcd-certs

kubectl replace -f kube-prometheus/manifests/prometheus-prometheus.yaml
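
To confirm the operator picked up the change, the secrets field can be read back from the Prometheus object (k8s is the default object name in kube-prometheus); the operator also rolls the prometheus-k8s pods, as the AGE column below shows:

kubectl -n monitoring get prometheus k8s -o jsonpath='{.spec.secrets}'
# Expected output: ["etcd-certs"]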

Verify where the certificates are stored inside the Prometheus pod

[root@master01 ~]# kubectl get pod  -n monitoring 
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   12         2d7h
blackbox-exporter-6798fb5bb4-ltnvl     3/3     Running   21         2d21h
grafana-696d8f4f9c-rvzrb               1/1     Running   6          2d6h
kube-state-metrics-85ccd987fc-wzr7v    3/3     Running   19         2d7h
node-exporter-5r52x                    2/2     Running   16         2d21h
node-exporter-948d6                    2/2     Running   16         2d21h
node-exporter-99bwl                    2/2     Running   12         2d21h
node-exporter-kshxd                    2/2     Running   14         2d21h
node-exporter-t4r2p                    2/2     Running   18         2d21h
prometheus-adapter-67cfd8b5f6-m4spb    1/1     Running   7          2d7h
prometheus-adapter-67cfd8b5f6-pcd9z    1/1     Running   6          2d7h
prometheus-k8s-0                       2/2     Running   0          5m19s
prometheus-k8s-1                       2/2     Running   0          5m44s
prometheus-operator-7ddc6877d5-rwc2f   2/2     Running   13         2d21h




[root@master01 ~]# kubectl exec -it -n monitoring prometheus-k8s-0 -- sh
/prometheus $ ls /etc/prometheus/secrets/etcd-certs/
ca.pem        etcd-key.pem  etcd.pem

Create the etcd ServiceMonitor

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: etcd1 # label on this ServiceMonitor
  name: etcd
  namespace: monitoring
spec:
  endpoints:
  - interval: 30s
    port: etcd     # must match spec.ports[].name in the etcd Service
    scheme: https  # scrape over HTTPS
    tlsConfig:
      caFile: /etc/prometheus/secrets/etcd-certs/ca.pem # secrets are mounted under /etc/prometheus/secrets/<secret-name>/ by default
      certFile: /etc/prometheus/secrets/etcd-certs/etcd.pem
      keyFile: /etc/prometheus/secrets/etcd-certs/etcd-key.pem
  selector:
    matchLabels:
      k8s-app: etcd1
  namespaceSelector:
    matchNames:
    - kube-system  # namespace of the target Service
    


kubectl apply -f etcd-serviceMonitor.yaml
# etcd-serviceMonitor.yaml is the YAML manifest above

Final result:
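
The new target should now show up in Prometheus. A quick way to check without exposing Prometheus externally (a sketch; prometheus-k8s is the default Service name in kube-prometheus):

kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090
# Open http://127.0.0.1:9090/targets and look for the serviceMonitor/monitoring/etcd/0 scrape pool,
# or run a query such as etcd_server_has_leader to confirm metrics are being collected.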