|NO.Z.00297|——————————|CloudNative|——|KuberNetes & Ops.V18|-----------------------------------------------------------|Monitoring.v04|Deploying kube-prometheus|
Axin • Published: 2022-03-31
[CloudNative:KuberNetes & Ops.V18] [Applications.KuberNetes] [|**3-node.V1**|Deploy helm and ingress|Prometheus installation primer|Deploy operator/alert/grafana|]
1. Installing kube-prometheus
### --- Download the latest kube-prometheus release
~~~ # kube-prometheus download URL:
~~~ https://github.com/coreos/kube-prometheus.git
~~~ On the GitHub page, at the far left: main ——> Switch branches/tags ——> Branches: release-0.5
### --- Download the installation files
[root@k8s-master01 prometheus]# git clone -b release-0.5 --single-branch https://github.com/coreos/kube-prometheus.git
Cloning into 'kube-prometheus'...
remote: Enumerating objects: 8051, done.
remote: Counting objects: 100% (2/2), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 8051 (delta 0), reused 1 (delta 0), pack-reused 8049
Receiving objects: 100% (8051/8051), 4.54 MiB | 27.00 KiB/s, done.
Resolving deltas: 100% (4876/4876), done.
### --- Inspect the downloaded files
[root@k8s-master01 prometheus]# cd kube-prometheus/
[root@k8s-master01 kube-prometheus]# ls
build.sh            DCO   example.jsonnet  experimental  go.sum  jsonnet           jsonnetfile.lock.json  LICENSE   manifests  OWNERS     scripts                            tests
code-of-conduct.md  docs  examples         go.mod        hack    jsonnetfile.json  kustomization.yaml     Makefile  NOTICE     README.md  sync-to-internal-registry.jsonnet  test.sh
### --- The manifests directory explained
~~~ Note: this is the working directory; it contains predefined manifest templates that can be used directly.
[root@k8s-master01 kube-prometheus]# ls manifests/
alertmanager-alertmanager.yaml // Deploys Alertmanager
node-exporter-daemonset.yaml // Defines node-exporter, which collects host-level metrics; these metrics are more detailed than what Zabbix collects
prometheus-prometheus.yaml // Deploys the Prometheus server
grafana-dashboardDatasources.yaml // Grafana ships with many dashboards, and they are stored in a ConfigMap. Without backend storage, adding a new dashboard template means mounting it into this ConfigMap, from which Grafana then reads it. With a host-based deployment you can simply upload a dashboard and it is saved on the host. This environment deploys Grafana on the host: Grafana only handles display, so an outage is not severe.
prometheus-rules.yaml // Defines Prometheus's basic alerting and recording rules
grafana-deployment.yaml // Defines Grafana; the bundled dashboards are injected via ConfigMaps. If persistent storage is available, mount it onto the pod's data directory. Otherwise, deploy Grafana on a host: a containerized Grafana without storage is inconvenient, because parameters and dashboards are changed and created frequently.
setup
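As a sketch of the ConfigMap-injection mechanism described above: an extra dashboard can be packaged as its own ConfigMap. Here `my-dashboard.json` and the ConfigMap name `grafana-dashboard-custom` are illustrative placeholders, not files from the repository.

```shell
# Hypothetical: wrap a dashboard JSON exported from the Grafana UI in a
# ConfigMap in the monitoring namespace.
kubectl -n monitoring create configmap grafana-dashboard-custom \
    --from-file=my-dashboard.json
# The ConfigMap must then be mounted into the Grafana deployment's
# dashboards directory (see grafana-deployment.yaml) before Grafana reads it.
```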
2. Installing the operator
### --- Enter the operator installation directory
[root@k8s-master01 setup]# pwd
/root/README/EFK/prometheus/kube-prometheus/manifests/setup
### --- Install the operator
~~~ Note: this creates a namespace called monitoring
[root@k8s-master01 setup]# kubectl create -f .
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
### --- Check the operator installation result
[root@k8s-master01 setup]# kubectl get po -n monitoring -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
prometheus-operator-848d669f6d-j5vjd 2/2 Running 0 64m 172.17.125.16 k8s-node01 <none> <none>
### --- Verify that the CRDs have been created
[root@k8s-master01 setup]# until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
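Instead of the polling loop above, kubectl can block until the CRD is registered. A sketch, using the same CRD name created by the setup/ manifests:

```shell
# Wait until the ServiceMonitor CRD reaches the Established condition,
# timing out after 60 seconds; equivalent in spirit to the until-loop.
kubectl wait --for=condition=Established --timeout=60s \
    crd/servicemonitors.monitoring.coreos.com
```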
3. Creating the Prometheus cluster
### --- Enter the installation directory
[root@k8s-master01 manifests]# pwd
/root/README/EFK/prometheus/kube-prometheus/manifests
### --- Modify the configuration files
[root@k8s-master01 manifests]# vim alertmanager-alertmanager.yaml
~~~ Note 1:
replicas: 1 // The default replica count is 3; in this environment 1 is enough, but production should run at least 3
~~~ Note 2:
nodeSelector:
kubernetes.io/hostname: k8s-node02 // Pin it to k8s-node02
[root@k8s-master01 manifests]# vim prometheus-prometheus.yaml // Change the Prometheus replica count
replicas: 1
[root@k8s-master01 manifests]# vim prometheus-adapter-deployment.yaml // Set the replica count to 1
replicas: 1
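The three edits above each change a single field, so the same change can be scripted with sed instead of vim. A sketch, demonstrated on a stand-in file so it runs anywhere; in the real tree the same sed would be applied to alertmanager-alertmanager.yaml, prometheus-prometheus.yaml and prometheus-adapter-deployment.yaml.

```shell
# Create a stand-in manifest with the default replica count.
cat > /tmp/demo-manifest.yaml <<'EOF'
spec:
  replicas: 3
EOF
# Rewrite "replicas: N" to "replicas: 1" in place, preserving indentation.
sed -i 's/^\( *replicas:\) *[0-9][0-9]*/\1 1/' /tmp/demo-manifest.yaml
grep 'replicas:' /tmp/demo-manifest.yaml
```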
### --- Create the Prometheus cluster
[root@k8s-master01 manifests]# kubectl create -f .
alertmanager.monitoring.coreos.com/main created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-operator created
prometheus.monitoring.coreos.com/k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
### --- Check the status of the created pods
[root@k8s-master01 manifests]# kubectl get po -n monitoring -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
alertmanager-main-0 2/2 Running 0 22m 172.25.244.211 k8s-master01 <none> <none>
grafana-5d9d5f67c4-68kxb 1/1 Running 0 22m 172.17.125.18 k8s-node01 <none> <none>
kube-state-metrics-7fddf8779f-g7959 3/3 Running 0 22m 172.25.244.212 k8s-master01 <none> <none>
node-exporter-db78b 2/2 Running 0 22m 192.168.1.15 k8s-node02 <none> <none>
node-exporter-rwdf8 2/2 Running 0 22m 192.168.1.14 k8s-node01 <none> <none>
node-exporter-sxf9d 2/2 Running 0 22m 192.168.1.11 k8s-master01 <none> <none>
prometheus-adapter-cb548cdbf-qnjgd 1/1 Running 0 22m 172.17.125.17 k8s-node01 <none> <none>
prometheus-k8s-0 3/3 Running 2 22m 172.27.14.209 k8s-node02 <none> <none>
prometheus-k8s-1 3/3 Running 2 22m 172.27.14.208 k8s-node02 <none> <none>
prometheus-operator-848d669f6d-j5vjd 2/2 Running 0 96m 172.17.125.16 k8s-node01 <none> <none>
### --- After creation, three services are exposed
~~~ Service 1: Alertmanager: used frequently, so give it a domain name; here you can see the current alerts, which ones need handling, and which ones to silence
~~~ Service 2: grafana: give it a domain name; it displays the data
~~~ Service 3: prometheus-k8s: give it a domain name; used to create rules, check whether rule syntax is correct, run queries, and inspect targets
[root@k8s-master01 manifests]# kubectl get svc -n monitoring -owide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
alertmanager-main ClusterIP 10.111.201.48 <none> 9093/TCP 24m alertmanager=main,app=alertmanager
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 24m app=alertmanager
grafana ClusterIP 10.98.164.98 <none> 3000/TCP 24m app=grafana
kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 24m app.kubernetes.io/name=kube-state-metrics
node-exporter ClusterIP None <none> 9100/TCP 24m app.kubernetes.io/name=node-exporter,app.kubernetes.io/version=v0.18.1
prometheus-adapter ClusterIP 10.98.176.139 <none> 443/TCP 24m name=prometheus-adapter
prometheus-k8s ClusterIP 10.110.112.47 <none> 9090/TCP 23m app=prometheus,prometheus=k8s
prometheus-operated ClusterIP None <none> 9090/TCP 23m app=prometheus
prometheus-operator ClusterIP None <none> 8443/TCP 99m app.kubernetes.io/component=controller,app.kubernetes.io/name=prometheus-operator
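Until the Ingress in the next section exists, any of the three services can be reached from a workstation with a port-forward. A sketch for Grafana; the same pattern works for prometheus-k8s:9090 and alertmanager-main:9093.

```shell
# Forward local port 3000 to the grafana Service in the monitoring
# namespace; Grafana is then reachable at http://127.0.0.1:3000
kubectl -n monitoring port-forward svc/grafana 3000:3000
```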
4. Creating the Prometheus Ingress
### --- Create the prometheus-ingress.yaml file
[root@k8s-master01 prometheus]# vim prometheus-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prom-ingresses
  namespace: monitoring
spec:
  rules:
  - host: alert.test.com
    http:
      paths:
      - backend:
          serviceName: alertmanager-main
          servicePort: 9093
        path: /
  - host: grafana.test.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
        path: /
  - host: prom.test.com
    http:
      paths:
      - backend:
          serviceName: prometheus-k8s
          servicePort: 9090
        path: /
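Note that the `extensions/v1beta1` Ingress API was removed in Kubernetes 1.22. On newer clusters the same rule would be written against `networking.k8s.io/v1`, where the backend fields are renamed and `pathType` is required; a sketch for the grafana host (the other two hosts follow the same pattern):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prom-ingresses
  namespace: monitoring
spec:
  rules:
  - host: grafana.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
```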
### --- Create the Ingress
[root@k8s-master01 prometheus]# kubectl create -f prometheus-ingress.yaml -n monitoring
ingress.extensions/prom-ingresses created
### --- Check the creation result
[root@k8s-master01 prometheus]# kubectl get ingress -n monitoring
NAME CLASS HOSTS ADDRESS PORTS AGE
prom-ingresses <none> alert.test.com,grafana.test.com,prom.test.com 10.107.150.111 80 44s
===============================END===============================
Walter Savage Landor: I strove with none, for none was worth my strife. Nature I loved and, next to Nature, Art; I warm'd both hands before the fire of life. It sinks, and I am ready to depart. ——W.S. Landor