Building Full-Stack Kubernetes Monitoring with the Elastic Stack

Below we describe how to use the Elastic Stack to build a monitoring environment for Kubernetes. The goal of observability is to give operations teams the tools to detect that a service has become unavailable in production (outages, errors, slow responses, and so on), and to retain enough information to help locate the problem. Broadly speaking this covers three areas:

  • Metrics provide time-series data for each component of the system, such as CPU, memory, disk, and network, and are typically used to show the overall state of the system and to detect abnormal behavior at a given point in time.
  • Logs give operators data for analyzing erroneous behavior of the system; logs from the system, services, and applications are usually collected into the same central database.
  • Tracing, or APM (Application Performance Monitoring), provides a more detailed view of the application by recording every request and step the service performs (HTTP calls, database queries, and so on); by tracing this data we can measure service performance and improve or fix our systems accordingly.

In this article we will use an Elastic Stack made up of ElasticSearch, Kibana, Filebeat, Metricbeat, and APM-Server to monitor the environment of a Kubernetes cluster. To get a better understanding of how these components are configured, we will install them from hand-written resource manifests; of course, you could also use a tool such as Helm to install and configure them quickly.

Let's learn how to build a Kubernetes monitoring stack with the Elastic Stack. The test environment is a Kubernetes v1.16.3 cluster (already set up). For easier management, we deploy all the resource objects into a namespace named elastic:

$ kubectl create ns elastic
namespace/elastic created

1. Sample Application Built with Spring Boot and MongoDB

We start by deploying a sample application built with Spring Boot and MongoDB. First deploy a MongoDB instance; the corresponding resource manifest is shown below:

# mongo.yml
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: elastic
  labels:
    app: mongo
spec:
  ports:
  - port: 27017
    protocol: TCP
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: elastic
  name: mongo
  labels:
    app: mongo
spec:
  serviceName: "mongo"
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: rook-ceph-block  # use a StorageClass that supports RWO
      resources:
        requests:
          storage: 1Gi

Here we use a StorageClass object named rook-ceph-block to provision the PV automatically; replace it with any StorageClass in your own cluster that supports RWO. The storage here is backed by a rook-ceph setup. Create the resources from the manifest above:

$ kubectl apply -f mongo.yml
service/mongo created
statefulset.apps/mongo created
$ kubectl get pods -n elastic -l app=mongo             
NAME      READY   STATUS    RESTARTS   AGE
mongo-0   1/1     Running   0          34m

Once the Pod reaches the Running state, MongoDB has been deployed successfully. Next, deploy the Spring Boot API application, which we expose through a NodePort type Service; the corresponding resource manifest is shown below:

# spring-boot-simple.yml
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: spring-boot-simple
  labels:
    app: spring-boot-simple
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: spring-boot-simple
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: spring-boot-simple
  labels:
    app: spring-boot-simple
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-boot-simple
  template:
    metadata:
      labels:
        app: spring-boot-simple
    spec:
      containers:
      - image: cnych/spring-boot-simple:0.0.1-SNAPSHOT
        name: spring-boot-simple
        env:
        - name: SPRING_DATA_MONGODB_HOST  # MongoDB host address
          value: mongo
        ports:
        - containerPort: 8080

Create the application from the manifest above:

$ kubectl apply -f spring-boot-simple.yml
service/spring-boot-simple created
deployment.apps/spring-boot-simple created
$ kubectl get pods -n elastic -l app=spring-boot-simple
NAME                                  READY   STATUS    RESTARTS   AGE
spring-boot-simple-64795494bf-hqpcj   1/1     Running   0          24m
$ kubectl get svc -n elastic -l app=spring-boot-simple
NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
spring-boot-simple   NodePort   10.109.55.134   <none>        8080:31847/TCP   84s

Once the application is deployed, we can access it at http://<node-ip>:31847 and run a quick test with the following commands:

$ curl -X GET  http://k8s.qikqiak.com:31847/
Greetings from Spring Boot!

Send a POST request:

$ curl -X POST http://k8s.qikqiak.com:31847/message -d 'hello world'
{"id":"5ef55c130d53190001bf74d2","message":"hello+world=","postedAt":"2020-06-26T02:23:15.860+0000"}

Get all the messages:

$ curl -X GET http://k8s.qikqiak.com:31847/message
[{"id":"5ef55c130d53190001bf74d2","message":"hello+world=","postedAt":"2020-06-26T02:23:15.860+0000"}]

2. ElasticSearch Cluster

To build an Elastic monitoring stack, we first need to deploy ElasticSearch, which is the database that stores all the metrics, logs, and traces. Here the cluster is made up of scalable nodes in three different roles.

2.1 Install the ElasticSearch Master Node

The first node of the cluster is the master node, which controls the whole cluster. First create a ConfigMap object describing the cluster configuration; it configures the ElasticSearch master node for the cluster and enables the security features. The corresponding resource manifest is shown below:

# elasticsearch-master.configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-master-config
  labels:
    app: elasticsearch
    role: master
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    node:
      master: true
      data: false
      ingest: false

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
---

Then create a Service object. For the master node we only need port 9300, which is used for communication within the cluster. The resource manifest is shown below:

# elasticsearch-master.service.yaml
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: master
---

Finally, define the master node with a Deployment object; the resource manifest is shown below:

# elasticsearch-master.deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
      role: master
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      containers:
      - name: elasticsearch-master
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-master
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms512m -Xmx512m"
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: storage
          mountPath: /data
      volumes:
      - name: config
        configMap:
          name: elasticsearch-master-config
      - name: "storage"
        emptyDir:
          medium: ""
---

Create the three resource objects above:

$ kubectl apply  -f elasticsearch-master.configmap.yaml \
                 -f elasticsearch-master.service.yaml \
                 -f elasticsearch-master.deployment.yaml

configmap/elasticsearch-master-config created
service/elasticsearch-master created
deployment.apps/elasticsearch-master created
$ kubectl get pods -n elastic -l app=elasticsearch
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-master-6f666cbbd-r9vtx    1/1     Running   0          111m

Once the Pod reaches the Running state, the master node has been installed successfully.

2.2 Install the ElasticSearch Data Node

Now we install the cluster's data node, which hosts the data and executes the queries. As with the master node, we configure the data node with a ConfigMap object:

# elasticsearch-data.configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-data-config
  labels:
    app: elasticsearch
    role: data
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    node:
      master: false
      data: true
      ingest: false

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
---

As you can see, this is very similar to the master configuration above; note, however, that the node.data property is set to true.

Again, only port 9300 is needed for communication with the other nodes:

# elasticsearch-data.service.yaml
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: data
---

Finally, create a StatefulSet controller. Since there may be several data nodes, and each of them stores different data, every node needs its own storage, so we use volumeClaimTemplates to create a separate volume for each one. The corresponding resource manifest is shown below:

# elasticsearch-data.statefulset.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: elastic
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: "elasticsearch-data"
  selector:
    matchLabels:
      app: elasticsearch
      role: data
  template:
    metadata:
      labels:
        app: elasticsearch
        role: data
    spec:
      containers:
      - name: elasticsearch-data
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-data
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms1024m -Xmx1024m"
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: elasticsearch-data-persistent-storage
          mountPath: /data/db
      volumes:
      - name: config
        configMap:
          name: elasticsearch-data-config
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: rook-ceph-block
      resources:
        requests:
          storage: 50Gi
---

Create the resource objects above:

$ kubectl apply -f elasticsearch-data.configmap.yaml \
                -f elasticsearch-data.service.yaml \
                -f elasticsearch-data.statefulset.yaml

configmap/elasticsearch-data-config created
service/elasticsearch-data created
statefulset.apps/elasticsearch-data created

Once the Pod reaches the Running state, the node has started successfully:

$ kubectl get pods -n elastic -l app=elasticsearch
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-data-0                    1/1     Running   0          90m
elasticsearch-master-6f666cbbd-r9vtx    1/1     Running   0          111m

2.3 Install the ElasticSearch Client Node

Finally, install and configure the ElasticSearch client node, which exposes an HTTP interface and passes queries on to the data nodes to fetch the data.

Again, a ConfigMap object configures the node:

# elasticsearch-client.configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-client-config
  labels:
    app: elasticsearch
    role: client
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    node:
      master: false
      data: false
      ingest: true

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
---

The client node needs to expose two ports: 9300 for communication with the other nodes of the cluster, and 9200 for the HTTP API. The corresponding Service object is shown below:

# elasticsearch-client.service.yaml
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  ports:
  - port: 9200
    name: client
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: client
---

A Deployment object describes the client node:

# elasticsearch-client.deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  selector:
    matchLabels:
      app: elasticsearch
      role: client
  template:
    metadata:
      labels:
        app: elasticsearch
        role: client
    spec:
      containers:
      - name: elasticsearch-client
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-client
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms256m -Xmx256m"
        ports:
        - containerPort: 9200
          name: client
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: storage
          mountPath: /data
      volumes:
      - name: config
        configMap:
          name: elasticsearch-client-config
      - name: "storage"
        emptyDir:
          medium: ""
---

Again, create the resource objects above to deploy the client node:

$ kubectl apply  -f elasticsearch-client.configmap.yaml \
                 -f elasticsearch-client.service.yaml \
                 -f elasticsearch-client.deployment.yaml

configmap/elasticsearch-client-config created
service/elasticsearch-client created
deployment.apps/elasticsearch-client created

Once all the nodes are deployed, the cluster has been installed successfully:

$ kubectl get pods -n elastic -l app=elasticsearch
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-client-788bffcc98-hh2s8   1/1     Running   0          83m
elasticsearch-data-0                    1/1     Running   0          91m
elasticsearch-master-6f666cbbd-r9vtx    1/1     Running   0          112m

You can watch the cluster status change with the following command:

$ kubectl logs -f -n elastic \
  $(kubectl get pods -n elastic | grep elasticsearch-master | sed -n 1p | awk '{print $1}') \
  | grep "Cluster health status changed from"

{"type": "server", "timestamp": "2020-06-26T03:31:21,353Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "message": "Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.monitoring-es-7-2020.06.26][0]]]).", "cluster.uuid": "SS_nyhNiTDSCE6gG7z-J4w", "node.id": "BdVScO9oQByBHR5rfw-KDA"  }

2.4 Generate Passwords

We enabled the xpack security module to protect our cluster, so we need initial passwords. We can run the bin/elasticsearch-setup-passwords command inside the client node container to generate the default users and passwords:

$ kubectl exec $(kubectl get pods -n elastic | grep elasticsearch-client | sed -n 1p | awk '{print $1}') \
    -n elastic \
    -- bin/elasticsearch-setup-passwords auto -b

Changed password for user apm_system
PASSWORD apm_system = 3Lhx61s6woNLvoL5Bb7t

Changed password for user kibana_system
PASSWORD kibana_system = NpZv9Cvhq4roFCMzpja3

Changed password for user kibana
PASSWORD kibana = NpZv9Cvhq4roFCMzpja3

Changed password for user logstash_system
PASSWORD logstash_system = nNnGnwxu08xxbsiRGk2C

Changed password for user beats_system
PASSWORD beats_system = fen759y5qxyeJmqj6UPp

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = mCP77zjCATGmbcTFFgOX

Changed password for user elastic
PASSWORD elastic = wmxhvsJFeti2dSjbQEAH

Note that the elastic username and password also need to be added to a Kubernetes Secret object (it will be referenced later):

$ kubectl create secret generic elasticsearch-pw-elastic \
    -n elastic \
    --from-literal password=wmxhvsJFeti2dSjbQEAH
secret/elasticsearch-pw-elastic created
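To double-check what was stored, you can read the password back from the Secret; the jsonpath/base64 pipeline below is standard kubectl usage:

$ kubectl get secret elasticsearch-pw-elastic -n elastic \
    -o jsonpath='{.data.password}' | base64 -d
wmxhvsJFeti2dSjbQEAH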

3. Kibana

With the ElasticSearch cluster installed, we can now deploy Kibana, ElasticSearch's data visualization tool, which provides all kinds of features for managing the ElasticSearch cluster and visualizing its data.

As before, we first use a ConfigMap object to provide a configuration file containing the ElasticSearch access settings (host, username, and password), all supplied through environment variables. The corresponding resource manifest is shown below:

# kibana.configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |-
    server.host: 0.0.0.0

    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USER}
      password: ${ELASTICSEARCH_PASSWORD}
---

Then expose the Kibana service through a NodePort type Service:

# kibana.service.yaml
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: kibana
  labels:
    app: kibana
spec:
  type: NodePort
  ports:
  - port: 5601
    name: webinterface
  selector:
    app: kibana
---

Finally, deploy Kibana with a Deployment. Since the password has to be provided through an environment variable, we reference the Secret object created above:

# kibana.deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: kibana
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.8.0
        ports:
        - containerPort: 5601
          name: webinterface
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch-client.elastic.svc.cluster.local:9200"
        - name: ELASTICSEARCH_USER
          value: "elastic"
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:  # reference the Secret created earlier; the password is exposed as an environment variable
              name: elasticsearch-pw-elastic
              key: password
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana-config
---

Again, simply create the manifests above to deploy it:

$ kubectl apply  -f kibana.configmap.yaml \
                 -f kibana.service.yaml \
                 -f kibana.deployment.yaml

configmap/kibana-config created
service/kibana created
deployment.apps/kibana created

Once deployed, you can check Kibana's status from the Pod logs:

$ kubectl logs -f -n elastic $(kubectl get pods -n elastic | grep kibana | sed -n 1p | awk '{print $1}') \
     | grep "Status changed from yellow to green"

{"type":"log","@timestamp":"2020-06-26T04:20:38Z","tags":["status","plugin:[email protected]","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

When the status turns green, we can open the Kibana service in a browser through NodePort port 30474:

$ kubectl get svc kibana -n elastic   
NAME     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kibana   NodePort   10.101.121.31   <none>        5601:30474/TCP   8m18s

As shown below, log in with the elastic user and the generated password stored in the Secret object above:

After a successful login you are redirected to the Kibana home page:

You can also create a new superuser of your own under Management → Stack Management → Create User:

Enter a new username and password and assign the superuser role to create the new user:

Once created, you can log in to Kibana with this new user. Finally, the Management → Stack Monitoring page shows the health of the whole cluster:

At this point ElasticSearch and Kibana are installed; they will store and visualize our application data (metrics, logs, and traces).

With the ElasticSearch cluster installed and configured, we will now monitor the Kubernetes cluster with Metricbeat. Metricbeat is a lightweight agent that runs on each server and periodically collects monitoring metrics from hosts and services. This is the first building block of our full-stack Kubernetes monitoring.

Metricbeat collects system metrics by default, but it also ships with a large number of modules for collecting metrics from services such as Nginx, Kafka, MySQL, Redis, and many more; the full list of supported modules is available on the Elastic website at https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-modules.html.
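As a quick way to see which modules a given Metricbeat build knows about, you can run the metricbeat modules subcommand inside one of the Metricbeat Pods deployed in section 5 below (the Pod name is a placeholder; note that in this setup the modules are enabled directly in metricbeat.yml rather than through modules.d):

$ kubectl exec -n elastic <metricbeat-pod-name> -- metricbeat modules list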

4. kube-state-metrics

First we need to install kube-state-metrics, a component that listens to the Kubernetes API and exposes metrics about the state of each resource object.

Installing kube-state-metrics is straightforward; the corresponding GitHub repository contains the installation manifests:

$ git clone https://github.com/kubernetes/kube-state-metrics.git
$ cd kube-state-metrics
# run the installation
$ kubectl apply -f examples/standard/  
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics configured
clusterrole.rbac.authorization.k8s.io/kube-state-metrics configured
deployment.apps/kube-state-metrics configured
serviceaccount/kube-state-metrics configured
service/kube-state-metrics configured
$ kubectl get pods -n kube-system -l app.kubernetes.io/name=kube-state-metrics
NAME                                  READY   STATUS    RESTARTS   AGE
kube-state-metrics-6d7449fc78-mgf4f   1/1     Running   0          88s

Once the Pod reaches the Running state, the installation was successful.
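To verify that kube-state-metrics is actually serving data, you can port-forward its Service and fetch a few samples. This is just a sanity check and assumes the standard manifests above, which expose port 8080 in the kube-system namespace:

$ kubectl port-forward -n kube-system svc/kube-state-metrics 8080:8080 &
$ curl -s http://localhost:8080/metrics | head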

5. Metricbeat

Since we need to monitor every node, we deploy Metricbeat with a DaemonSet controller.

First, a ConfigMap configures Metricbeat and is mounted into the container as /etc/metricbeat.yml through a Volume. The configuration file contains the ElasticSearch address, username, and password, the Kibana settings, the modules we want to enable, and the scrape period.

# metricbeat.settings.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: metricbeat-config
  labels:
    app: metricbeat
data:
  metricbeat.yml: |-

    # module configuration
    metricbeat.modules:
    - module: system
      period: ${PERIOD}  # interval between metric collections
      metricsets: ["cpu", "load", "memory", "network", "process", "process_summary", "core", "diskio", "socket"]
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # top 5 processes by CPU
        by_memory: 5   # top 5 processes by memory

    - module: system
      period: ${PERIOD}
      metricsets:  ["filesystem", "fsstat"]
      processors:
      - drop_event.when.regexp:  # exclude some system mount points from monitoring
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'

    - module: docker             # collect Docker metrics (containerd is not supported)
      period: ${PERIOD}
      hosts: ["unix:///var/run/docker.sock"]
      metricsets: ["container", "cpu", "diskio", "healthcheck", "info", "memory", "network"]

    - module: kubernetes  # collect kubelet metrics
      period: ${PERIOD}
      node: ${NODE_NAME}
      hosts: ["https://${NODE_NAME}:10250"]    # 連線kubelet的監控埠,如果需要監控api-server/controller-manager等其他元件的監控,也需要連線埠
      metricsets: ["node", "system", "pod", "container", "volume"]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.verification_mode: "none"
     
    - module: kubernetes  # collect kube-state-metrics data
      period: ${PERIOD}
      node: ${NODE_NAME}
      metricsets: ["state_node", "state_deployment", "state_replicaset", "state_pod", "state_container"]
      hosts: ["kube-state-metrics.kube-system.svc.cluster.local:8080"]

    # use autodiscover to configure the mongodb module for the matching k8s workload
    metricbeat.autodiscover:
      providers:
      - type: kubernetes
        node: ${NODE_NAME}
        templates:
        - condition.equals:
            kubernetes.labels.app: mongo
          config:
          - module: mongodb
            period: ${PERIOD}
            hosts: ["mongo.elastic:27017"]
            metricsets: ["dbstats", "status", "collstats", "metrics", "replstatus"]

    # ElasticSearch output configuration
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    # connect to Kibana
    setup.kibana:
      host: '${KIBANA_HOST:kibana}:${KIBANA_PORT:5601}'

    # import the existing dashboards
    setup.dashboards.enabled: true

    # configure the indice lifecycle
    setup.ilm:
      policy_file: /etc/indice-lifecycle.json
---

An ElasticSearch indice lifecycle is a set of rules applied to your indices based on their size or age. For example, an indice can be rolled over every day or whenever it exceeds 1GB, and different phases can be configured through these rules. Since monitoring produces a large amount of data, possibly tens of gigabytes per day, we can use the indice lifecycle to limit how long data is retained and avoid storing too much of it (Prometheus offers something similar). In the file below we configure the indice to roll over every day or whenever it exceeds 5GB, and to delete all indice files older than 10 days; keeping 10 days of monitoring data is more than enough for our purposes.

# metricbeat.indice-lifecycle.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: metricbeat-indice-lifecycle
  labels:
    app: metricbeat
data:
  indice-lifecycle.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": {
                "max_size": "5GB" ,
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "10d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
---

Next, write the DaemonSet manifest for Metricbeat, as shown below:

# metricbeat.daemonset.yml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: elastic
  name: metricbeat
  labels:
    app: metricbeat
spec:
  selector:
    matchLabels:
      app: metricbeat
  template:
    metadata:
      labels:
        app: metricbeat
    spec:
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.8.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e", "-system.hostfs=/hostfs"
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-client.elastic.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:    # reference the Secret created earlier
              name: elasticsearch-pw-elastic
              key: password
        - name: KIBANA_HOST
          value: kibana.elastic.svc.cluster.local
        - name: KIBANA_PORT
          value: "5601"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: PERIOD
          value: "10s"
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: indice-lifecycle
          mountPath: /etc/indice-lifecycle.json
          readOnly: true
          subPath: indice-lifecycle.json
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-config
      - name: indice-lifecycle
        configMap:
          defaultMode: 0600
          name: metricbeat-indice-lifecycle
      - name: data
        hostPath:
          path: /var/lib/metricbeat-data
          type: DirectoryOrCreate
---

Note how the two ConfigMaps above are mounted into the container. Since Metricbeat needs information about the host, we also mount several host paths into the container, such as the proc directory, the cgroup directory, and the docker.sock file.

Because Metricbeat needs to read Kubernetes resource objects, it also needs the corresponding RBAC permissions; since these are cluster-wide, we declare them with a ClusterRole:

# metricbeat.permissions.yml
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: metricbeat
  namespace: elastic
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    app: metricbeat
rules:
- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - events
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  - deployments
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: elastic
  name: metricbeat
  labels:
    app: metricbeat
---

Create these resource objects:

$ kubectl apply  -f metricbeat.settings.configmap.yml \
                 -f metricbeat.indice-lifecycle.configmap.yml \
                 -f metricbeat.daemonset.yml \
                 -f metricbeat.permissions.yml

configmap/metricbeat-config configured
configmap/metricbeat-indice-lifecycle configured
daemonset.extensions/metricbeat created
clusterrolebinding.rbac.authorization.k8s.io/metricbeat created
clusterrole.rbac.authorization.k8s.io/metricbeat created
serviceaccount/metricbeat created
$ kubectl get pods -n elastic -l app=metricbeat   
NAME               READY   STATUS    RESTARTS   AGE
metricbeat-2gstq   1/1     Running   0          18m
metricbeat-99rdb   1/1     Running   0          18m
metricbeat-9bb27   1/1     Running   0          18m
metricbeat-cgbrg   1/1     Running   0          18m
metricbeat-l2csd   1/1     Running   0          18m
metricbeat-lsrgv   1/1     Running   0          18m

Once the Metricbeat Pods reach the Running state, we should be able to see the corresponding monitoring data in Kibana.
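You can also confirm on the ElasticSearch side that Metricbeat is indexing data, for example by listing its indices; this assumes the same port-forward and elastic credentials used earlier:

$ kubectl port-forward -n elastic svc/elasticsearch-client 9200:9200 &
$ curl -s -u elastic:<password> 'http://localhost:9200/_cat/indices/metricbeat-*?v'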

In Kibana, open Observability → Metrics in the left-hand menu to reach the metrics page; you should see some monitoring data:

You can also filter to your needs, for example grouping the view by Kubernetes Namespace:

Since we set setup.dashboards.enabled=true in the configuration file, Kibana imports a set of pre-built dashboards. Go to Kibana → Dashboard in the left-hand menu and you will see a list of roughly 50 Metricbeat dashboards, which you can filter as needed. For example, to inspect the cluster nodes, open the [Metricbeat Kubernetes] Overview ECS dashboard:

We also enabled the mongodb module, so the [Metricbeat MongoDB] Overview ECS dashboard shows its monitoring data:

We enabled the docker module as well, so the [Metricbeat Docker] Overview ECS dashboard shows the Docker monitoring data:

That completes monitoring the Kubernetes cluster with Metricbeat; below we will learn how to collect logs with Filebeat to monitor the Kubernetes cluster.

6. Filebeat

We will now install and configure Filebeat to collect log data from the Kubernetes cluster and send it to ElasticSearch. Filebeat is a lightweight log shipping agent that can also be configured with specific modules to parse and visualize the log formats of applications such as databases, Nginx, and so on.

Like Metricbeat, Filebeat needs a configuration file that defines the connection to ElasticSearch, the connection to Kibana, and the way logs are collected and parsed.

The ConfigMap object below contains the log collection configuration we use here (the complete set of configuration options is available on the official website):

# filebeat.settings.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: filebeat-config
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      enabled: true
      paths:
      - /var/log/containers/*.log
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
          host: ${NODE_NAME}
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"
    
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition.equals:
                kubernetes.labels.app: mongo
              config:
                - module: mongodb
                  enabled: true
                  log:
                    input:
                      type: docker
                      containers.ids:
                        - ${data.kubernetes.container.id}

    processors:
      - drop_event:
          when.or:
              - and:
                  - regexp:
                      message: '^\d+\.\d+\.\d+\.\d+ '
                  - equals:
                      fileset.name: error
              - and:
                  - not:
                      regexp:
                          message: '^\d+\.\d+\.\d+\.\d+ '
                  - equals:
                      fileset.name: access
      - add_cloud_metadata:
      - add_kubernetes_metadata:
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"
      - add_docker_metadata:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    setup.kibana:
      host: '${KIBANA_HOST:kibana}:${KIBANA_PORT:5601}'

    setup.dashboards.enabled: true
    setup.template.enabled: true

    setup.ilm:
      policy_file: /etc/indice-lifecycle.json
---

We collect all the logs under /var/log/containers/, use the inCluster mode to query the Kubernetes APIServer for the logs' metadata, and send the logs straight to Elasticsearch.

In addition, policy_file defines the indice retention policy:

# filebeat.indice-lifecycle.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: filebeat-indice-lifecycle
  labels:
    app: filebeat
data:
  indice-lifecycle.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": {
                "max_size": "5GB" ,
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "30d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
---

To collect the log data on every node, we again use a DaemonSet controller with the configuration above.

# filebeat.daemonset.yml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: elastic
  name: filebeat
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.8.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-client.elastic.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        - name: KIBANA_HOST
          value: kibana.elastic.svc.cluster.local
        - name: KIBANA_PORT
          value: "5601"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: filebeat-indice-lifecycle
          mountPath: /etc/indice-lifecycle.json
          readOnly: true
          subPath: indice-lifecycle.json
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: filebeat-indice-lifecycle
        configMap:
          defaultMode: 0600
          name: filebeat-indice-lifecycle
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---

Because Filebeat needs the Kubernetes metadata of each log entry, such as the Pod name and its namespace, it has to access the APIServer and therefore needs the corresponding RBAC permissions, which are declared in the filebeat.permission.yml manifest below. Also note that this cluster was built with kubeadm, so the master nodes are tainted by default; if you also want to collect logs from the master nodes you must add a matching toleration to the DaemonSet. I am not collecting them here, so no toleration is added, but a minimal sketch of what it would look like follows:
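The snippet below shows the toleration you could add under the DaemonSet's Pod spec (the taint key shown is the default for kubeadm clusters of this vintage; check the actual key on your nodes with kubectl describe node first):

      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule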

# filebeat.permission.yml
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: elastic
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: elastic
  name: filebeat
  labels:
    app: filebeat
---

Then install the resource objects above:

$ kubectl apply  -f filebeat.settings.configmap.yml \
                 -f filebeat.indice-lifecycle.configmap.yml \
                 -f filebeat.daemonset.yml \
                 -f filebeat.permissions.yml 

configmap/filebeat-config created
configmap/filebeat-indice-lifecycle created
daemonset.apps/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created

Once all the Filebeat Pods are in the Running state, the deployment is complete and we can go to the Kibana UI to view the logs: left-hand menu Observability → Logs.

You can also get to a Pod's logs from the Metrics page mentioned in the previous section:

Click Kubernetes Pod logs to open the logs of the Pod you want to inspect:

If the volume of logs to collect in the cluster is very large, sending the data directly to ElasticSearch puts a lot of pressure on ES. In that case, a common approach is to add a buffering middleware such as Kafka in between, or to have Logstash collect the logs from Filebeat.
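As a sketch of the first option, Filebeat can write to Kafka instead of ElasticSearch by swapping the output section of filebeat.yml for something like the following (the broker addresses and topic name are placeholders for your own environment; only one output may be enabled at a time, so output.elasticsearch would be removed):

    output.kafka:
      hosts: ["kafka-0.kafka:9092", "kafka-1.kafka:9092"]
      topic: "filebeat-k8s-logs"
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000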

That completes collecting Kubernetes cluster logs with Filebeat; in the next section we will learn how to trace applications in the Kubernetes cluster with Elastic APM.

7. Elastic APM

Elastic APM is the Elastic Stack's application performance monitoring tool. It lets us monitor application performance in real time by collecting incoming requests, database queries, cache calls, and more, which makes it much easier and faster to pinpoint performance problems.

Elastic APM is OpenTracing compliant, so we can use a large number of existing libraries to trace application performance.

For example, in a distributed environment (a microservice architecture) we can trace a request across services and easily find potential performance bottlenecks.

Elastic APM is served by a component called APM-Server, which receives trace data from the agents running alongside our applications and forwards it to ElasticSearch.

Install APM-Server

First we need to install APM-Server in the Kubernetes cluster to collect trace data from the agents and forward it to ElasticSearch. Again, we use a ConfigMap for the configuration:

# apm.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: apm-server-config
  labels:
    app: apm-server
data:
  apm-server.yml: |-
    apm-server:
      host: "0.0.0.0:8200"

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    setup.kibana:
      host: '${KIBANA_HOST:kibana}:${KIBANA_PORT:5601}'
---

APM-Server needs to expose port 8200 so that the agents can forward their trace data to it; just create a matching Service object:

# apm.service.yml
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: apm-server
  labels:
    app: apm-server
spec:
  ports:
  - port: 8200
    name: apm-server
  selector:
    app: apm-server
---

Then manage it with a Deployment resource object:

# apm.deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: apm-server
  labels:
    app: apm-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apm-server
  template:
    metadata:
      labels:
        app: apm-server
    spec:
      containers:
      - name: apm-server
        image: docker.elastic.co/apm/apm-server:7.8.0
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-client.elastic.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        - name: KIBANA_HOST
          value: kibana.elastic.svc.cluster.local
        - name: KIBANA_PORT
          value: "5601"
        ports:
        - containerPort: 8200
          name: apm-server
        volumeMounts:
        - name: config
          mountPath: /usr/share/apm-server/apm-server.yml
          readOnly: true
          subPath: apm-server.yml
      volumes:
      - name: config
        configMap:
          name: apm-server-config
---

Deploy the resource objects above:

$ kubectl apply  -f apm.configmap.yml \
                 -f apm.service.yml \
                 -f apm.deployment.yml

configmap/apm-server-config created
service/apm-server created
deployment.extensions/apm-server created

Once the Pod is in the Running state, it is up and running:

$ kubectl get pods -n elastic -l app=apm-server
NAME                          READY   STATUS    RESTARTS   AGE
apm-server-667bfc5cff-zj8nq   1/1     Running   0          12m

Next we can install an agent in the Spring Boot application deployed in section 1.

Configure the Java Agent

Next we configure an Elastic APM Java agent for the sample spring-boot-simple application. First, the elastic-apm-agent-1.8.0.jar package has to be baked into the application container; adding a line like the following to the Dockerfile used to build the image downloads the JAR:

RUN wget -O /apm-agent.jar https://search.maven.org/remotecontent?filepath=co/elastic/apm/elastic-apm-agent/1.8.0/elastic-apm-agent-1.8.0.jar

The complete Dockerfile looks like this:

FROM openjdk:8-jdk-alpine

ENV ELASTIC_APM_VERSION "1.8.0"
RUN wget -O /apm-agent.jar https://search.maven.org/remotecontent?filepath=co/elastic/apm/elastic-apm-agent/$ELASTIC_APM_VERSION/elastic-apm-agent-$ELASTIC_APM_VERSION.jar

COPY target/spring-boot-simple.jar /app.jar

CMD java -jar /app.jar

Then add the following dependencies to the sample application so that we can integrate the open-tracing libraries or instrument manually with the Elastic APM API.

<dependency>
    <groupId>co.elastic.apm</groupId>
    <artifactId>apm-agent-api</artifactId>
    <version>${elastic-apm.version}</version>
</dependency>
<dependency>
    <groupId>co.elastic.apm</groupId>
    <artifactId>apm-opentracing</artifactId>
    <version>${elastic-apm.version}</version>
</dependency>
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-spring-cloud-mongo-starter</artifactId>
    <version>${opentracing-spring-cloud.version}</version>
</dependency>

Then modify the Spring Boot application deployed with a Deployment in section 1 so that the Java agent is enabled and connects to APM-Server.

# spring-boot-simple.deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: spring-boot-simple
  labels:
    app: spring-boot-simple
spec:
  selector:
    matchLabels:
      app: spring-boot-simple
  template:
    metadata:
      labels:
        app: spring-boot-simple
    spec:
      containers:
      - image: cnych/spring-boot-simple:0.0.1-SNAPSHOT
        imagePullPolicy: Always
        name: spring-boot-simple
        command:
          - "java"
          - "-javaagent:/apm-agent.jar"
          - "-Delastic.apm.active=$(ELASTIC_APM_ACTIVE)"
          - "-Delastic.apm.server_urls=$(ELASTIC_APM_SERVER)"
          - "-Delastic.apm.service_name=spring-boot-simple"
          - "-jar"
          - "app.jar"
        env:
          - name: SPRING_DATA_MONGODB_HOST
            value: mongo
          - name: ELASTIC_APM_ACTIVE
            value: "true"
          - name: ELASTIC_APM_SERVER
            value: http://apm-server.elastic.svc.cluster.local:8200
        ports:
        - containerPort: 8080
---

Then redeploy the sample application:

$ kubectl apply -f spring-boot-simple.deployment.yml
$ kubectl get pods -n elastic -l app=spring-boot-simple
NAME                                 READY   STATUS    RESTARTS   AGE
spring-boot-simple-fb5564885-tf68d   1/1     Running   0          5m11s
$ kubectl get svc -n elastic -l app=spring-boot-simple
NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
spring-boot-simple   NodePort   10.109.55.134   <none>        8080:31847/TCP   9d

Once the sample application has been redeployed, issue the following requests:

get messages

Get all of the posted messages:

$ curl -X GET http://k8s.qikqiak.com:31847/message

get messages (slow request)

Use sleep=<ms> to simulate a slow request:

$ curl -X GET http://k8s.qikqiak.com:31847/message?sleep=3000

get messages (error)

Use error=true to trigger an exception:

$ curl -X GET http://k8s.qikqiak.com:31847/message?error=true

Now go to the APM page in Kibana and you should see the data for the spring-boot-simple application.

Click the application to see its performance trace data:

You can view the current errors:

You can also view the JVM monitoring data:

In addition, we can set up alerts so that we are aware of the application's performance status as soon as something happens.

Summary

At this point we have built full-stack monitoring of a Kubernetes environment with the Elastic Stack. With metrics, logs, and performance traces we can understand how our applications behave in every respect, and troubleshoot and resolve problems faster.

Troubleshooting

  1. When Kibana reads the password from the Secret, the password environment variable inside the Kibana container turns out to be a garbled value; so far this has only been observed when the variable is injected into the Kibana container.

    Workaround: set the password directly as the value of the container environment variable instead of referencing the Secret.
    
  2. When ES creates indices automatically from the index template, too many default tags/fields are generated; you can reduce what gets created by modifying the index template.
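Before trimming anything, you can inspect the template that Metricbeat loads into ElasticSearch; the template name follows the <beat>-<version> pattern, and the port-forward and credentials are the same as earlier:

$ kubectl port-forward -n elastic svc/elasticsearch-client 9200:9200 &
$ curl -s -u elastic:<password> 'http://localhost:9200/_template/metricbeat-7.8.0?pretty' | head -n 40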