Loki: Simple Installation, Configuration, and Usage
阿新 • Published: 2020-12-16
Grafana Loki is a set of components that can be composed into a fully featured logging stack. Unlike other logging systems, Loki is built around the idea of only indexing labels for logs and leaving the original log message unindexed. This means that Loki is cheaper to operate and can be orders of magnitude more efficient.
When logging systems come up, most people's first thought is the ELK stack, and often it is the only thing that comes to mind. Now, the arrival of Loki gives us another option. ELK is undeniably a powerful logging system: built on Elasticsearch, it offers rich and capable features, but it also brings heavy resource requirements and high maintenance cost (20 GB of logs can easily turn into 160 GB of inverted index). Small and medium-sized companies rarely demand that much from a logging system. Developers mostly just want the logs for their own project, in a form they can grep; operators want something that doesn't hog resources and is easy to maintain. Loki satisfies the developers' needs, and its simplicity and low cost make it acceptable to operations as well.
Official documentation: https://grafana.com/docs/loki/latest/overview/
Installing with Helm is the easiest approach, so it is not covered in detail here. Official link: https://grafana.com/docs/loki/latest/installation/helm/

```shell
helm repo add loki https://grafana.github.io/loki/charts
helm repo update
helm upgrade --install loki --namespace=kube-system loki/loki
helm upgrade --install promtail --namespace=kube-system loki/promtail
```
After searching quite a few sites, I couldn't find an installation that uses plain YAML manifests directly, so I put together a set myself.
loki:
Note the storage: change the StorageClass to your own, or use a local emptyDir instead.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: loki
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: loki
  namespace: kube-system
rules:
- apiGroups:
  - extensions
  resourceNames:
  - loki
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: loki
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: loki
subjects:
- kind: ServiceAccount
  name: loki
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki
  namespace: kube-system
data:
  loki.yaml: |
    auth_enabled: false
    chunk_store_config:
      max_look_back_period: 0s
    compactor:
      shared_store: filesystem
      working_directory: /data/loki/boltdb-shipper-compactor
    ingester:
      chunk_block_size: 262144
      chunk_idle_period: 3m
      chunk_retain_period: 1m
      lifecycler:
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
      max_transfer_retries: 0
    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
    schema_config:
      configs:
      - from: "2020-10-24"
        index:
          period: 24h
          prefix: index_
        object_store: filesystem
        schema: v11
        store: boltdb-shipper
    server:
      http_listen_port: 3100
    storage_config:
      boltdb_shipper:
        active_index_directory: /data/loki/boltdb-shipper-active
        cache_location: /data/loki/boltdb-shipper-cache
        cache_ttl: 24h
        shared_store: filesystem
      filesystem:
        directory: /data/loki/chunks
    table_manager:
      retention_deletes_enabled: false
      retention_period: 0s
---
apiVersion: v1
kind: Service
metadata:
  name: loki
  namespace: kube-system
  labels:
    app: loki
    release: loki
spec:
  ports:
  - name: http-metrics
    port: 3100
    protocol: TCP
    targetPort: http-metrics
  selector:
    app: loki
    release: loki
---
apiVersion: v1
kind: Service
metadata:
  name: loki-headless
  namespace: kube-system
  labels:
    app: loki
    release: loki
spec:
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
  - name: http-metrics
    port: 3100
    protocol: TCP
    targetPort: http-metrics
  selector:
    app: loki
    release: loki
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: loki
  namespace: kube-system
  labels:
    app: loki
    release: loki
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: loki
      release: loki
  serviceName: loki-headless
  template:
    metadata:
      labels:
        app: loki
        release: loki
    spec:
      containers:
      - args:
        - -config.file=/etc/loki/loki.yaml
        image: grafana/loki:2.0.0
        imagePullPolicy: IfNotPresent
        name: loki
        ports:
        - containerPort: 3100
          name: http-metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 45
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 45
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        volumeMounts:
        - name: config
          mountPath: /etc/loki
        - name: storage
          mountPath: /data
      securityContext:
        fsGroup: 10001
        runAsGroup: 10001
        runAsNonRoot: true
        runAsUser: 10001
      serviceAccount: loki
      serviceAccountName: loki
      volumes:
      - name: config
        configMap:
          defaultMode: 420
          name: loki
  volumeClaimTemplates:
  - metadata:
      name: storage
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: yizhuang-nfs
      resources:
        requests:
          storage: 100Gi
```
promtail:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: promtail
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: promtail
  namespace: kube-system
rules:
- apiGroups:
  - extensions
  resourceNames:
  - promtail
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: promtail
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: promtail
subjects:
- kind: ServiceAccount
  name: promtail
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: promtail-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: promtail-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: promtail-clusterrole
subjects:
- kind: ServiceAccount
  name: promtail
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail
  namespace: kube-system
data:
  promtail.yaml: |
    server:
      http_listen_port: 80
      log_level: "warn"
    client:
      url: http://loki.kube-system:3100/loki/api/v1/push
      batchwait: 5s
      batchsize: 5242880
      backoff_config:
        max_period: 5m
        max_retries: 10
        min_period: 1s
      external_labels: {}
      timeout: 10s
    positions:
      filename: /run/promtail/positions.yaml
    scrape_configs:
    - job_name: kubernetes-pods-name
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_label_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-app
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        source_labels:
        - __meta_kubernetes_pod_label_name
      - source_labels:
        - __meta_kubernetes_pod_label_app
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-direct-controllers
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: drop
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-indirect-controller
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: keep
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - action: replace
        regex: '([0-9a-z-.]+)-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-static
      pipeline_stages:
      - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: ''
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_label_component
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    target_config:
      sync_period: 10s
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: promtail
  namespace: kube-system
  labels:
    app: promtail
    release: promtail
spec:
  selector:
    matchLabels:
      app: promtail
      release: promtail
  template:
    metadata:
      labels:
        app: promtail
        release: promtail
    spec:
      containers:
      - args:
        - -config.file=/etc/promtail/promtail.yaml
        env:
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: grafana/promtail:2.0.0
        imagePullPolicy: IfNotPresent
        name: promtail
        ports:
        - containerPort: 80
          name: http-metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        volumeMounts:
        - name: config
          mountPath: /etc/promtail
        - name: run
          mountPath: /run/promtail
        - name: docker
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: pods
          mountPath: /var/log/pods
          readOnly: true
      serviceAccount: promtail
      serviceAccountName: promtail
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
      - name: config
        configMap:
          defaultMode: 420
          name: promtail
      - name: run
        hostPath:
          path: /run/promtail
          type: ""
      - name: docker
        hostPath:
          path: /var/lib/docker/containers
          type: ""
      - name: pods
        hostPath:
          path: /var/log/pods
          type: ""
```
Once the manifests take effect, we can go to Grafana and configure the corresponding data source.
Then wait a moment for some data to be shipped, and we can start querying.
Loki picks up all the labels we defined in Kubernetes, and in the promtail scrape configuration you can of course define more custom labels, collection rules, and so on.
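As a minimal sketch of adding a custom label: promtail's client section accepts an external_labels map, which attaches a static label to every stream it pushes (the cluster name here is just an illustrative value, not part of the manifests above):

```yaml
# promtail.yaml fragment (illustrative): every pushed stream
# additionally carries the label cluster="dev-cluster"
client:
  url: http://loki.kube-system:3100/loki/api/v1/push
  external_labels:
    cluster: dev-cluster
```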
When querying logs, we generally start with a log stream selector and then apply filter expressions.
The selector goes inside {}, for example {namespace="dev", project_name=~"cdy.+"}
= exactly equal; != not equal; =~ regex matches; !~ regex does not match
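Besides Grafana's Explore view, a selector like this can also be sent straight to Loki's HTTP API (the /loki/api/v1/query_range endpoint). This sketch only builds and URL-encodes the request, assuming Loki is reachable at the in-cluster Service address from the manifests above:

```python
from urllib.parse import urlencode

# Assumed in-cluster address of the loki Service defined earlier.
LOKI_URL = "http://loki.kube-system:3100"

def build_query_url(selector: str, limit: int = 100) -> str:
    """Build a Loki query_range URL for a given stream selector."""
    params = urlencode({"query": selector, "limit": limit})
    return f"{LOKI_URL}/loki/api/v1/query_range?{params}"

url = build_query_url('{namespace="dev", project_name=~"cdy.+"}')
print(url)
```

Fetching that URL (e.g. with curl or requests) returns the matching log streams as JSON.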
Filters are simply appended after the selector, for example {namespace="dev", project_name=~"cdy.+"} |~ "close"
|= line contains the string; != line does not contain the string; |~ line matches the regex; !~ line does not match the regex
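To make the filter semantics concrete, here is a small sketch (plain Python, not Loki itself) applying the equivalent of each operator to a few made-up log lines:

```python
import re

lines = [
    "connection opened from 10.0.0.1",
    "connection close: client timeout",
    "request handled in 12ms",
]

# |= "close"   : keep lines containing the string
contains = [l for l in lines if "close" in l]
# != "close"   : keep lines NOT containing the string
not_contains = [l for l in lines if "close" not in l]
# |~ "\d+ms"   : keep lines matching the regex
regex_match = [l for l in lines if re.search(r"\d+ms", l)]
# !~ "\d+ms"   : keep lines NOT matching the regex
regex_nomatch = [l for l in lines if not re.search(r"\d+ms", l)]

print(contains)     # the "close" line only
print(regex_match)  # the "12ms" line only
```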