Prometheus Scrape Configuration for an API Behind Username/Password Authentication (NGINX)
Posted by 阿新 on 2021-11-25
Official configuration guide
Official documentation: Prometheus Configuration
```yaml
# The job name assigned to scraped metrics by default.
job_name: <job_name>

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Per-scrape timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

# honor_labels controls how Prometheus handles conflicts between labels that are
# already present in scraped data and labels that Prometheus would attach
# server-side ("job" and "instance" labels, manually configured target
# labels, and labels generated by service discovery implementations).
#
# If honor_labels is set to "true", label conflicts are resolved by keeping label
# values from the scraped data and ignoring the conflicting server-side labels.
#
# If honor_labels is set to "false", label conflicts are resolved by renaming
# conflicting labels in the scraped data to "exported_<original-label>" (for
# example "exported_instance", "exported_job") and then attaching server-side
# labels.
#
# Setting honor_labels to "true" is useful for use cases such as federation and
# scraping the Pushgateway, where all labels specified in the target should be
# preserved.
#
# Note that any globally configured "external_labels" are unaffected by this
# setting. In communication with external systems, they are always applied only
# when a time series does not have a given label yet and are ignored otherwise.
[ honor_labels: <boolean> | default = false ]

# honor_timestamps controls whether Prometheus respects the timestamps present
# in scraped data.
#
# If honor_timestamps is set to "true", the timestamps of the metrics exposed
# by the target will be used.
#
# If honor_timestamps is set to "false", the timestamps of the metrics exposed
# by the target will be ignored.
[ honor_timestamps: <boolean> | default = true ]

# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]

# Optional HTTP URL parameters.
params:
  [ <string>: [<string>, ...] ]

# Sets the `Authorization` header on every scrape request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Sets the `Authorization` header on every scrape request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <secret> ]

# Sets the `Authorization` header on every scrape request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: <filename> ]
```
If you read this carefully, two fields should catch your attention: metrics_path and basic_auth. metrics_path specifies the HTTP route from which metrics are collected, with a default of /metrics. The basic_auth field configures the credentials used for authentication, and the password can be supplied via a password file rather than written inline (generally, pointing at a password file is somewhat safer than embedding the password in plaintext).
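For example, to keep the password out of the main configuration file, a job can reference a password file instead of an inline password. This is a sketch: the file path /etc/prometheus/secrets/web.password is hypothetical, and the job/target names reuse the article's example:

```yaml
- job_name: 'web'
  metrics_path: /status/format/prometheus
  static_configs:
    - targets: ['www.weishidong.com']
  basic_auth:
    username: weishidong
    # password and password_file are mutually exclusive; set only one.
    password_file: /etc/prometheus/secrets/web.password
```

The file should contain only the password, and can be protected with restrictive filesystem permissions so it never appears in the main config.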
A working configuration
Following the official documentation, we can quickly derive the correct configuration:
```yaml
- job_name: 'web'
  metrics_path: /status/format/prometheus
  static_configs:
    - targets: ['www.weishidong.com']
  basic_auth:
    username: weishidong
    password: 0099887kk
```
Note that the target does not need an http:// prefix, because Prometheus's default scheme is already http. If the target is served over https, then, per the documentation, we need to add the scheme field, and the configuration becomes:
```yaml
- job_name: 'web'
  metrics_path: /status/format/prometheus
  static_configs:
    - targets: ['www.weishidong.com']
  scheme: https
  basic_auth:
    username: weishidong
    password: 0099887kk
```
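On the NGINX side, the metrics path /status/format/prometheus used above is typical of the nginx-module-vts module, and the username/password prompt is typically produced by HTTP basic authentication. The following server block is a sketch under those assumptions (the server name matches the article's target; the htpasswd path is hypothetical), not a configuration taken from the article:

```nginx
server {
    listen       80;
    server_name  www.weishidong.com;

    location /status {
        # Metrics endpoint (assumption: provided by nginx-module-vts, which
        # also exposes the Prometheus format at /status/format/prometheus).
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;

        # HTTP basic authentication; the credentials in this htpasswd file
        # must match the basic_auth block in the Prometheus scrape config.
        auth_basic           "Prometheus metrics";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
```

With this in place, an unauthenticated scrape gets an HTTP 401, while a scrape carrying the configured basic_auth credentials succeeds.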
Once this configuration is in place, Prometheus should scrape the data successfully; paired with Grafana, you get the monitoring dashboard shown at the beginning of this article.
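Under the hood, the basic_auth block simply makes Prometheus attach an `Authorization: Basic <base64(username:password)>` header to every scrape request. This small Python sketch (the helper function name is ours, not part of Prometheus) shows the header value that would be sent for the example credentials, which is also handy for debugging with curl:

```python
import base64


def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header value that HTTP basic auth
    (and hence a Prometheus basic_auth scrape config) sends."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"


# Header for the article's example credentials:
print(basic_auth_header("weishidong", "0099887kk"))
```

You can reproduce the same request manually with `curl -u weishidong:0099887kk http://www.weishidong.com/status/format/prometheus` to confirm the endpoint responds before wiring it into Prometheus.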