
Deploying Consul

./consul agent -dev -client 192.168.p.p (the server's IP)
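If the agent starts cleanly, it can be sanity-checked from a second terminal. A minimal sketch, assuming the client address above (the real IP is elided in this article, so `192.168.p.p` is a placeholder):

```shell
# List members of the single-node dev cluster. Because -client moved the
# HTTP API off 127.0.0.1, point the CLI at the agent's address explicitly.
./consul members -http-addr=192.168.p.p:8500
```
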

Registering a Service

Service Definitions

First, create a directory for the Consul configuration. Consul loads all configuration files found in the configuration directory, so a common convention on Unix systems is to name the directory /etc/consul.d (the .d suffix means "this directory contains a set of configuration files").

sudo mkdir /etc/consul.d

Next, we'll write a service definition configuration file. Suppose we have a service named "web" running on port 80. We'll also give it a tag, which we can use as an additional way to query the service:

echo '{"service": {"name": "web", "tags": ["rails"], "port": 80}}' | sudo tee /etc/consul.d/web.json
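For readability, this is the same JSON the one-liner writes to /etc/consul.d/web.json, pretty-printed:

```json
{
  "service": {
    "name": "web",
    "tags": ["rails"],
    "port": 80
  }
}
```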

Now restart the agent, providing the configuration directory:

cd /opt
./consul agent -dev -config-dir=/etc/consul.d
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.2'
           Node ID: 'f532e531-85e3-8426-8510-6aee9ee2b500'
         Node name: 'localhost.localdomain'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/08/26 21:26:50 [DEBUG] agent: Using random ID "f532e531-85e3-8426-8510-6aee9ee2b500" as node ID
    2018/08/26 21:26:50 [WARN] agent: Node name "localhost.localdomain" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
    2018/08/26 21:26:50 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:f532e531-85e3-8426-8510-6aee9ee2b500 Address:127.0.0.1:8300}]
    2018/08/26 21:26:50 [INFO] serf: EventMemberJoin: localhost.localdomain.dc1 127.0.0.1
    2018/08/26 21:26:50 [INFO] serf: EventMemberJoin: localhost.localdomain 127.0.0.1
    2018/08/26 21:26:50 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2018/08/26 21:26:50 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
    2018/08/26 21:26:50 [INFO] consul: Adding LAN server localhost.localdomain (Addr: tcp/127.0.0.1:8300) (DC: dc1)
    2018/08/26 21:26:50 [INFO] consul: Handled member-join event for server "localhost.localdomain.dc1" in area "wan"
    2018/08/26 21:26:50 [DEBUG] agent/proxy: managed Connect proxy manager started
    2018/08/26 21:26:50 [WARN] agent/proxy: running as root, will not start managed proxies
    2018/08/26 21:26:50 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2018/08/26 21:26:50 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
    2018/08/26 21:26:50 [INFO] agent: started state syncer
    2018/08/26 21:26:50 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2018/08/26 21:26:50 [INFO] raft: Node at 127.0.0.1:8300 [Candidate] entering Candidate state in term 2
    2018/08/26 21:26:50 [DEBUG] raft: Votes needed: 1
    2018/08/26 21:26:50 [DEBUG] raft: Vote granted from f532e531-85e3-8426-8510-6aee9ee2b500 in term 2. Tally: 1
    2018/08/26 21:26:50 [INFO] raft: Election won. Tally: 1
    2018/08/26 21:26:50 [INFO] raft: Node at 127.0.0.1:8300 [Leader] entering Leader state
    2018/08/26 21:26:50 [INFO] consul: cluster leadership acquired
    2018/08/26 21:26:50 [INFO] consul: New leader elected: localhost.localdomain
    2018/08/26 21:26:50 [INFO] connect: initialized CA with provider "consul"
    2018/08/26 21:26:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
    2018/08/26 21:26:50 [INFO] consul: member 'localhost.localdomain' joined, marking health alive
    2018/08/26 21:26:50 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/08/26 21:26:50 [INFO] agent: Synced service "web"
    2018/08/26 21:26:50 [DEBUG] agent: Node info in sync
    2018/08/26 21:26:52 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/08/26 21:26:52 [DEBUG] agent: Service "web" in sync
    2018/08/26 21:26:52 [DEBUG] agent: Node info in sync
    2018/08/26 21:26:52 [DEBUG] agent: Service "web" in sync
    2018/08/26 21:26:52 [DEBUG] agent: Node info in sync
    2018/08/26 21:27:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
    2018/08/26 21:28:08 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/08/26 21:28:08 [DEBUG] agent: Service "web" in sync
    2018/08/26 21:28:08 [DEBUG] agent: Node info in sync
    2018/08/26 21:28:30 [DEBUG] dns: request for name web.service.consul. type A class IN (took 1.864898ms) from client 127.0.0.1:60925 (udp)
    2018/08/26 21:28:50 [DEBUG] manager: Rebalanced 1 servers, next active server is localhost.localdomain.dc1 (Addr: tcp/127.0.0.1:8300) (DC: dc1)
    2018/08/26 21:28:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
    2018/08/26 21:29:23 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/08/26 21:29:23 [DEBUG] agent: Service "web" in sync
    2018/08/26 21:29:23 [DEBUG] agent: Node info in sync
    2018/08/26 21:29:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
    2018/08/26 21:30:40 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/08/26 21:30:40 [DEBUG] agent: Service "web" in sync
    2018/08/26 21:30:40 [DEBUG] agent: Node info in sync
    2018/08/26 21:30:46 [DEBUG] http: Request GET /v1/health/service/web?passing (1.221711ms) from=127.0.0.1:40608
    2018/08/26 21:30:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
    2018/08/26 21:31:29 [DEBUG] manager: Rebalanced 1 servers, next active server is localhost.localdomain.dc1 (Addr: tcp/127.0.0.1:8300) (DC: dc1)
    2018/08/26 21:31:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
    2018/08/26 21:32:00 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/08/26 21:32:00 [DEBUG] agent: Service "web" in sync
    2018/08/26 21:32:00 [DEBUG] agent: Node info in sync
    2018/08/26 21:32:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
    2018/08/26 21:33:05 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/08/26 21:33:05 [DEBUG] agent: Service "web" in sync
    2018/08/26 21:33:05 [DEBUG] agent: Node info in sync
    2018/08/26 21:33:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
    2018/08/26 21:34:13 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/08/26 21:34:13 [DEBUG] agent: Service "web" in sync
    2018/08/26 21:34:13 [DEBUG] agent: Node info in sync
    2018/08/26 21:34:18 [DEBUG] manager: Rebalanced 1 servers, next active server is localhost.localdomain.dc1 (Addr: tcp/127.0.0.1:8300) (DC: dc1)
    2018/08/26 21:34:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
    2018/08/26 21:35:40 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/08/26 21:35:40 [DEBUG] agent: Service "web" in sync
    2018/08/26 21:35:40 [DEBUG] agent: Node info in sync
    2018/08/26 21:35:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
    2018/08/26 21:36:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
    2018/08/26 21:37:02 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/08/26 21:37:02 [DEBUG] agent: Service "web" in sync
    2018/08/26 21:37:02 [DEBUG] agent: Node info in sync
    2018/08/26 21:37:08 [DEBUG] manager: Rebalanced 1 servers, next active server is localhost.localdomain.dc1 (Addr: tcp/127.0.0.1:8300) (DC: dc1)
^C  2018/08/26 21:37:15 [INFO] agent: Caught signal: interrupt
    2018/08/26 21:37:15 [INFO] agent: Graceful shutdown disabled. Exiting
    2018/08/26 21:37:15 [INFO] agent: Requesting shutdown
    2018/08/26 21:37:15 [WARN] agent: dev mode disabled persistence, killing all proxies since we can't recover them
    2018/08/26 21:37:15 [DEBUG] agent/proxy: Stopping managed Connect proxy manager
    2018/08/26 21:37:15 [INFO] consul: shutting down server
    2018/08/26 21:37:15 [WARN] serf: Shutdown without a Leave
    2018/08/26 21:37:15 [WARN] serf: Shutdown without a Leave
    2018/08/26 21:37:15 [INFO] manager: shutting down
    2018/08/26 21:37:15 [INFO] agent: consul server down
    2018/08/26 21:37:15 [INFO] agent: shutdown complete
    2018/08/26 21:37:15 [INFO] agent: Stopping DNS server 127.0.0.1:8600 (tcp)
    2018/08/26 21:37:15 [INFO] agent: Stopping DNS server 127.0.0.1:8600 (udp)
    2018/08/26 21:37:15 [INFO] agent: Stopping HTTP server 127.0.0.1:8500 (tcp)
    2018/08/26 21:37:15 [INFO] agent: Waiting for endpoints to shut down
    2018/08/26 21:37:15 [INFO] agent: Endpoints down
    2018/08/26 21:37:15 [INFO] agent: Exit code: 1
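Restarting is convenient in dev mode, but a running agent does not have to be restarted to pick up new service definition files. A sketch, assuming the agent is running with its HTTP API on the default local address:

```shell
# After adding or editing files under /etc/consul.d, ask the running agent
# to re-read its configuration (equivalent to sending it SIGHUP):
consul reload
```
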

Querying Services

Once the agent has started and the service has synced, we can query the service using either the DNS API or the HTTP API.

DNS API

Let's first query our service using the DNS API. For the DNS API, the DNS name of a service is NAME.service.consul. By default, all DNS names live under the consul domain, though this is configurable. The service subdomain tells Consul we are querying services, and NAME is the name of the service.

For the web service we registered, these conventions and settings yield the fully qualified domain name web.service.consul:

dig @127.0.0.1 -p 8600 web.service.consul

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> @127.0.0.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5363
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;web.service.consul.        IN  A

;; ANSWER SECTION:
web.service.consul. 0   IN  A   127.0.0.1

;; ADDITIONAL SECTION:
web.service.consul. 0   IN  TXT "consul-network-segment="

;; Query time: 3 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Sun Aug 26 21:28:30 EDT 2018
;; MSG SIZE  rcvd: 99
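The A record above returns only the address. The same DNS interface can also report the registered port (80) via an SRV lookup against the same name, which saves a second query when the port is not fixed:

```shell
# SRV records carry the service port as well as the target node:
dig @127.0.0.1 -p 8600 web.service.consul SRV
```
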

Finally, we can also use the DNS API to filter services by tag. The format of a tag-based service query is TAG.NAME.service.consul. In the example below, we ask Consul for all web services with the "rails" tag. Since we registered our service with that tag, we get a successful response:

dig @127.0.0.1 -p 8600 rails.web.service.consul

HTTP API

In addition to the DNS API, the HTTP API can also be used to query services:

curl http://localhost:8500/v1/catalog/service/web

The catalog API lists all nodes hosting a given service. As we will see later with health checks, you will typically want to query only healthy instances whose checks are passing. This is what DNS is doing under the hood. Here is a query that looks up only healthy instances:

curl 'http://localhost:8500/v1/health/service/web?passing'
[
    {
        "Node": {
            "ID": "f532e531-85e3-8426-8510-6aee9ee2b500",
            "Node": "localhost.localdomain",
            "Address": "127.0.0.1",
            "Datacenter": "dc1",
            "TaggedAddresses": {
                "lan": "127.0.0.1",
                "wan": "127.0.0.1"
            },
            "Meta": {
                "consul-network-segment": ""
            },
            "CreateIndex": 9,
            "ModifyIndex": 10
        },
        "Service": {
            "ID": "web",
            "Service": "web",
            "Tags": [
                "rails"
            ],
            "Address": "",
            "Meta": null,
            "Port": 80,
            "EnableTagOverride": false,
            "ProxyDestination": "",
            "Connect": {
                "Native": false,
                "Proxy": null
            },
            "CreateIndex": 10,
            "ModifyIndex": 10
        },
        "Checks": [
            {
                "Node": "localhost.localdomain",
                "CheckID": "serfHealth",
                "Name": "Serf Health Status",
                "Status": "passing",
                "Notes": "",
                "Output": "Agent alive and reachable",
                "ServiceID": "",
                "ServiceName": "",
                "ServiceTags": [],
                "Definition": {},
                "CreateIndex": 9,
                "ModifyIndex": 9
            }
        ]
    }
]
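A client usually needs only two fields from that response: the node address and the service port. A minimal sketch of extracting them with plain grep/sed, run here against a saved copy of the response so it works without a live agent (with `jq` installed, `jq -r '.[0].Node.Address'` would be the cleaner choice):

```shell
# Save a trimmed copy of the health-API response shown above.
cat > /tmp/web-health.json <<'EOF'
[{"Node":{"Address":"127.0.0.1"},"Service":{"Service":"web","Port":80}}]
EOF

# Pull out the first node address and the service port.
addr=$(grep -o '"Address":"[^"]*"' /tmp/web-health.json | head -n 1 | sed 's/.*:"\(.*\)"/\1/')
port=$(grep -o '"Port":[0-9]*' /tmp/web-health.json | sed 's/[^0-9]//g')
echo "web is at ${addr}:${port}"   # prints: web is at 127.0.0.1:80
```

The same two grep patterns work on the live output of `curl -s 'http://localhost:8500/v1/health/service/web?passing'`, though a real JSON parser is more robust against formatting changes.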