
SaltStack syndic installation, configuration, and usage


What is salt-syndic for? If you know zabbix proxy, it is easy to understand. "Syndic" means a trustee or agent; frankly, if it were called salt-proxy it would be even clearer. It is a proxy layer, much like zabbix proxy: it sits between the master and its minions so they no longer need to talk to each other directly, only to the syndic. That keeps the architecture clean when you span server rooms, and it is even reasonable to co-locate zabbix proxy and salt-syndic on the same box.

In this walkthrough, node2 acts as the proxy for node3, putting node3 under the control of node1 (the master).
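The lab topology, roughly (IPs taken from the configs below; node1 and node2 each also run a minion of their own, which is why three minions answer later):

node1   salt-master                  192.168.56.11
  │   ZeroMQ 4505/4506
node2   salt-master + salt-syndic    192.168.56.12
  │   ZeroMQ 4505/4506
node3   salt-minion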

Configure node1 (the master):

[root@linux-node1 ~]# grep "^[a-Z]" /etc/salt/master
default_include: master.d/*.conf
file_roots:
order_masters: True                 # add this: allow lower-level masters under this one

Install and configure node2 (the syndic):

[root@linux-node2 salt]# yum install salt-syndic -y
[root@linux-node2 salt]# cd /etc/salt/
[root@linux-node2 salt]# grep "^[a-Z]" proxy
master: 192.168.56.11                           # in the proxy file
[root@linux-node2 salt]# grep "^[a-Z]" master
syndic_master: 192.168.56.11                    # in the master file: the upper-level master (node1)
[root@linux-node2 salt]# systemctl start salt-master.service
[root@linux-node2 salt]# systemctl start salt-syndic.service
[root@linux-node2 salt]# netstat -tpln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN   1/systemd
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   998/sshd
tcp        0      0 0.0.0.0:4505       0.0.0.0:*          LISTEN   6013/python
tcp        0      0 0.0.0.0:4506       0.0.0.0:*          LISTEN   6019/python
tcp6       0      0 :::111             :::*               LISTEN   1/systemd
tcp6       0      0 :::22              :::*               LISTEN   998/sshd
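Note that salt-syndic authenticates to the upper master just like a minion, so its key must also be accepted on node1. In this lab node2's key is already accepted (see node1's salt-key -L output further down); on a fresh setup it would be roughly:

[root@linux-node1 ~]# salt-key -L                           # the syndic's key shows up as unaccepted
[root@linux-node1 ~]# salt-key -a linux-node2.example.com   # accept it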

Install a normal minion on node3:

[root@linux-node3 salt]# yum install salt-minion -y
[root@linux-node3 salt]# cd /etc/salt/
[root@linux-node3 salt]# grep "^[a-Z]" minion
master: 192.168.56.12                           # node3 only needs to know node2; it is unaware of node1
[root@linux-node3 salt]# systemctl start salt-minion

Then go back to node2 (the syndic):

[root@linux-node2 salt]# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
linux-node3.example.com
Rejected Keys:
[root@linux-node2 salt]# salt-key -A            # accept the key
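An optional sanity check from the syndic at this point (targeting the id just accepted) should come back True:

[root@linux-node2 salt]# salt 'linux-node3*' test.ping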

Finally, back on node1 (the master):

[root@linux-node1 ~]# salt-key -L               # note: linux-node3.example.com is not listed here
Accepted Keys:
linux-node1.example.com
linux-node2.example.com
Denied Keys:
Unaccepted Keys:
Rejected Keys:
[root@linux-node1 ~]# salt '*' test.ping
linux-node2.example.com:
    True
linux-node1.example.com:
    True
linux-node3.example.com:                        # yet it still answers, through the syndic
    True
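As a cross-check, the manage runner on the top-level master also counts minions that are only reachable through the syndic:

[root@linux-node1 ~]# salt-run manage.status    # lists minions up/down; node3 should appear as up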

Same-level and multi-level syndics are configured the same way; the only thing to keep straight is which upper-level master each syndic reports to.
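For example, a hypothetical third layer with a node4 syndic hanging off node2 would need just two changes: node2's master config gains order_masters: True (it now has a master below it), and node4 points syndic_master at node2 (node4 and its address are made up for illustration):

[root@linux-node2 salt]# grep "^[a-Z]" /etc/salt/master
syndic_master: 192.168.56.11                    # node2 still reports to node1
order_masters: True                             # and now allows masters below itself
[root@linux-node4 ~]# grep "^[a-Z]" /etc/salt/master
syndic_master: 192.168.56.12                    # node4's upper-level master is node2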

A few common questions come up here.

1. Can minion ids under different syndics collide? For example, if node3's id is changed to node2's, what happens when a command is run from the top master?

First we need to change a minion id. Remember the procedure for changing an id?

[root@linux-node3 salt]# systemctl stop salt-minion.service     # stop the minion
[root@linux-node2 salt]# salt-key -L            # on node2 (the syndic): node3 sees node2 as its master and sent its key there, so delete it here
Accepted Keys:
linux-node3.example.com
Denied Keys:
Unaccepted Keys:
Rejected Keys:
[root@linux-node2 salt]# salt-key -d linux-node3.example.com
The following keys are going to be deleted:
Accepted Keys:
linux-node3.example.com
Proceed? [N/y] y
Key for minion linux-node3.example.com deleted.
[root@linux-node3 salt]# rm -fr /etc/salt/pki/minion/           # on node3, delete everything under /etc/salt/pki/minion
[root@linux-node3 salt]# grep "^[a-Z]" /etc/salt/minion         # set the new id
master: 192.168.56.12
id: linux-node2.example.com                                     # deliberately reuse an existing id for this test
[root@linux-node3 salt]# systemctl start salt-minion.service    # start the minion again
[root@linux-node2 salt]# salt-key -L                            # back on node2, accept the key for the new id
Accepted Keys:
Denied Keys:
Unaccepted Keys:
linux-node2.example.com
Rejected Keys:
[root@linux-node2 salt]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
linux-node2.example.com
Proceed? [n/Y] Y
Key for minion linux-node2.example.com accepted.
[root@linux-node2 salt]# salt '*' test.ping                     # quick check
linux-node2.example.com:
    True

Finally, verify the result back on node1 (the master):

[root@linux-node1 ~]# salt '*' test.ping
linux-node2.example.com:
    True
linux-node1.example.com:
    True
linux-node2.example.com:
    True

And there it is: linux-node2.example.com appears twice. The outcome was predictable, but in real use there would be no way to tell which machine is which, so duplicate ids are just as bad behind a syndic as anywhere else. Keep ids distinct. The simplest approach is to set sensible hostnames: no machine will ever collide, and you can skip configuring id entirely. (I have since changed node3's id back.)
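A minimal sketch of that hostname-based approach (on systemd boxes like these, where hostnamectl is available): when id: is left unset, the minion derives its id from the machine's FQDN, so unique hostnames yield unique ids for free.

[root@linux-node3 ~]# hostnamectl set-hostname linux-node3.example.com
[root@linux-node3 ~]# grep "^id" /etc/salt/minion               # no output: id is unset, the FQDN is used
[root@linux-node3 ~]# systemctl restart salt-minion.service     # re-register under the derived id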

2. Remote execution is fine, but does this architecture affect the execution of state files?

[root@linux-node1 base]# pwd                    # define the top file on the master
/srv/salt/base
[root@linux-node1 base]# cat top.sls            # it simply pushes one file out to everyone
base:
  '*':
    - known-hosts.known-hosts
[root@linux-node1 base]# cat known-hosts/known-hosts.sls
known-hosts:
  file.managed:
    - name: /root/.ssh/known_hosts
    - source: salt://known-hosts/templates/known-hosts
    - clean: True
[root@linux-node1 base]# salt '*' state.highstate
linux-node3.example.com:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found.
     Changes:

Summary for linux-node3.example.com
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
linux-node2.example.com:
----------
          ID: known-hosts
    Function: file.managed
        Name: /root/.ssh/known_hosts
      Result: True
     Comment: File /root/.ssh/known_hosts updated
     Started: 11:15:35.210699
    Duration: 37.978 ms
     Changes:
              ----------
              diff:
                  New file
              mode:
                  0644

Summary for linux-node2.example.com
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:  37.978 ms
linux-node1.example.com:
----------
          ID: known-hosts
    Function: file.managed
        Name: /root/.ssh/known_hosts
      Result: True
     Comment: File /root/.ssh/known_hosts is in the correct state
     Started: 11:15:35.226119
    Duration: 51.202 ms
     Changes:

Summary for linux-node1.example.com
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time:  51.202 ms
ERROR: Minions returned with non-zero exit code
Node3 clearly failed while node1 and node2 behaved normally, which is easy to understand. Node3's error, "No Top file or master_tops data matches found", says plainly that no matching top file was found. The obvious inference: node3 authenticates against node2, and node2 has no top file yet. So let's write a different top file on node2 and test again:
[root@linux-node2 base]# pwd
/srv/salt/base
[root@linux-node2 base]# cat top.sls            # even simpler: it just runs ls /root
base:
  '*':
    - cmd.cmd
[root@linux-node2 base]# cat cmd/cmd.sls
cmd:
  cmd.run:
    - name: ls /root
Back on the master, run the test again; the normal output from node1 and node2 is omitted:
[root@linux-node1 base]# salt '*' state.highstate
linux-node3.example.com:
----------
          ID: cmd
    Function: cmd.run
        Name: ls /root
      Result: True
     Comment: Command "ls /root" run
     Started: 11:24:42.752326
    Duration: 11.944 ms
     Changes:
              ----------
              pid:
                  5095
              retcode:
                  0
              stderr:
              stdout:
                  lvs.sh

Summary for linux-node3.example.com
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:  11.944 ms
A pattern is already visible. Let's modify the master's top file once more and run the test again:
[root@linux-node1 base]# cat top.sls
base:
  linux-node3.example.com:                      # target node3 only
    - known-hosts.known-hosts
[root@linux-node1 base]# salt '*' state.highstate
linux-node3.example.com:
----------
          ID: cmd
    Function: cmd.run
        Name: ls /root
      Result: True
     Comment: Command "ls /root" run
     Started: 11:28:20.792475
    Duration: 8.686 ms
     Changes:
              ----------
              pid:
                  5283
              retcode:
                  0
              stderr:
              stdout:
                  lvs.sh

Summary for linux-node3.example.com
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:   8.686 ms
linux-node2.example.com:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found.
     Changes:

Summary for linux-node2.example.com
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
linux-node1.example.com:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found.
     Changes:

Summary for linux-node1.example.com
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
ERROR: Minions returned with non-zero exit code
This time node1 and node2 hit the error from before, while node3 executed the top file defined on node2. Time for 小北方 to sum things up.

北方's summary: every minion looks up and executes the top file defined on its own master. node1 and node2 follow the top-level master's top file, while node3 follows the syndic's (node2's).

"No Top file or master_tops data matches found" appears because every run was `salt '*' state.highstate`, which asks every machine to consult a top file and perform whatever matches it. The first time, node3 failed because the top file it obeys lives on the syndic, and none had been written there yet, so nothing matched it. The second time, the master's top file was narrowed from `*` to node3 alone; node1 and node2 still received the highstate instruction, searched the top file for an entry matching themselves, found none, and failed for exactly the same reason node3 did earlier.
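If you cannot keep the file roots in sync right away, one way to sidestep the mismatch is to target only the minions whose own master actually defines states for them, e.g. with list targeting (using the ids from this lab):

[root@linux-node1 base]# salt -L 'linux-node1.example.com,linux-node2.example.com' state.highstate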
If that is still hard to digest all at once, there is a standard, disciplined way out: copy the master's file directories verbatim to every syndic. Then every operation behaves consistently, exactly as if there were no proxy layer at all.
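A minimal sketch of that copy, assuming root ssh access from node1 to each syndic and the default /srv/salt file roots used in this post:

[root@linux-node1 ~]# rsync -av --delete /srv/salt/ 192.168.56.12:/srv/salt/   # rerun for every syndic after each change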

3. Top files are a hassle. What happens if I simply run an sls file directly?

[root@linux-node1 base]# salt '*' state.sls known-hosts.known-hosts
linux-node3.example.com:
    Data failed to compile:
----------
    No matching sls found for known-hosts.known-hosts in env base
linux-node2.example.com:
----------
          ID: known-hosts
    Function: file.managed
        Name: /root/.ssh/known_hosts
      Result: True
     Comment: File /root/.ssh/known_hosts is in the correct state
     Started: 11:46:03.968021
    Duration: 870.596 ms
     Changes:

Summary for linux-node2.example.com
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time: 870.596 ms
linux-node1.example.com:
----------
          ID: known-hosts
    Function: file.managed
        Name: /root/.ssh/known_hosts
      Result: True
     Comment: File /root/.ssh/known_hosts is in the correct state
     Started: 11:46:05.003462
    Duration: 42.02 ms
     Changes:

Summary for linux-node1.example.com
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time:  42.020 ms
ERROR: Minions returned with non-zero exit code
Node3 fails again, this time with "No matching sls found for known-hosts.known-hosts in env base". No verification is needed; it is the same story, restated:

Every minion looks up and executes the sls files defined on its own master. node1 and node2 resolve them against the top-level master's file roots, while node3 resolves them against the syndic's (node2's).

So if you defined a known-hosts sls on the syndic that performed some completely different operation, node3 would happily run that instead. Nobody should build things that chaotically, so once more: keep the file roots on every syndic identical to the master's!
