Elasticsearch + Logstash + Kibana + SearchGuard Installation and Deployment (provided by 燁哥)
Posted by 阿新 on 2022-03-16
Environment
OS and Java versions

OS version | Java version
---|---
CentOS 7.4.1708 | 1.8
Elasticsearch nodes
Add these hostname entries to /etc/hosts on all three servers

IP | Elasticsearch node
---|---
10.3.245.25 | node-25
10.3.245.40 | node-40
10.3.245.65 | node-65
ELK version information

Elasticsearch version | Kibana version | Logstash version | SearchGuard version
---|---|---|---
6.6.1 | 6.6.1 | 6.6.1 | 25.5
SearchGuard plugin versions

SearchGuard plugin for ES | SearchGuard plugin for Kibana
---|---
25.5 | 18.5
Java Installation
Required on all three servers; the Java version is 1.8.
Extract and configure environment variables
# Extract
tar fzx jdk-8u181-linux-x64.tar.gz
# Move to /usr/local
mv jdk1.8.0_181 /usr/local/
# Configure environment variables: append the following two lines to the end of /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_181
export PATH=$JAVA_HOME/bin:$PATH
# Reload so the changes take effect
source /etc/profile
Verify the installation
# Check that java is on the PATH
java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
Elasticsearch Installation
Java must be installed before ELK; install Elasticsearch on all three servers.
The ES version is 6.6.1.
Basic configuration
# Install from the rpm package
yum localinstall elasticsearch-6.6.1.rpm -y
# Set the JVM heap to 4 GB; with more memory available, 8 GB or 16 GB is fine (this machine is small, so 4 GB)
vim /etc/elasticsearch/jvm.options
-Xms4g
-Xmx4g
# Elasticsearch settings, with 10.3.245.25 as the example
grep -Ev "^$|^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: elk
node.name: node-25
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 10.3.245.25
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["node-25", "node-65","node-40"]
discovery.zen.minimum_master_nodes: 2
# Restart Elasticsearch
/etc/init.d/elasticsearch restart
Search Guard Plugin Installation
The Search Guard plugin version matching your Elasticsearch version must be looked up and downloaded from the Search Guard website.
Install the plugin
# Change to the ES bin directory
cd /usr/share/elasticsearch/bin/
# There are two installation methods.
# Method 1: online install
./elasticsearch-plugin install com.floragunn:search-guard-6:6.6.1-25.5
# Method 2: offline install
./elasticsearch-plugin install -b file:///root/search-guard-6-6.6.1-25.5.zip
# This guide uses the second (offline) method
Configure Elasticsearch
# Install the demo configuration
cd /usr/share/elasticsearch/plugins/search-guard-6/tools
ls
hash.bat hash.sh install_demo_configuration.sh sgadmin.bat sgadmin_demo.sh sgadmin.sh
chmod +x install_demo_configuration.sh
./install_demo_configuration.sh    # answer y to all three prompts
# After the install completes, several parameters are appended to /etc/elasticsearch/elasticsearch.yml.
# Switch from https to http by setting searchguard.ssl.http.enabled to false.
# The finished configuration looks like this:
grep -Ev "^$|^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: elk
node.name: node-25
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 10.3.245.25
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["node-25", "node-65","node-40"]
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: "Authorization"
searchguard.ssl.transport.pemcert_filepath: esnode.pem
searchguard.ssl.transport.pemkey_filepath: esnode-key.pem
searchguard.ssl.transport.pemtrustedcas_filepath: root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.http.enabled: false
searchguard.ssl.http.pemcert_filepath: esnode.pem
searchguard.ssl.http.pemkey_filepath: esnode-key.pem
searchguard.ssl.http.pemtrustedcas_filepath: root-ca.pem
searchguard.allow_unsafe_democertificates: true
searchguard.allow_default_init_sgindex: true
searchguard.authcz.admin_dn:
- CN=kirk,OU=client,O=client,L=test, C=de
searchguard.audit.type: internal_elasticsearch
searchguard.enable_snapshot_restore_privilege: true
searchguard.check_snapshot_restore_write_privileges: true
searchguard.restapi.roles_enabled: ["sg_all_access"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3
xpack.security.enabled: false
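The https-to-http change noted above can also be made non-interactively with sed. A minimal sketch, demonstrated on a temp copy so it is safe to run anywhere; for real use, point it at /etc/elasticsearch/elasticsearch.yml (and back that file up first):

```shell
# Flip the demo HTTPS setting to HTTP. /tmp/es-sg-demo.yml is a stand-in
# for /etc/elasticsearch/elasticsearch.yml.
cfg=/tmp/es-sg-demo.yml
echo 'searchguard.ssl.http.enabled: true' > "$cfg"
sed -i 's/^searchguard.ssl.http.enabled: true$/searchguard.ssl.http.enabled: false/' "$cfg"
cat "$cfg"   # → searchguard.ssl.http.enabled: false
```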
# One more step on each ES node:
# append the following to /etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
elasticsearch soft nproc 10240
elasticsearch hard nproc 10240
elasticsearch soft nofile 65535
elasticsearch hard nofile 65535
# Restart the elasticsearch service
/etc/init.d/elasticsearch restart
SearchGuard Multi-User Setup
Directory location
/usr/share/elasticsearch/plugins/search-guard-6/sgconfig
What each SearchGuard file does
- sg_internal_users.yml: stores usernames and password hashes; hashes can be generated with the bundled tools/hash.sh
- sg_roles.yml: permission definitions; describes what each role may do
- sg_roles_mapping.yml: role mappings; assigns roles to users or to backend groups
- sg_action_groups.yml: named groups of actions that relate user operations to ES indices
- sg_config.yml: global settings
Example (creating a user)
Note: on a cluster, perform each of the following steps on every server.
- Create the user's password hash
# cd /usr/share/elasticsearch/plugins/search-guard-6/tools
# ls
hash.bat hash.sh install_demo_configuration.sh sgadmin.bat sgadmin_demo.sh sgadmin.sh
# sh hash.sh -p oapassword
$2y$12$d5adFlpwkVFfyL7awgSbPekVsi7v7vfrNFQWCH98/7Oh4dtCHH5Iy
# Edit sg_internal_users.yml and append the following lines at the end
# password is: oapassword
oauser:
  hash: $2y$12$d5adFlpwkVFfyL7awgSbPekVsi7v7vfrNFQWCH98/7Oh4dtCHH5Iy
  roles:
    - sg_oauser
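As a quick sanity check before pasting a hash into sg_internal_users.yml: hash.sh emits bcrypt hashes, which here start with $2y$ (other bcrypt variants such as $2a$/$2b$ exist, so this is a rough check only). A small sketch using the demo hash from above:

```shell
# Verify a generated hash looks like the bcrypt format hash.sh produces.
hash='$2y$12$d5adFlpwkVFfyL7awgSbPekVsi7v7vfrNFQWCH98/7Oh4dtCHH5Iy'
case "$hash" in
  '$2y$'*) echo "bcrypt: ok" ;;
  *)       echo "unexpected hash format" ;;
esac
```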
- Create the role sg_oauser
# vim sg_roles.yml
# Append the following at the end
sg_oauser:
  cluster:
    - UNLIMITED
  indices:
    'kibana*':
      '*':
        - READ
    '?nkibana*':
      '*':
        - READ
    'logstash-oa*':
      '*':
        - CRUD
    'oa-*':
      '*':
        - CRUD
- Map the role to the user
# vim sg_roles_mapping.yml
# Append the following at the end
sg_oauser:
  readonly: true
  backendroles:
    - sg_oauser
- Push the updated configuration to the cluster
# cd /usr/share/elasticsearch/plugins/search-guard-6/tools
# ls
hash.bat hash.sh install_demo_configuration.sh sgadmin.bat sgadmin_demo.sh sgadmin.sh
# ./sgadmin.sh -cd ../sgconfig/ -icl -nhnv -cacert /etc/elasticsearch/root-ca.pem -cert /etc/elasticsearch/kirk.pem -key /etc/elasticsearch/kirk-key.pem -h <node IP>
Permission reference (screenshots in the original, not reproduced here): permission definitions, permission configuration, and permission mappings.
Verifying Elasticsearch
Open http://10.3.245.25:9200 in a browser. You should get a password prompt; SearchGuard's default username and password are both admin. A successful request returns:
{
name: "node-25",
cluster_name: "elk",
cluster_uuid: "bzeCw1L9RGi1dlCqOXDC4A",
version: {
number: "6.6.1",
build_flavor: "default",
build_type: "rpm",
build_hash: "1fd8f69",
build_date: "2019-02-13T17:10:04.160291Z",
build_snapshot: false,
lucene_version: "7.6.0",
minimum_wire_compatibility_version: "5.6.0",
minimum_index_compatibility_version: "5.0.0"
},
tagline: "You Know, for Search"
}
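The same check can be scripted instead of done in a browser. Below is a minimal sketch that extracts the status field from a cluster-health response; the sample JSON is inlined so the snippet runs anywhere, but against a live node you would pipe in `curl -s -u admin:admin http://10.3.245.25:9200/_cluster/health` instead:

```shell
# Pull the "status" value out of a one-line cluster-health JSON response.
es_status() {
  grep -o '"status":"[a-z]*"' | cut -d'"' -f4
}

# Inlined sample response; for real use, feed curl output into es_status.
echo '{"cluster_name":"elk","status":"green","number_of_nodes":3}' | es_status   # → green
```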
Troubleshooting
If Elasticsearch fails to restart, check the log at /var/log/elasticsearch/elk.log (elk is the cluster name) for:
- configuration parameters that are mistyped or missing
- a log file path that could not be created because of permission problems
- a Java environment that was not detected; if so, set JAVA_HOME in /etc/sysconfig/elasticsearch
Kibana Installation
Kibana also depends on Java.
The Kibana version is 6.6.1.
Install
yum localinstall kibana-6.6.1-x86_64.rpm -y
Install the SearchGuard plugin
# Change to the kibana bin directory
cd /usr/share/kibana/bin/
ls
kibana kibana-keystore kibana-plugin
# Install the plugin
./kibana-plugin install file:///root/search-guard-kibana-plugin-6.6.1-18.5.zip
Configuration
# kibana configuration file
grep -Ev "^$|^#" /etc/kibana/kibana.yml
server.port: 5601
server.host: "10.3.245.25"
server.name: "kibana-server"
elasticsearch.hosts: ["http://10.3.245.25:9200","http://10.3.245.40:9200","http://10.3.245.65:9200"]
kibana.index: ".kibana"
elasticsearch.username: "admin"
elasticsearch.password: "admin"
elasticsearch.pingTimeout: 1500
elasticsearch.requestTimeout: 30000
elasticsearch.requestHeadersWhitelist: [ "authorization","sgtenant" ]
elasticsearch.shardTimeout: 30000
elasticsearch.startupTimeout: 5000
xpack.graph.enabled: false
xpack.ml.enabled: false
xpack.watcher.enabled: false
xpack.security.enabled: false
# Restart kibana
/etc/init.d/kibana restart
Accessing Kibana
Log in with the username and password admin/admin.
Logstash Installation
Download and install
The logstash version is 6.6.1.
# Install the rpm with yum
yum localinstall logstash-6.6.1.rpm -y
# After installation, logstash lives under /usr/share/logstash
Download the logstash pattern directory
Configuration files
Set the heap in /etc/logstash/jvm.options to 4 GB, or more if memory allows:
-Xms4g
-Xmx4g
# Edit the pipeline configuration files
cd /etc/logstash
ls /etc/logstash/
conf.d jvm.options log4j2.properties logstash-sample.conf logstash.yml pipelines.yml startup.options
# Start command (by default logstash ships no start/stop script)
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d -r -l /tmp
# --path.settings  directory containing logstash.yml
# -f /etc/logstash/conf.d  path to the pipeline (data processing) configs
# -r  reload automatically when a config changes
# -l  log directory
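Since no start/stop script is shipped, one option is a minimal systemd unit. This is a sketch only: the ExecStart line mirrors the command above, while the unit file name, user, and restart policy are assumptions to adapt:

```ini
[Unit]
Description=Logstash
After=network.target

[Service]
User=logstash
Group=logstash
ExecStart=/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d -r
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Save it as /etc/systemd/system/logstash.service, then run systemctl daemon-reload and systemctl start logstash. The rpm may also bundle /usr/share/logstash/bin/system-install, which can generate a service file for you.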
# logstash usage
Usage:
bin/logstash [OPTIONS]
Options:
-n, --node.name NAME Specify the name of this logstash instance, if no value is given
it will default to the current hostname.
(default: "m8-ops-elk-02.test.xesv5.com")
-f, --path.config CONFIG_PATH Load the logstash config from a specific file
or directory. If a directory is given, all
files in that directory will be concatenated
in lexicographical order and then parsed as a
single config file. You can also specify
wildcards (globs) and any matched files will
be loaded in the order described above.
-e, --config.string CONFIG_STRING Use the given string as the configuration
data. Same syntax as the config file. If no
input is specified, then the following is
used as the default input:
"input { stdin { type => stdin } }"
and if no output is specified, then the
following is used as the default output:
"output { stdout { codec => rubydebug } }"
If you wish to use both defaults, please use
the empty string for the '-e' flag.
(default: nil)
--field-reference-parser MODE Use the given MODE when parsing field
references.
The field reference parser is used to expand
field references in your pipeline configs,
and will be becoming more strict to better
handle illegal and ambiguous inputs in a
future release of Logstash.
Available MODEs are:
- `LEGACY`: parse with the legacy parser,
which is known to handle ambiguous- and
illegal-syntax in surprising ways;
warnings will not be emitted.
- `COMPAT`: warn once for each distinct
ambiguous- or illegal-syntax input, but
continue to expand field references with
the legacy parser.
- `STRICT`: parse in a strict manner; when
given ambiguous- or illegal-syntax input,
raises a runtime exception that should
be handled by the calling plugin.
The MODE can also be set with
`config.field_reference.parser`
(default: "COMPAT")
--modules MODULES Load Logstash modules.
Modules can be defined using multiple instances
'--modules module1 --modules module2',
or comma-separated syntax
'--modules=module1,module2'
Cannot be used in conjunction with '-e' or '-f'
Use of '--modules' will override modules declared
in the 'logstash.yml' file.
-M, --modules.variable MODULES_VARIABLE Load variables for module template.
Multiple instances of '-M' or
'--modules.variable' are supported.
Ignored if '--modules' flag is not used.
Should be in the format of
'-M "MODULE_NAME.var.PLUGIN_TYPE.PLUGIN_NAME.VARIABLE_NAME=VALUE"'
as in
'-M "example.var.filter.mutate.fieldname=fieldvalue"'
--setup Load index template into Elasticsearch, and saved searches,
index-pattern, visualizations, and dashboards into Kibana when
running modules.
(default: false)
--cloud.id CLOUD_ID Sets the elasticsearch and kibana host settings for
module connections in Elastic Cloud.
Your Elastic Cloud User interface or the Cloud support
team should provide this.
Add an optional label prefix '<label>:' to help you
identify multiple cloud.ids.
e.g. 'staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy'
--cloud.auth CLOUD_AUTH Sets the elasticsearch and kibana username and password
for module connections in Elastic Cloud
e.g. 'username:<password>'
--pipeline.id ID Sets the ID of the pipeline.
(default: "main")
-w, --pipeline.workers COUNT Sets the number of pipeline workers to run.
(default: 4)
--java-execution Use Java execution engine.
(default: false)
-b, --pipeline.batch.size SIZE Size of batches the pipeline is to work in.
(default: 125)
-u, --pipeline.batch.delay DELAY_IN_MS When creating pipeline batches, how long to wait while polling
for the next event.
(default: 50)
--pipeline.unsafe_shutdown Force logstash to exit during shutdown even
if there are still inflight events in memory.
By default, logstash will refuse to quit until all
received events have been pushed to the outputs.
(default: false)
--path.data PATH This should point to a writable directory. Logstash
will use this directory whenever it needs to store
data. Plugins will also have access to this path.
(default: "/usr/share/logstash/data")
-p, --path.plugins PATH A path of where to find plugins. This flag
can be given multiple times to include
multiple paths. Plugins are expected to be
in a specific directory hierarchy:
'PATH/logstash/TYPE/NAME.rb' where TYPE is
'inputs' 'filters', 'outputs' or 'codecs'
and NAME is the name of the plugin.
(default: [])
-l, --path.logs PATH Write logstash internal logs to the given
file. Without this flag, logstash will emit
logs to standard output.
(default: "/usr/share/logstash/logs")
--log.level LEVEL Set the log level for logstash. Possible values are:
- fatal
- error
- warn
- info
- debug
- trace
(default: "info")
--config.debug Print the compiled config ruby code out as a debug log (you must also have --log.level=debug enabled).
WARNING: This will include any 'password' options passed to plugin configs as plaintext, and may result
in plaintext passwords appearing in your logs!
(default: false)
-i, --interactive SHELL Drop to shell instead of running as normal.
Valid shells are "irb" and "pry"
-V, --version Emit the version of logstash and its friends,
then exit.
-t, --config.test_and_exit Check configuration for valid syntax and then exit.
(default: false)
-r, --config.reload.automatic Monitor configuration changes and reload
whenever it is changed.
NOTE: use SIGHUP to manually reload the config
(default: false)
--config.reload.interval RELOAD_INTERVAL How frequently to poll the configuration location
for changes, in seconds.
(default: 3000000000)
--http.host HTTP_HOST Web API binding host (default: "127.0.0.1")
--http.port HTTP_PORT Web API http port (default: 9600..9700)
--log.format FORMAT Specify if Logstash should write its own logs in JSON form (one
event per line) or in plain text (using Ruby's Object#inspect)
(default: "plain")
--path.settings SETTINGS_DIR Directory containing logstash.yml file. This can also be
set through the LS_SETTINGS_DIR environment variable.
(default: "/usr/share/logstash/config")
--verbose Set the log level to info.
DEPRECATED: use --log.level=info instead.
--debug Set the log level to debug.
DEPRECATED: use --log.level=debug instead.
--quiet Set the log level to info.
DEPRECATED: use --log.level=info instead.
-h, --help print help
Testing Logstash Output to Elasticsearch
# cat /etc/logstash/conf.d/message.conf
input {
http {
port => 7474
}
}
filter {
grok {
patterns_dir => ['/etc/logstash/pattern']
pattern_definitions => {
"DATETIME" => "%{MONTH}%{SPACE}%{MONTHDAY}%{SPACE}%{TIME}"
}
match => {
"message" => '%{DATETIME:datetime} %{HOSTNAME:hostname} (%{HOSTNAME:system}:|%{USERNAME:system}\[%{USERNAME}\]:) %{GREEDYDATA:message}'
}
overwrite => ["message"]
remove_field => ["headers"]
}
}
output {
elasticsearch {
hosts => ["10.3.245.65:9200","10.3.245.25:9200"]
index => "test"
user => admin
password => admin
}
}
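To sanity-check the custom DATETIME pattern (%{MONTH}%{SPACE}%{MONTHDAY}%{SPACE}%{TIME}) without starting Logstash, a rough grep approximation can be run on a sample line. A sketch only: real grok patterns are more permissive than this regex, and the sample line is made up:

```shell
# Approximate the DATETIME grok pattern with an extended regex and
# extract the timestamp from a syslog-style sample line.
line='Jan  4 23:51:26 web01 sshd[1234]: Accepted password for root'
echo "$line" | grep -Eo '^[A-Z][a-z]{2} +[0-9]{1,2} [0-9]{2}:[0-9]{2}:[0-9]{2}'   # → Jan  4 23:51:26
```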
Starting Logstash
Adding -t when starting Logstash only checks the configuration syntax and then exits.
# Start logstash
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d -r -l /tmp/
Sending Logstash logs to /tmp/ which is now configured via log4j2.properties
[2020-01-04T23:51:26,208][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-01-04T23:51:26,221][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.6.1"}
[2020-01-04T23:51:31,358][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2020-01-04T23:51:31,719][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://admin:xxxxxx@10.3.245.65:9200/, http://admin:xxxxxx@10.3.245.25:9200/]}}
[2020-01-04T23:51:31,903][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://admin:xxxxxx@10.3.245.65:9200/"}
[2020-01-04T23:51:31,943][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2020-01-04T23:51:31,946][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2020-01-04T23:51:31,951][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://admin:xxxxxx@10.3.245.25:9200/"}
[2020-01-04T23:51:31,979][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.3.245.65:9200", "//10.3.245.25:9200"]}
[2020-01-04T23:51:31,988][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2020-01-04T23:51:32,007][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2020-01-04T23:51:32,320][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7d814629 run>"}
[2020-01-04T23:51:32,332][INFO ][logstash.inputs.http ] Starting http input listener {:address=>"0.0.0.0:7474", :ssl=>"false"}
[2020-01-04T23:51:32,366][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-01-04T23:51:32,564][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Sending data from Postman to Logstash
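The same request can be sent from the command line instead of Postman. The sketch below builds the equivalent curl command for the http input configured above (host and port from this guide; the sample message is made up) and prints it rather than running it, so the snippet is safe to run anywhere:

```shell
# Build a curl command equivalent to the Postman request against the
# logstash http input (port 7474 as configured above).
host='10.3.245.25'
port=7474
msg='Jan  4 23:51:26 web01 sshd[1234]: Accepted password for root'
printf "curl -s -XPOST 'http://%s:%s' -d '%s'\n" "$host" "$port" "$msg"
```

Run the printed command (or drop the printf and invoke curl directly) while Logstash is up; the event should then appear in the test index in Elasticsearch.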