ELK in Practice (8): Installing the Elasticsearch Chinese Analyzer (IK)
Installation

Method 1 - download the pre-built package from here: https://github.com/medcl/elasticsearch-analysis-ik/releases

Create the plugin folder:

cd your-es-root/plugins/ && mkdir ik

Unzip the package into that folder:

your-es-root/plugins/ik
Method 2 - use elasticsearch-plugin to install (supported since v5.5.1):

./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.5.1/elasticsearch-analysis-ik-6.5.1.zip

Note: the plugin version must match your Elasticsearch version exactly.

Then restart Elasticsearch.
Examples

1. Create an index

curl -XPUT http://localhost:9200/index
2. Create a mapping

curl -XPOST http://localhost:9200/index/fulltext/_mapping -H 'Content-Type: application/json' -d'
{
"properties": {
"content": {
"type": "text",
"analyzer": "ik_max_word",
"search_analyzer": "ik_max_word"
}
}
}'
3. Index some documents

curl -XPOST http://localhost:9200/index/fulltext/1 -H 'Content-Type: application/json' -d'
{"content":"美國留給伊拉克的是個爛攤子嗎"}
'

curl -XPOST http://localhost:9200/index/fulltext/2 -H 'Content-Type: application/json' -d'
{"content":"公安部:各地校車將享最高路權"}
'

curl -XPOST http://localhost:9200/index/fulltext/3 -H 'Content-Type: application/json' -d'
{"content":"中韓漁警沖突調查:韓警平均每天扣1艘中國漁船"}
'

curl -XPOST http://localhost:9200/index/fulltext/4 -H 'Content-Type: application/json' -d'
{"content":"中國駐洛杉磯領事館遭亞裔男子槍擊 嫌犯已自首"}
'
4. Query with highlighting

curl -XPOST http://localhost:9200/index/fulltext/_search -H 'Content-Type: application/json' -d'
{
"query" : { "match" : { "content" : "中國" }},
"highlight" : {
"pre_tags" : ["<tag1>", "<tag2>"],
"post_tags" : ["</tag1>", "</tag2>"],
"fields" : {
"content" : {}
}
}
}
'
Result
{
"took": 14,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 2,
"max_score": 2,
"hits": [
{
"_index": "index",
"_type": "fulltext",
"_id": "4",
"_score": 2,
"_source": {
"content": "中國駐洛杉磯領事館遭亞裔男子槍擊 嫌犯已自首"
},
"highlight": {
"content": [
"<tag1>中國</tag1>駐洛杉磯領事館遭亞裔男子槍擊 嫌犯已自首 "
]
}
},
{
"_index": "index",
"_type": "fulltext",
"_id": "3",
"_score": 2,
"_source": {
"content": "中韓漁警沖突調查:韓警平均每天扣1艘中國漁船"
},
"highlight": {
"content": [
"均每天扣1艘<tag1>中國</tag1>漁船 "
]
}
}
]
}
}
Inspecting the tokenization

curl -XGET http://localhost:9200/your_index/your_type/your_id/_termvectors?fields=your_fieldsName

curl -XGET http://localhost:9200/index/fulltext/1/_termvectors?fields=content

Result
{"_index":"index","_type":"fulltext","_id":"1","_version":1,"found":true,"took":1,"term_vectors":{"content":{"field_statistics":{"sum_doc_freq":9,"doc_count":1,"sum_ttf":9},"terms":{"個":{"term_freq":1,"tokens":[{"position":5,"start_offset":9,"end_offset":10}]},"伊拉克":{"term_freq":1,"tokens":[{"position":2,"start_offset":4,"end_offset":7}]},"嗎":{"term_freq":1,"tokens":[{"position":8,"start_offset":13,"end_offset":14}]},"攤子":{"term_freq":1,"tokens":[{"position":7,"start_offset":11,"end_offset":13}]},"是":{"term_freq":1,"tokens":[{"position":4,"start_offset":8,"end_offset":9}]},"爛攤子":{"term_freq":1,"tokens":[{"position":6,"start_offset":10,"end_offset":13}]},"留給":{"term_freq":1,"tokens":[{"position":1,"start_offset":2,"end_offset":4}]},"的":{"term_freq":1,"tokens":[{"position":3,"start_offset":7,"end_offset":8}]},"美國":{"term_freq":1,"tokens":[{"position":0,"start_offset":0,"end_offset":2}]}}}}}
Dictionary Configuration

IKAnalyzer.cfg.xml can be located at {conf}/analysis-ik/config/IKAnalyzer.cfg.xml or {plugins}/elasticsearch-analysis-ik-*/config/IKAnalyzer.cfg.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>IK Analyzer extension configuration</comment>
<!-- Users can configure their own extension dictionary here -->
<entry key="ext_dict">custom/mydict.dic;custom/single_word_low_freq.dic</entry>
<!-- Users can configure their own extension stopword dictionary here -->
<entry key="ext_stopwords">custom/ext_stopword.dic</entry>
<!-- Users can configure a remote extension dictionary here -->
<entry key="remote_ext_dict">location</entry>
<!-- Users can configure a remote extension stopword dictionary here -->
<entry key="remote_ext_stopwords">http://xxx.com/xxx.dic</entry>
</properties>
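The extension dictionary files referenced above (e.g. custom/mydict.dic) are plain UTF-8 text with one term per line, the same line-per-term format the remote dictionaries use. A minimal sketch (the file name and terms here are made-up examples):

```python
# Sketch: an IK custom dictionary file is plain UTF-8 text, one term per line.
# The file name "my_custom.dic" and the terms are hypothetical examples.
words = ["藍瘦", "香菇", "小目標"]

with open("my_custom.dic", "w", encoding="utf-8", newline="\n") as f:
    f.write("\n".join(words) + "\n")

# Reading it back: each non-empty line is one dictionary entry.
with open("my_custom.dic", encoding="utf-8") as f:
    entries = [line.strip() for line in f if line.strip()]

print(entries)
```

Reference the file from the ext_dict entry (paths are relative to the config directory), then restart Elasticsearch or use the remote hot-update mechanism described below.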
Hot-updating the IK dictionary

The plugin currently supports hot updates of the IK dictionary, via the following entries of the IK configuration file shown above:

<!-- Users can configure a remote extension dictionary here -->
<entry key="remote_ext_dict">location</entry>
<!-- Users can configure a remote extension stopword dictionary here -->
<entry key="remote_ext_stopwords">location</entry>

Here location is a URL, for example http://yoursite.com/getCustomDict. The request only needs to satisfy the following two points for hot updates to work:
- The HTTP response must include two headers: Last-Modified and ETag. Both are strings; whenever either one changes, the plugin fetches the word list again and updates its dictionary.
- The response body must contain one token per line, with \n as the line separator.
Once these two requirements are met, the dictionary can be hot-updated without restarting the ES instance.

A practical setup is to put the hot words in a UTF-8 encoded .txt file served by nginx or another simple HTTP server; when the .txt file is modified, the server automatically returns the corresponding Last-Modified and ETag when clients request the file. You can also build a small tool that extracts the relevant terms from your business system and updates this .txt file.
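The two requirements above can be sketched with nothing but the Python standard library. This is a minimal illustration of such an endpoint, not a production server; the file name, terms, and hashing choice are assumptions:

```python
# Sketch of a remote-dictionary endpoint meeting the two requirements above:
# it returns Last-Modified and ETag headers, and the body is one term per
# line separated by '\n'. File name, port choice, and terms are hypothetical.
import hashlib
import os
import threading
from email.utils import formatdate
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

DICT_FILE = "hot_words.txt"

class DictHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with open(DICT_FILE, "rb") as f:
            body = f.read()
        self.send_response(200)
        # Last-Modified from the file's mtime, ETag from a content hash;
        # the plugin re-fetches the list whenever either value changes.
        self.send_header("Last-Modified",
                         formatdate(os.path.getmtime(DICT_FILE), usegmt=True))
        self.send_header("ETag", hashlib.md5(body).hexdigest())
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging for the demo
        pass

# One term per line, UTF-8 -- the body format the plugin expects.
with open(DICT_FILE, "w", encoding="utf-8", newline="\n") as f:
    f.write("新詞\n熱詞\n")

server = HTTPServer(("127.0.0.1", 0), DictHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen("http://127.0.0.1:%d/" % server.server_address[1]) as resp:
    headers = dict(resp.headers)
    content = resp.read().decode("utf-8")
server.shutdown()

print(headers["Last-Modified"], headers["ETag"])
```

With nginx you get equivalent behavior for free when serving a static .txt file, which is why the text above suggests it.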
have fun.
FAQ

1. Why doesn't my custom dictionary take effect?

Make sure your extension dictionary file is encoded as UTF-8.

2. How do I install manually?
git clone https://github.com/medcl/elasticsearch-analysis-ik
cd elasticsearch-analysis-ik
git checkout tags/{version}
mvn clean
mvn compile
mvn package
Copy and unzip the release file #{project_path}/elasticsearch-analysis-ik/target/releases/elasticsearch-analysis-ik-*.zip into your Elasticsearch plugin directory, e.g. plugins/ik, then restart Elasticsearch.
3. Tokenization test fails

Call the _analyze API under a specific index, not the bare _analyze endpoint, e.g.:

curl -XGET "http://localhost:9200/your_index/_analyze" -H 'Content-Type: application/json' -d'
{
"text":"中華人民共和國MN","tokenizer": "my_ik"
}'
4. What is the difference between ik_max_word and ik_smart?

ik_max_word: splits the text at the finest granularity. For example, "中華人民共和國國歌" is split into "中華人民共和國,中華人民,中華,華人,人民共和國,人民,人,民,共和國,共和,和,國國,國歌", exhausting every possible combination.

ik_smart: performs the coarsest-grained split. For example, "中華人民共和國國歌" is split into "中華人民共和國,國歌".
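For this example the relationship between the two analyzers can be checked offline: every coarse ik_smart token also appears in the exhaustive ik_max_word output (token lists copied from the text above; the containment is an observation about this example, not a general guarantee):

```python
# Token output for "中華人民共和國國歌", as listed above.
ik_max_word = ("中華人民共和國,中華人民,中華,華人,人民共和國,"
               "人民,人,民,共和國,共和,和,國國,國歌").split(",")
ik_smart = "中華人民共和國,國歌".split(",")

# ik_smart picks one coarse, non-overlapping segmentation, while ik_max_word
# enumerates every dictionary word it can find anywhere in the text.
print(set(ik_smart) <= set(ik_max_word))  # True for this example
```

The practical rule of thumb: use ik_max_word at index time for maximum recall, and choose the search-time analyzer based on how tolerant of partial matches your queries should be.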
Changes

5.0.0
- Removed the analyzer and tokenizer named ik; use ik_smart and ik_max_word instead.