
DAY 127 ES -- Common Errors

1. "read_only_allow_delete" : "true"

When adding a document to an index, you may (in rare cases) run into the following error:

{
  "error": {
    "root_cause": [
      {
        "type": "cluster_block_exception",
        "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
      }
    ],
    "type": "cluster_block_exception",
    "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
  },
  "status": 403
}

This error says the index is currently in read-only mode. If we check the index's settings:

GET z1/_settings
# The result looks like this
{
  "z1" : {
    "settings" : {
      "index" : {
        "number_of_shards" : "5",
        "blocks" : {
          "read_only_allow_delete" : "true"
        },
        "provided_name" : "z1",
        "creation_date" : "1556204559161",
        "number_of_replicas" : "1",
        "uuid" : "3PEevS9xSm-r3tw54p0o9w",
        "version" : {
          "created" : "6050499"
        }
      }
    }
  }
}

We can see "read_only_allow_delete" : "true", which means no data can be inserted right now. We can also reproduce this error ourselves:

PUT z1
{
  "mappings": {
    "doc": {
      "properties": {
        "title": {
          "type": "text"
        }
      }
    }
  },
  "settings": {
    "index.blocks.read_only_allow_delete": true
  }
}

PUT z1/doc/1
{
  "title": "es is really hard to learn"
}

Now any insert triggers the error shown at the beginning. How do we fix it?

  • Free up disk space so usage drops below 85%.

  • Manually reset the setting; see the official docs for details.

Here is one approach: reset the setting to null:

PUT z1/_settings
{
  "index.blocks.read_only_allow_delete": null
}

Check the index again and it is back to normal; inserts and queries both work again.
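If you script this fix, a small helper can first confirm the block is actually set before resetting it. This is a sketch: `is_read_only_blocked` is a made-up name, and the settings dict simply mirrors the GET z1/_settings response shown above; with elasticsearch-py you would send `reset_body` via `es.indices.put_settings`.

```python
# Sketch: detect the read-only block in a GET <index>/_settings response.
# is_read_only_blocked is a hypothetical helper; the dict below mirrors
# the response shown earlier in this section.
def is_read_only_blocked(settings_response, index):
    blocks = (settings_response.get(index, {})
                               .get("settings", {})
                               .get("index", {})
                               .get("blocks", {}))
    return blocks.get("read_only_allow_delete") == "true"

settings = {
    "z1": {
        "settings": {
            "index": {
                "blocks": {"read_only_allow_delete": "true"}
            }
        }
    }
}

reset_body = None
if is_read_only_blocked(settings, "z1"):
    # Body to send with PUT z1/_settings to clear the block
    reset_body = {"index.blocks.read_only_allow_delete": None}
```

Sending `null` (Python `None`) removes the setting entirely, which is cleaner than setting it to `false`.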

2. illegal_argument_exception

Sometimes an aggregation fails with the following error:

{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
      }
    ],
    "type": "search_phase_execution_exception",
    "reason": "all shards failed",
    "phase": "query",
    "grouped": true,
    "failed_shards": [
      {
        "shard": 0,
        "index": "z2",
        "node": "NRwiP9PLRFCTJA7w3H9eqA",
        "reason": {
          "type": "illegal_argument_exception",
          "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
        }
      }
    ],
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.",
      "caused_by": {
        "type": "illegal_argument_exception",
        "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
      }
    }
  },
  "status": 400
}

What is going on here? The field used in an aggregation cannot be of type text. For example:

PUT z2/doc/1
{
  "age": "18"
}
PUT z2/doc/2
{
  "age": 20
}

GET z2/doc/_search
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "my_sum": {
      "sum": {
        "field": "age"
      }
    }
  }
}

When we add a document to Elasticsearch (if the index exists, the document is added or updated; otherwise the index is created first), the mapping of the age field is checked first. In the example above, when we add the first document (the z2 index does not yet exist), Elasticsearch automatically creates the index and then creates a mapping for age: it guesses the field's type from the value, and since "18" is a string, the stored mapping type is text. From that point on, age is mapped as text, so in the second document the value 20 is also indexed against a text field, not the long type it appears to be. We can check the index's mappings:

GET z2/_mapping
# The mapping looks like this
{
  "z2" : {
    "mappings" : {
      "doc" : {
        "properties" : {
          "age" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          }
        }
      }
    }
  }
}

The result confirms that age has type text, and that type does not support aggregations, hence the error. The takeaways are:

  • With dynamic mapping, the field types are fixed by the first document you index. It is not the case that the first value maps to text and a later unquoted value becomes long; the mapping does not change per document.

  • If that behavior is too error-prone for you, create the mapping manually: specify each field's type up front so later documents cannot go wrong.
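To see why the second document does not change the mapping, here is a much-simplified sketch of the dynamic type-guessing idea. This is illustration only: `guess_dynamic_type` is a made-up name, and Elasticsearch's real rules are considerably richer (dates, nulls, objects, coercion settings, and so on).

```python
import json

def guess_dynamic_type(value):
    """Very rough sketch of dynamic type guessing (illustration only)."""
    if isinstance(value, bool):      # check bool before int: True is an int in Python
        return "boolean"
    if isinstance(value, int):
        return "long"
    if isinstance(value, float):
        return "float"
    if isinstance(value, str):
        return "text"
    raise TypeError(f"unhandled JSON type: {type(value)!r}")

# The FIRST document fixes the mapping for each new field:
first_doc = json.loads('{"age": "18"}')
mapping = {field: guess_dynamic_type(v) for field, v in first_doc.items()}
print(mapping)  # {'age': 'text'}

# A later {"age": 20} does NOT change the mapping; the 20 is simply
# indexed into the existing text field.
```

Had the first document been `{"age": 20}`, the same guess would have produced a long field, and the sum aggregation would have worked.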

3. Result window is too large

Often a query matches far more documents than one request should carry; how many results Elasticsearch returns at once is controlled by the size parameter:

GET e2/doc/_search
{
  "size": 100000,
  "query": {
    "match_all": {}
  }
}

By default, at most 10,000 results can be returned, so when a request asks for more (say 100,000), it fails with:

Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting.

This means the requested result window is too large. You can either switch to the scroll API, or raise the limit by adjusting the index.max_result_window setting:

# In Kibana
PUT e2/_settings
{
  "index": {
    "max_result_window": "100000"
  }
}

# In Python
from elasticsearch import Elasticsearch
es = Elasticsearch()
es.indices.put_settings(index='e2', body={"index": {"max_result_window": 100000}})

In this example we raise the maximum result window of index e2 to 100,000, so any query returning up to 100,000 results can come back in a single request. Note that this setting persists on the index until you change it again.
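The check Elasticsearch performs can be sketched in a few lines. `check_result_window` is a hypothetical helper that mirrors the error message above, not Elasticsearch's actual code:

```python
def check_result_window(from_, size, max_result_window=10_000):
    # Mirrors Elasticsearch's validation: from + size must not exceed
    # the index.max_result_window setting (default 10,000).
    window = from_ + size
    if window > max_result_window:
        raise ValueError(
            f"Result window is too large, from + size must be less than "
            f"or equal to: [{max_result_window}] but was [{window}]"
        )
    return window

check_result_window(0, 100)                                # fine
# check_result_window(0, 100000)                           # would raise ValueError
check_result_window(0, 100000, max_result_window=100000)   # fine after raising the limit
```

Because the check is on `from + size`, deep pagination (a large `from` with a small `size`) hits the same limit, which is why the scroll API is the better tool for walking a whole index.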