
Integrating Spring Boot with Elasticsearch, the IK Analyzer, and Kibana

Elasticsearch is a search server built on Lucene. It provides a distributed, multi-tenant full-text search engine exposed over a RESTful web interface.

Elasticsearch is written in Java and released as open source under the Apache License. It is one of the most popular enterprise search engines, designed for cloud deployments, and offers near-real-time search while being stable, reliable, fast, and easy to install.

Installing the Required Software

Software              Version   Download
Elasticsearch         6.2.4     Elasticsearch official site
IK Chinese analyzer   6.2.4     IK analyzer releases (GitHub)
Kibana                6.2.4     Kibana official site

1. Installing Elasticsearch

Installation guides are everywhere online, so the steps are not repeated in detail here; just pay attention to the following points:

  • Adjust the memory settings in jvm.options, otherwise the node can easily fail to start from lack of memory
  • Edit elasticsearch.yml to set the bind address, port, node name, and so on (see the sketch below)

Note: newer Elasticsearch releases (7.0+) bundle their own JDK, but 6.x does not; make sure a compatible JDK (Java 8 for 6.2.4) is installed on the Linux host and that JAVA_HOME points to it, so the two do not conflict.
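
For reference, a minimal sketch of those two changes might look like the following (the heap size, node name, and address are placeholder values, not settings taken from a real deployment):

# config/jvm.options -- shrink the default 1g heap on a small machine
-Xms512m
-Xmx512m

# config/elasticsearch.yml
cluster.name: my-application
node.name: node-1
network.host: 0.0.0.0   # listen on all interfaces so the node is reachable remotely
http.port: 9200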

Once installed, start ES; the default HTTP port is 9200:

./bin/elasticsearch
./bin/elasticsearch -d  # run as a daemon in the background

Open http://ip:9200/ in a browser and you should get the version information back, for example:

{
	"name": "Bb-td48",
	"cluster_name": "elasticsearch",
	"cluster_uuid": "_IM0iQAeToWALU0tq7rsZQ",
	"version": {
		"number": "6.2.4",
		"build_hash": "ccec39f",
		"build_date": "2018-04-12T20:37:28.497551Z",
		"build_snapshot": false,
		"lucene_version": "7.2.1",
		"minimum_wire_compatibility_version": "5.6.0",
		"minimum_index_compatibility_version": "5.0.0"
	},
	"tagline": "You Know, for Search"
}

2. Installing the IK Analyzer Plugin

Why use a Chinese analyzer such as IK in Elasticsearch? The analyzers that ship with ES are built for English-like text and handle Chinese badly (the standard analyzer just splits Chinese text into individual characters), so a dedicated Chinese analyzer is needed for indexing and searching. Let's install IK.

1. Download the plugin from GitHub; pick the release that matches your Elasticsearch version:

https://github.com/medcl/elasticsearch-analysis-ik/releases

2. Create a folder for it under the ES plugins directory:

cd your-es-root/plugins/ && mkdir ik

3. Unzip the plugin archive into the ik folder (the plugin version must match the ES version exactly, so 6.2.4 here):

unzip elasticsearch-analysis-ik-6.2.4.zip

Then restart ES; the startup log will show that the analyzer plugin has been loaded.
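
To confirm the plugin really works, a quick call to the _analyze API should return Chinese word tokens rather than single characters (ik_max_word and ik_smart are the two analyzers the plugin registers; the sample text is arbitrary):

curl -H 'Content-Type: application/json' -X POST 'http://ip:9200/_analyze' -d '
{
  "analyzer": "ik_max_word",
  "text": "中華人民共和國國歌"
}'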

3. Installing Kibana (Visualization Tool)

Kibana can be downloaded from the official site, but the download speed there is often painful. The Huawei Cloud mirror below is much faster:

https://mirrors.huaweicloud.com/kibana/

It hosts every Kibana release.

Unpack the archive (ideally the version that matches your Elasticsearch; the file name below is the one used here):

tar -zxvf kibana-6.3.2-linux-x86_64.tar.gz

Edit the configuration file:

vim config/kibana.yml
# Uncomment these lines and change the defaults to something like:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.202.128:9200"
kibana.index: ".kibana"

Start it:

bin/kibana

Startup may fail with a permission error.

That simply means the current user has no permission on the Kibana directory, so switch to the root user and give ownership to the user that runs ES/Kibana:

chown -R <user>:<group> /usr/local/elasticsearch/kibana/kibana-7.8.0/

After that, however, a different warning showed up. This one was not caused by a missing file or permission but by the server running out of memory, so Kibana was abandoned for this setup.
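
If you still want Kibana on a small server, one option worth trying (an assumption, not something verified here, but Kibana runs on Node.js and Node honours this variable) is to cap the Node heap before starting it:

export NODE_OPTIONS="--max-old-space-size=512"   # cap Kibana's Node.js heap at roughly 512 MB
bin/kibana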

You can still make full use of Elasticsearch without Kibana!

4. Integrating Elasticsearch into a Spring Boot Project

(1) Maven dependencies

<dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
Lombok is used as well, so add that dependency yourself.

Besides the starter above, a few more dependencies are added for testing and for the highlighting code later:

		<!--elasticsearch-->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
        </dependency>
        <!-- Highlighting requires overriding mapResults; commons-beanutils is used in that override -->
        <dependency>
            <groupId>commons-beanutils</groupId>
            <artifactId>commons-beanutils</artifactId>
            <version>1.9.3</version>
        </dependency>

(2) application.yml

spring:
  data:
    elasticsearch:
      # must match cluster.name in elasticsearch.yml
      cluster-name: my-application
      # 9300 is the TCP transport port used by the Java client, not the 9200 HTTP port
      cluster-nodes: 101.201.101.206:9300

(3) The entity class

@Data
@AllArgsConstructor
@NoArgsConstructor
@Document(indexName = "item", type = "docs", shards = 1, replicas = 0)
public class Item {
    @Id
    private Long id;

    @Field(type = FieldType.Text, analyzer = "ik_max_word")
    private String title; // title

    @Field(type = FieldType.Keyword)
    private String category; // category

    @Field(type = FieldType.Keyword)
    private String brand; // brand

    @Field(type = FieldType.Double)
    private Double price; // price

    @Field(index = false, type = FieldType.Keyword)
    private String images; // image URL
}

Spring Data declares the mapping of each field through annotations. The three main ones are:

  • @Document

    Applied to the class; marks the entity as a document. Its common attributes are:

    • indexName: the name of the index
    • type: the type within the index
    • shards: number of shards, default 5
    • replicas: number of replicas, default 1
  • @Id — applied to a field; marks it as the document id

  • @Field

    Applied to a field; marks it as a document field and defines its mapping:

    • type: the field type, an enum value of FieldType
    • index: whether the field is indexed, boolean, default true
    • store: whether the field is stored separately, boolean, default false
    • analyzer: the name of the analyzer to use

(4) The repository

Provide a repository interface; Spring Data derives the query implementation from the method name (see the extra examples after the interface):

public interface ItemRepository extends ElasticsearchRepository<Item, Long> {

    /**
     * Find items whose price falls between the two bounds
     *
     * @param price1 lower bound
     * @param price2 upper bound
     * @return matching items
     */
    List<Item> findByPriceBetween(double price1, double price2);
}
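
Other derived queries follow the same method-name convention. The signatures below are illustrative additions (not part of the original project) that Spring Data Elasticsearch can also resolve purely from the method name:

    List<Item> findByTitle(String title);                              // match query on the analyzed title field
    List<Item> findByBrandAndCategory(String brand, String category);  // AND of two keyword fields
    List<Item> findByPriceGreaterThan(double price);                   // range query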

(5) Tests

The tests below exercise create, read, update, and delete operations, plus queries and aggregations, against Elasticsearch.

@RunWith(SpringRunner.class)
@SpringBootTest
public class SpringbootElasticsearchApplicationTests {

    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    @Autowired
    private ItemRepository itemRepository;

    /**
     * Create the index
     */
    @Test
    public void createIndex() {
        // Create the index from the @Document annotation on the Item class
        elasticsearchTemplate.createIndex(Item.class);
        // Create the mapping from the @Id/@Field annotations on the Item fields
        elasticsearchTemplate.putMapping(Item.class);
    }

    /**
     * Delete the index
     */
    @Test
    public void deleteIndex() {
        elasticsearchTemplate.deleteIndex("item");
    }

    /**
     * Insert a single document
     */
    @Test
    public void insert() {
        Item item = new Item(1L, "小米手機7", "手機", "小米", 2999.00, "https://img12.360buyimg.com/n1/s450x450_jfs/t1/14081/40/4987/124705/5c371b20E53786645/c1f49cd69e6c7e6a.jpg");
        itemRepository.save(item);
    }

    /**
     * Bulk insert
     */
    @Test
    public void insertList() {
        List<Item> list = new ArrayList<>();
        list.add(new Item(2L, "堅果手機R1", "手機", "錘子", 3999.00, "https://img12.360buyimg.com/n1/s450x450_jfs/t1/14081/40/4987/124705/5c371b20E53786645/c1f49cd69e6c7e6a.jpg"));
        list.add(new Item(3L, "華為META20", "手機", "華為", 4999.00, "https://img12.360buyimg.com/n1/s450x450_jfs/t1/14081/40/4987/124705/5c371b20E53786645/c1f49cd69e6c7e6a.jpg"));
        list.add(new Item(4L, "iPhone X", "手機", "iPhone", 5100.00, "https://img12.360buyimg.com/n1/s450x450_jfs/t1/14081/40/4987/124705/5c371b20E53786645/c1f49cd69e6c7e6a.jpg"));
        list.add(new Item(5L, "iPhone XS", "手機", "iPhone", 5999.00, "https://img12.360buyimg.com/n1/s450x450_jfs/t1/14081/40/4987/124705/5c371b20E53786645/c1f49cd69e6c7e6a.jpg"));
        // saveAll takes a collection of entities and performs a bulk insert
        itemRepository.saveAll(list);
    }

    /**
     * Update
     *
     * Note: update and insert go through the same save API; the id decides which one happens,
     * just as a PUT request with an existing id updates the document.
     */
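    // A minimal sketch (not in the original article): saving an Item whose id already
    // exists simply overwrites the stored document, which is how an update is performed.
    @Test
    public void update() {
        Item item = new Item(1L, "小米手機7", "手機", "小米", 2799.00, "https://img12.360buyimg.com/n1/s450x450_jfs/t1/14081/40/4987/124705/5c371b20E53786645/c1f49cd69e6c7e6a.jpg");
        itemRepository.save(item);
    }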

    /**
     * Delete everything
     */
    @Test
    public void delete() {
        itemRepository.deleteAll();
    }

    /**
     * Basic query
     */
    @Test
    public void query() {
        // Fetch everything, sorted by price descending
        Iterable<Item> items = itemRepository.findAll(Sort.by("price").descending());
        items.forEach(item -> System.out.println("item = " + item));
    }

    /**
     * Derived query method
     */
    @Test
    public void queryByPriceBetween() {
        // Query by price range
        List<Item> list = itemRepository.findByPriceBetween(5000.00, 6000.00);
        list.forEach(item -> System.out.println("item = " + item));
    }

    /**
     * Custom query
     */
    @Test
    public void search() {
        // Build the query
        NativeSearchQueryBuilder queryBuilder = new NativeSearchQueryBuilder();
        // Add a basic match (analyzed) query
        queryBuilder.withQuery(QueryBuilders.matchQuery("title", "小米手機"));
        // Run the search and get the results
        Page<Item> items = itemRepository.search(queryBuilder.build());
        // Total number of hits
        long total = items.getTotalElements();
        System.out.println("total = " + total);
        items.forEach(item -> System.out.println("item = " + item));
    }

    /**
     * Paged query
     */
    @Test
    public void searchByPage() {
        // Build the query
        NativeSearchQueryBuilder queryBuilder = new NativeSearchQueryBuilder();
        // Add a term (non-analyzed) query on the keyword field
        queryBuilder.withQuery(QueryBuilders.termQuery("category", "手機"));
        // Paging: page numbers start at 0
        int page = 0;
        int size = 2;
        queryBuilder.withPageable(PageRequest.of(page, size));
        // Run the search and get the results
        Page<Item> items = itemRepository.search(queryBuilder.build());
        long total = items.getTotalElements();
        System.out.println("total hits = " + total);
        System.out.println("total pages = " + items.getTotalPages());
        System.out.println("current page: " + items.getNumber());
        System.out.println("page size: " + items.getSize());
        items.forEach(item -> System.out.println("item = " + item));
    }

    /**
     * Sorting
     */
    @Test
    public void searchAndSort() {
        // Build the query
        NativeSearchQueryBuilder queryBuilder = new NativeSearchQueryBuilder();
        // Add a term query
        queryBuilder.withQuery(QueryBuilders.termQuery("category", "手機"));
        // Sort by price ascending
        queryBuilder.withSort(SortBuilders.fieldSort("price").order(SortOrder.ASC));
        // Run the search and get the results
        Page<Item> items = this.itemRepository.search(queryBuilder.build());
        // Total number of hits
        long total = items.getTotalElements();
        System.out.println("total hits = " + total);
        items.forEach(item -> System.out.println("item = " + item));
    }

    /**
     * Bucket aggregation
     */
    @Test
    public void testAgg() {
        NativeSearchQueryBuilder queryBuilder = new NativeSearchQueryBuilder();
        // Do not fetch any _source fields; only the aggregation matters here
        queryBuilder.withSourceFilter(new FetchSourceFilter(new String[]{""}, null));
        // 1. Add a terms aggregation named "brands" on the brand field
        queryBuilder.addAggregation(AggregationBuilders.terms("brands").field("brand"));
        // 2. Run the query; the result has to be cast to AggregatedPage
        AggregatedPage<Item> aggPage = (AggregatedPage<Item>) itemRepository.search(queryBuilder.build());
        // 3. Parse the result
        // 3.1 Take the aggregation named "brands" out of the result;
        //     it is a terms aggregation on a string field, so cast it to StringTerms
        StringTerms agg = (StringTerms) aggPage.getAggregation("brands");
        // 3.2 Get the buckets
        List<StringTerms.Bucket> buckets = agg.getBuckets();
        // 3.3 Iterate over them
        for (StringTerms.Bucket bucket : buckets) {
            // 3.4 The bucket key is the brand name
            System.out.println(bucket.getKeyAsString());
            // 3.5 The number of documents in the bucket
            System.out.println(bucket.getDocCount());
        }
    }

    /**
     * Nested (sub) aggregation: average price per brand
     */
    @Test
    public void testSubAgg() {
        NativeSearchQueryBuilder queryBuilder = new NativeSearchQueryBuilder();
        // Do not fetch any _source fields; only the aggregation matters here
        queryBuilder.withSourceFilter(new FetchSourceFilter(new String[]{""}, null));
        // 1. Add a terms aggregation named "brands" on the brand field
        queryBuilder.addAggregation(
                AggregationBuilders.terms("brands").field("brand")
                        .subAggregation(AggregationBuilders.avg("priceAvg").field("price")) // nested avg aggregation inside each brand bucket
        );
        // 2. Run the query; the result has to be cast to AggregatedPage
        AggregatedPage<Item> aggPage = (AggregatedPage<Item>) this.itemRepository.search(queryBuilder.build());
        // 3. Parse the result
        // 3.1 Take the aggregation named "brands" out of the result;
        //     it is a terms aggregation on a string field, so cast it to StringTerms
        StringTerms agg = (StringTerms) aggPage.getAggregation("brands");
        // 3.2 Get the buckets
        List<StringTerms.Bucket> buckets = agg.getBuckets();
        // 3.3 Iterate over them
        for (StringTerms.Bucket bucket : buckets) {
            // 3.4 The bucket key is the brand name; 3.5 the doc count is the number of items
            System.out.println(bucket.getKeyAsString() + ": " + bucket.getDocCount() + " items");

            // 3.6 Get the sub-aggregation result
            InternalAvg avg = (InternalAvg) bucket.getAggregations().asMap().get("priceAvg");
            System.out.println("average price: " + avg.getValue());
        }
    }
}

(6) Highlighting matched terms

Below is another query method in the test class, this time with highlighting configured:

    @org.junit.Test
    public void search() {
        // Build the query
        NativeSearchQueryBuilder queryBuilder = new NativeSearchQueryBuilder();
        // Add a basic match (analyzed) query
        queryBuilder.withQuery(QueryBuilders.matchQuery("title", "搜尋引擎"));

        // Highlight matches in the title field, wrapping them in a red <em> tag
        HighlightBuilder.Field hfield = new HighlightBuilder.Field("title")
                .preTags("<em style='color:red'>")
                .postTags("</em>")
                .fragmentSize(100);
        queryBuilder.withHighlightFields(hfield);

        // Run the search and get the results
        Page<Item> items = itemRepository.search(queryBuilder.build());
        // Total number of hits
        long total = items.getTotalElements();
        System.out.println("total = " + total);
        items.forEach(item -> System.out.println("item = " + item));
    }

At first, however, the highlighting has no effect. Some searching reveals that the default result mapper in this version never copies the highlight fragments back into the mapped entity, so the mapper has to be replaced.

Create a MyResultMapper that extends AbstractResultMapper and overrides its methods, as shown below.

This is where the commons-beanutils dependency added earlier comes in.

package com.example.elasticsearch.springbootelasticsearch.repository;
 
import com.fasterxml.jackson.core.JsonEncoding;
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonGenerator;
import org.apache.commons.beanutils.PropertyUtils;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.common.document.DocumentField;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightField;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.ElasticsearchException;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.ScriptedField;
import org.springframework.data.elasticsearch.core.AbstractResultMapper;
import org.springframework.data.elasticsearch.core.DefaultEntityMapper;
import org.springframework.data.elasticsearch.core.EntityMapper;
import org.springframework.data.elasticsearch.core.aggregation.AggregatedPage;
import org.springframework.data.elasticsearch.core.aggregation.impl.AggregatedPageImpl;
import org.springframework.data.elasticsearch.core.mapping.ElasticsearchPersistentEntity;
import org.springframework.data.elasticsearch.core.mapping.ElasticsearchPersistentProperty;
import org.springframework.data.elasticsearch.core.mapping.SimpleElasticsearchMappingContext;
import org.springframework.data.mapping.context.MappingContext;
import org.springframework.stereotype.Component;
import org.springframework.util.Assert;
import org.springframework.util.StringUtils;
 
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import java.nio.charset.Charset;
import java.util.*;

@Component
public class MyResultMapper extends AbstractResultMapper {
 
	private final MappingContext<? extends ElasticsearchPersistentEntity<?>, ElasticsearchPersistentProperty> mappingContext;
 
	public MyResultMapper() {
		this(new SimpleElasticsearchMappingContext());
	}
 
	public MyResultMapper(MappingContext<? extends ElasticsearchPersistentEntity<?>, ElasticsearchPersistentProperty> mappingContext) {
		
		super(new DefaultEntityMapper(mappingContext));
		
		Assert.notNull(mappingContext, "MappingContext must not be null!");
		
		this.mappingContext = mappingContext;
	}
 
	public MyResultMapper(EntityMapper entityMapper) {
		this(new SimpleElasticsearchMappingContext(), entityMapper);
	}
 
	public MyResultMapper(
			MappingContext<? extends ElasticsearchPersistentEntity<?>, ElasticsearchPersistentProperty> mappingContext,
			EntityMapper entityMapper) {
		
		super(entityMapper);
		
		Assert.notNull(mappingContext, "MappingContext must not be null!");
		
		this.mappingContext = mappingContext;
	}
 
	@Override
	public <T> AggregatedPage<T> mapResults(SearchResponse response, Class<T> clazz, Pageable pageable) {
		
		long totalHits = response.getHits().getTotalHits();
		float maxScore = response.getHits().getMaxScore();
 
		List<T> results = new ArrayList<>();
		for (SearchHit hit : response.getHits()) {
			if (hit != null) {
				T result = null;
				if (!StringUtils.isEmpty(hit.getSourceAsString())) {
					result = mapEntity(hit.getSourceAsString(), clazz);
				} else {
					result = mapEntity(hit.getFields().values(), clazz);
				}
 
				setPersistentEntityId(result, hit.getId(), clazz);
				setPersistentEntityVersion(result, hit.getVersion(), clazz);
				setPersistentEntityScore(result, hit.getScore(), clazz);
				
				populateScriptFields(result, hit);
 
                results.add(result);
			}
		}
 
		return new AggregatedPageImpl<T>(results, pageable, totalHits, response.getAggregations(), response.getScrollId(),
				maxScore);
	}
 
    private String concat(Text[] texts) {
        StringBuilder sb = new StringBuilder();
        for (Text text : texts) {
            sb.append(text.toString());
        }
        return sb.toString();
    }
 
 
	private <T> void populateScriptFields(T result, SearchHit hit) {
		if (hit.getFields() != null && !hit.getFields().isEmpty() && result != null) {
			for (java.lang.reflect.Field field : result.getClass().getDeclaredFields()) {
				ScriptedField scriptedField = field.getAnnotation(ScriptedField.class);
				if (scriptedField != null) {
					String name = scriptedField.name().isEmpty() ? field.getName() : scriptedField.name();
					DocumentField searchHitField = hit.getFields().get(name);
					if (searchHitField != null) {
						field.setAccessible(true);
						try {
							field.set(result, searchHitField.getValue());
						} catch (IllegalArgumentException e) {
							throw new ElasticsearchException(
									"failed to set scripted field: " + name + " with value: " + searchHitField.getValue(), e);
						} catch (IllegalAccessException e) {
							throw new ElasticsearchException("failed to access scripted field: " + name, e);
						}
					}
				}
			}
		}
 
        for (HighlightField field : hit.getHighlightFields().values()) {
            try {
                PropertyUtils.setProperty(result, field.getName(), concat(field.fragments()));
            } catch (InvocationTargetException | IllegalAccessException | NoSuchMethodException e) {
                throw new ElasticsearchException("failed to set highlighted value for field: " + field.getName()
                        + " with value: " + Arrays.toString(field.getFragments()), e);
            }
        }
	}
 
	private <T> T mapEntity(Collection<DocumentField> values, Class<T> clazz) {
		return mapEntity(buildJSONFromFields(values), clazz);
	}
 
	private String buildJSONFromFields(Collection<DocumentField> values) {
		JsonFactory nodeFactory = new JsonFactory();
		try {
			ByteArrayOutputStream stream = new ByteArrayOutputStream();
			JsonGenerator generator = nodeFactory.createGenerator(stream, JsonEncoding.UTF8);
			generator.writeStartObject();
			for (DocumentField value : values) {
				if (value.getValues().size() > 1) {
					generator.writeArrayFieldStart(value.getName());
					for (Object val : value.getValues()) {
						generator.writeObject(val);
					}
					generator.writeEndArray();
				} else {
					generator.writeObjectField(value.getName(), value.getValue());
				}
			}
			generator.writeEndObject();
			generator.flush();
			return new String(stream.toByteArray(), Charset.forName("UTF-8"));
		} catch (IOException e) {
			return null;
		}
	}
 
	@Override
	public <T> T mapResult(GetResponse response, Class<T> clazz) {
		T result = mapEntity(response.getSourceAsString(), clazz);
		if (result != null) {
			setPersistentEntityId(result, response.getId(), clazz);
			setPersistentEntityVersion(result, response.getVersion(), clazz);
		}
		return result;
	}
 
	@Override
	public <T> LinkedList<T> mapResults(MultiGetResponse responses, Class<T> clazz) {
		LinkedList<T> list = new LinkedList<>();
		for (MultiGetItemResponse response : responses.getResponses()) {
			if (!response.isFailed() && response.getResponse().isExists()) {
				T result = mapEntity(response.getResponse().getSourceAsString(), clazz);
				setPersistentEntityId(result, response.getResponse().getId(), clazz);
				setPersistentEntityVersion(result, response.getResponse().getVersion(), clazz);
				list.add(result);
			}
		}
		return list;
	}
 
	private <T> void setPersistentEntityId(T result, String id, Class<T> clazz) {
		
		if (clazz.isAnnotationPresent(Document.class)) {
			
			ElasticsearchPersistentEntity<?> persistentEntity = mappingContext.getRequiredPersistentEntity(clazz);
			ElasticsearchPersistentProperty idProperty = persistentEntity.getIdProperty();
 
			// Only deal with String because ES generated Ids are strings !
			if (idProperty != null && idProperty.getType().isAssignableFrom(String.class)) {
				persistentEntity.getPropertyAccessor(result).setProperty(idProperty, id);
			}
		}
	}
 
	private <T> void setPersistentEntityVersion(T result, long version, Class<T> clazz) {
		
		if (clazz.isAnnotationPresent(Document.class)) {
			
			ElasticsearchPersistentEntity<?> persistentEntity = mappingContext.getPersistentEntity(clazz);
			ElasticsearchPersistentProperty versionProperty = persistentEntity.getVersionProperty();
 
			// Only deal with Long because ES versions are longs !
			if (versionProperty != null && versionProperty.getType().isAssignableFrom(Long.class)) {
				// check that a version was actually returned in the response, -1 would indicate that
				// a search didn't request the version ids in the response, which would be an issue
				Assert.isTrue(version != -1, "Version in response is -1");
				persistentEntity.getPropertyAccessor(result).setProperty(versionProperty, version);
			}
		}
	}
 
	private <T> void setPersistentEntityScore(T result, float score, Class<T> clazz) {
 
		if (clazz.isAnnotationPresent(Document.class)) {
 
			ElasticsearchPersistentEntity<?> entity = mappingContext.getRequiredPersistentEntity(clazz);
 
			if (!entity.hasScoreProperty()) {
				return;
			}
 
			entity.getPropertyAccessor(result) //
					.setProperty(entity.getScoreProperty(), score);
		}
	}
}

Re-running the custom query method above now prints each matching item with the matched words in its title wrapped in the <em> highlight tags.
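
If the repository search does not pick up the custom mapper in your environment, a fallback sketch (assuming the MyResultMapper bean above is injected alongside ElasticsearchTemplate) is to run the query through the template and hand the mapper over explicitly:

    @Autowired
    private MyResultMapper myResultMapper;

    @org.junit.Test
    public void searchWithHighlightViaTemplate() {
        NativeSearchQueryBuilder queryBuilder = new NativeSearchQueryBuilder();
        queryBuilder.withQuery(QueryBuilders.matchQuery("title", "搜尋引擎"));
        queryBuilder.withHighlightFields(new HighlightBuilder.Field("title")
                .preTags("<em style='color:red'>").postTags("</em>"));
        // queryForPage accepts a SearchResultMapper, so the highlight-aware mapping is used
        Page<Item> items = elasticsearchTemplate.queryForPage(queryBuilder.build(), Item.class, myResultMapper);
        items.forEach(item -> System.out.println("item = " + item));
    }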

5. Spring Boot + Elasticsearch in Practice

Just kidding, that's all there is! Bye!

What's above is more than enough to get you going, folks!