ES Learning: This Article Is All You Need, Part 4

数栈君, posted 2023-07-21 10:24

Chapter 6: Integrating ES with Spring Boot (the Spring Data way)
Section 1: Introduction to analyzers
1.1 What an analyzer does

An analyzer splits the original content into tokens: individual words, single characters, or other semantic units.
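To make the idea concrete, here is a toy Python sketch of what a standard-style analysis step does (lowercasing, stripping punctuation, splitting on whitespace). It is an illustration only, not the actual Lucene implementation:

```python
import re

def toy_standard_analyze(text):
    """Toy sketch of a standard-style analyzer:
    lowercase, strip punctuation, split on whitespace."""
    tokens = []
    for raw in text.split():
        # lowercase, then drop any non-word characters
        token = re.sub(r"[^\w]", "", raw.lower())
        if token:
            tokens.append(token)
    return tokens

print(toy_standard_analyze("Hello, Java!"))  # ['hello', 'java']
```

Real analyzers are pipelines of character filters, a tokenizer, and token filters; this sketch collapses all of that into one step.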
1.2 Common analyzers

standard
The default ES analyzer. It lowercases tokens and removes stop words and punctuation. It handles Chinese only by splitting the text into single characters.

IK analyzer
An open-source analyzer with good Chinese support and customizable dictionaries.
Section 2: standard analyzer demo (in Kibana)
2.1 English example

Command:

POST _analyze
{
  "analyzer": "standard",
  "text": "Hello Java"
}
Result:

Hello Java is split into two tokens, hello and java, both lowercased:

{
  "tokens": [
    {
      "token": "hello",
      "start_offset": 0,
      "end_offset": 5,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "java",
      "start_offset": 6,
      "end_offset": 10,
      "type": "<ALPHANUM>",
      "position": 1
    }
  ]
}

2.2 Chinese example

Command:

POST _analyze
{
  "analyzer": "standard",
  "text": "我是中国人"
}
Result:

The standard analyzer breaks 我是中国人 into five single-character tokens:

{
  "tokens": [
    {
      "token": "我",
      "start_offset": 0,
      "end_offset": 1,
      "type": "<IDEOGRAPHIC>",
      "position": 0
    },
    {
      "token": "是",
      "start_offset": 1,
      "end_offset": 2,
      "type": "<IDEOGRAPHIC>",
      "position": 1
    },
    {
      "token": "中",
      "start_offset": 2,
      "end_offset": 3,
      "type": "<IDEOGRAPHIC>",
      "position": 2
    },
    {
      "token": "国",
      "start_offset": 3,
      "end_offset": 4,
      "type": "<IDEOGRAPHIC>",
      "position": 3
    },
    {
      "token": "人",
      "start_offset": 4,
      "end_offset": 5,
      "type": "<IDEOGRAPHIC>",
      "position": 4
    }
  ]
}

Section 3: Installing and using the IK analyzer

medcl/elasticsearch-analysis-ik currently offers the best Chinese-language support.

IK analyzer download:

https://github.com/medcl/elasticsearch-analysis-ik/archive/v6.3.2.zip

The analyzer version you download must exactly match your installed ES version.

Installation steps:

1. Copy the downloaded archive into the plugins directory of your ES installation, e.g. D:\soft\elasticsearch-6.3.2\plugins
2. Unzip elasticsearch-analysis-ik there
3. Restart the ES service
Section 4: IK analyzer demo (in Kibana)

Command:

POST _analyze
{
  "analyzer": "ik_smart",
  "text": "我是中国人"
}

-----------------

POST _analyze
{
  "analyzer": "ik_max_word",
  "text": "我是中国人"
}
Results:

ik_smart output (coarsest segmentation):

{
  "tokens": [
    {
      "token": "我",
      "start_offset": 0,
      "end_offset": 1,
      "type": "CN_CHAR",
      "position": 0
    },
    {
      "token": "是",
      "start_offset": 1,
      "end_offset": 2,
      "type": "CN_CHAR",
      "position": 1
    },
    {
      "token": "中国人",
      "start_offset": 2,
      "end_offset": 5,
      "type": "CN_WORD",
      "position": 2
    }
  ]
}

---------------------

ik_max_word output (exhaustive segmentation: every dictionary word found is emitted):

{
  "tokens": [
    {
      "token": "我",
      "start_offset": 0,
      "end_offset": 1,
      "type": "CN_CHAR",
      "position": 0
    },
    {
      "token": "是",
      "start_offset": 1,
      "end_offset": 2,
      "type": "CN_CHAR",
      "position": 1
    },
    {
      "token": "中国人",
      "start_offset": 2,
      "end_offset": 5,
      "type": "CN_WORD",
      "position": 2
    },
    {
      "token": "中国",
      "start_offset": 2,
      "end_offset": 4,
      "type": "CN_WORD",
      "position": 3
    },
    {
      "token": "国人",
      "start_offset": 3,
      "end_offset": 5,
      "type": "CN_WORD",
      "position": 4
    }
  ]
}
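The difference between the two modes can be sketched in Python with a toy dictionary (the word list and the matching logic below are simplified for illustration; IK's real dictionary and algorithm are far richer). ik_smart keeps only the longest non-overlapping match at each position, while ik_max_word also emits every shorter dictionary word it finds:

```python
DICT = {"中国人", "中国", "国人"}  # toy dictionary, not IK's real one

def ik_smart_sketch(text):
    """Coarse mode: longest dictionary match, no overlaps."""
    tokens, i = [], 0
    while i < len(text):
        # try the longest dictionary word starting at position i
        for j in range(len(text), i, -1):
            if text[i:j] in DICT:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

def ik_max_word_sketch(text):
    """Fine mode: emit every dictionary word at every position."""
    tokens, covered = [], set()
    for i in range(len(text)):
        found = False
        for j in range(len(text), i, -1):
            if text[i:j] in DICT:
                tokens.append(text[i:j])
                covered.update(range(i, j))
                found = True
        # single characters are emitted only if no word covers them
        if not found and i not in covered:
            tokens.append(text[i])
    return tokens

print(ik_smart_sketch("我是中国人"))     # ['我', '是', '中国人']
print(ik_max_word_sketch("我是中国人"))  # ['我', '是', '中国人', '中国', '国人']
```

Note that the two sketch outputs match the Kibana results above: the coarse mode yields 3 tokens, the fine mode 5.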

Section 5: Integrating ES with Spring Boot
5.1 Create a Spring Boot project

Add the dependency; it is part of the Spring Data family:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
5.2 Configuration

# ES node addresses
# spring.data.elasticsearch.cluster-nodes=127.0.0.1:9300,127.0.0.1:9301,127.0.0.1:9302
spring.data.elasticsearch.cluster-nodes=127.0.0.1:9300
# ES cluster name
spring.data.elasticsearch.cluster-name=elasticsearch
5.3 Common operations

5.3.1 Entity class mapped to an ES document

/**
 * @Document marks the class as an ES document.
 * indexName: the index name
 * type: the document type
 * shards: number of shards
 * replicas: number of replicas
 */
@Document(indexName = "bank", type = "account", shards = 5, replicas = 1)
public class EsAccount {

    @Id
    private Long id;
    @Field(type = FieldType.Long)
    private Long account_number;
    @Field(type = FieldType.Text) // text fields are analyzed; to use the IK analyzer it must be installed first
    private String firstname;
    @Field(type = FieldType.Text)
    private String address;
    @Field(type = FieldType.Text)
    private String gender;
    @Field(type = FieldType.Text)
    private String city;
    @Field(type = FieldType.Long)
    private Long balance;
    @Field(type = FieldType.Text)
    private String lastname;
    @Field(type = FieldType.Text)
    private String employer;
    @Field(type = FieldType.Text)
    private String state;
    @Field(type = FieldType.Long)
    private Long age;
    @Field(type = FieldType.Text)
    private String email;

    // getters and setters omitted
}

5.3.2 Repository interface for ES operations

public interface EsAccountRepository extends ElasticsearchRepository<EsAccount, Long> {

    /**
     * Query by last name.
     * The method name must follow Spring Data's derived-query naming convention.
     */
    List<EsAccount> findByLastname(String lastname);

    /**
     * Query by address.
     * The method name must follow the naming convention.
     */
    List<EsAccount> findByAddress(String address);

    /**
     * Delete by first name.
     * The method name must follow the naming convention.
     */
    void deleteByFirstname(String firstname);

    /**
     * Query by address, with paging.
     * The method name must follow the naming convention.
     */
    Page<EsAccount> findByAddress(String address, Pageable pageable);
}
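The warning that method names "cannot be arbitrary" refers to Spring Data's derived-query convention: the framework parses a name like findByLastname into a query on the lastname property. A toy Python sketch of the idea (heavily simplified; the real parser supports many more prefixes and keywords such as And, Or, and Between):

```python
def parse_derived_query(name):
    """Toy sketch: map a Spring Data style method name to (action, property)."""
    for prefix in ("findBy", "deleteBy"):
        if name.startswith(prefix):
            prop = name[len(prefix):]
            # lower-case the first letter to get the bean property name
            return prefix[:-2], prop[0].lower() + prop[1:]
    raise ValueError("not a derived query name: " + name)

print(parse_derived_query("findByLastname"))  # ('find', 'lastname')
```

This is why renaming findByLastname to, say, searchLastname would break the repository: the name itself is the query definition.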

5.3.3 Unit tests

Common operations:

public class EsAccountRepositoryTest extends SpringbootEs01ApplicationTests {

    @Autowired
    private EsAccountRepository esAccountRepository;

    /**
     * Test findAll.
     */
    @Test
    public void findAll1() {
        Iterable<EsAccount> esAccounts = esAccountRepository.findAll();
        //esAccounts.forEach(System.out::println);
        //esAccounts.forEach((x)->{System.out.println(x);});
        Iterator<EsAccount> iterator = esAccounts.iterator();
        while (iterator.hasNext()) {
            System.out.println(iterator.next());
        }
    }

    /**
     * Insert one document.
     * save() also updates: whether this is an insert or an update
     * depends on whether the id already exists.
     */
    @Test
    public void saveAccount() {
        EsAccount account = new EsAccount();
        account.setAccount_number(4570L);
        account.setAddress("北京");
        account.setAge(18L);
        account.setBalance(1000000L);
        account.setCity("中国");
        account.setEmail("123@.com");
        account.setEmployer("李雷");
        account.setFirstname("李");
        account.setGender("男");
        account.setId(4570L);
        account.setLastname("李雷");
        account.setState("NL");
        esAccountRepository.save(account);
    }

    /**
     * Find an account by id.
     */
    @Test
    public void findAccountById() {
        Optional<EsAccount> esAccount = esAccountRepository.findById(4570L);
        System.out.println(esAccount.get());
    }

    /**
     * Delete by id.
     */
    @Test
    public void deleteAccount() {
        esAccountRepository.deleteById(4570L);
    }

    /**
     * Update an existing document.
     */
    @Test
    public void updateAccount() {
        Optional<EsAccount> esAccount = esAccountRepository.findById(456L);
        EsAccount account = esAccount.get();
        account.setLastname("李雷super");
        esAccountRepository.save(account);
    }

    /**
     * Query by lastname (full-text match).
     */
    @Test
    public void findByLastname() {
        List<EsAccount> accountList = esAccountRepository.findByLastname("李雷");
        System.out.println(accountList);
    }

    /**
     * Query by address (full-text match).
     */
    @Test
    public void findByAddress() {
        List<EsAccount> accounts = esAccountRepository.findByAddress("京");
        System.out.println(accounts);
    }

    /**
     * Paged query.
     */
    @Test
    public void findPage() {
        Pageable p = PageRequest.of(0, 5);
        Page<EsAccount> accounts = esAccountRepository.findAll(p);
        for (EsAccount account : accounts) {
            System.out.println(account);
        }
    }

    /**
     * Paged query with sorting.
     */
    @Test
    public void findPageMult() {
        Sort sort = Sort.by(Sort.Direction.DESC, "account_number");
        Pageable p = PageRequest.of(0, 5, sort);
        Page<EsAccount> accounts = esAccountRepository.findByAddress("Place", p);
        for (EsAccount account : accounts) {
            System.out.println(account);
        }
    }

    /**
     * Complex query via the search method,
     * building the condition with the match syntax.
     */
    @Test
    public void testMatch01() {
        // single condition
        // QueryBuilder query = QueryBuilders.matchQuery("account_number", 20);
        // caution: each withQuery call REPLACES the previous one, so only the
        // last condition below takes effect; to combine several conditions,
        // use boolQuery().must(...) as shown in the bool examples further down
        NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder();
        NativeSearchQueryBuilder queryBuilder = builder
                .withQuery(QueryBuilders.matchQuery("account_number", 20))
                .withQuery(QueryBuilders.matchQuery("firstname", "Elinor"));
        NativeSearchQuery build = queryBuilder.build();
        Iterable<EsAccount> accounts = esAccountRepository.search(build);
        for (EsAccount account : accounts) {
            System.out.println(account);
        }
    }

    /**
     * Complex query via the search method:
     * phrase match.
     */
    @Test
    public void testMatchPhrase() {
        NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder();
        NativeSearchQueryBuilder queryBuilder = builder.withQuery(QueryBuilders.matchPhraseQuery("address", "mill lane"));
        NativeSearchQuery searchQuery = queryBuilder.build();
        Page<EsAccount> accountPage = esAccountRepository.search(searchQuery);
        //accountPage.forEach(System.out::println);
        //accountPage.forEach((x)->{System.out.println(x);});
        // iterate over the page
        Iterator<EsAccount> iterator = accountPage.iterator();
        while (iterator.hasNext()) {
            System.out.println(iterator.next());
        }
    }

    /**
     * Compound search (bool):
     * every condition must match (must).
     */
    @Test
    public void testMust() {
        NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder();
        builder.withQuery(QueryBuilders.boolQuery()
                .must(QueryBuilders.matchQuery("address", "mill"))
                .must(QueryBuilders.matchQuery("address", "lane")));
        NativeSearchQuery query = builder.build();
        Page<EsAccount> accounts = esAccountRepository.search(query);
        for (EsAccount account : accounts) {
            System.out.println(account);
        }
    }

    /**
     * Any one of the conditions may match (should).
     */
    @Test
    public void testShould() {
        NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder();
        builder.withQuery(QueryBuilders.boolQuery()
                .should(QueryBuilders.matchQuery("address", "lane"))
                .should(QueryBuilders.matchQuery("address", "mill")));
        NativeSearchQuery searchQuery = builder.build();
        Page<EsAccount> accounts = esAccountRepository.search(searchQuery);
        for (EsAccount account : accounts) {
            System.out.println(account);
        }
    }

    /**
     * None of the conditions may match (must_not).
     */
    @Test
    public void testMustNot() {
        NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder();
        builder.withQuery(QueryBuilders.boolQuery()
                .mustNot(QueryBuilders.matchQuery("address", "mill"))
                .mustNot(QueryBuilders.matchQuery("address", "lane")));
        NativeSearchQuery query = builder.build();
        Page<EsAccount> search = esAccountRepository.search(query);
        for (EsAccount esAccount : search) {
            System.out.println(esAccount);
        }
    }

    /**
     * Match one condition and exclude another (combine must and must_not).
     */
    @Test
    public void testMustAndNotMust() {
        NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder();
        builder.withQuery(QueryBuilders.boolQuery()
                .mustNot(QueryBuilders.matchQuery("state", "ID"))
                .must(QueryBuilders.matchQuery("age", "40")));
        NativeSearchQuery query = builder.build();
        Page<EsAccount> accounts = esAccountRepository.search(query);
        for (EsAccount account : accounts) {
            System.out.println(account);
        }
    }

    /**
     * Filter query:
     * keep documents whose balance is between 20000 and 30000.
     */
    @Test
    public void testFilter() {
        NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder();
        builder.withFilter(QueryBuilders.boolQuery()
                .filter(QueryBuilders.rangeQuery("balance").gte(20000).lte(30000)));
        NativeSearchQuery query = builder.build();
        Page<EsAccount> accounts = esAccountRepository.search(query);
        for (EsAccount account : accounts) {
            System.out.println(account);
        }
    }
}

Grouping and aggregation

Elasticsearch has a feature called aggregations that lets you generate sophisticated analytics over your data. It is much like GROUP BY in SQL, but more powerful.

ES aggregations involve two concepts:
-- Buckets: sets of documents that meet a given criterion.
-- Metrics: statistics computed over the documents in a bucket.

Roughly speaking, a metric corresponds to something like COUNT(*) in a database, and a bucket corresponds to a GROUP BY group.
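As a rough illustration of what the terms and avg aggregations used below compute, here is a plain-Python sketch over a few hypothetical documents (the sample data is invented for illustration only):

```python
from collections import Counter

# hypothetical documents, mimicking the bank accounts used in this chapter
docs = [
    {"state": "ID", "balance": 20000},
    {"state": "ID", "balance": 30000},
    {"state": "TX", "balance": 10000},
]

# terms aggregation ~ GROUP BY state: one bucket per distinct value,
# each carrying a doc count (like COUNT(*))
group_by_state = Counter(d["state"] for d in docs)

# avg aggregation: a single metric over the matched documents (like AVG(balance))
avg_balance = sum(d["balance"] for d in docs) / len(docs)

print(group_by_state)  # Counter({'ID': 2, 'TX': 1})
print(avg_balance)     # 20000.0
```

This mirrors the group02 test below, where group_by_state and avg_balance are sibling aggregations over the same matched set of documents.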

@Autowired
private ElasticsearchTemplate elasticsearchTemplate;

/**
 * Aggregation via ElasticsearchTemplate:
 * a single terms aggregation that groups on the state field
 * and counts the documents per state.
 */
@Test
public void group01() {
    NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder();
    // count how many documents share each state value
    TermsAggregationBuilder field = AggregationBuilders.terms("group_by_state").field("state.keyword");

    // match all documents
    builder.withQuery(QueryBuilders.matchAllQuery());
    // add the aggregation
    builder.addAggregation(field);
    // build the query
    NativeSearchQuery query = builder.build();
    // execute via ElasticsearchTemplate
    Aggregations aggregations = elasticsearchTemplate.query(query, new ResultsExtractor<Aggregations>() {
        @Override
        public Aggregations extract(SearchResponse response) {
            return response.getAggregations();
        }
    });
    // convert to a map
    Map<String, Aggregation> aggregationMap = aggregations.asMap();
    // fetch the group_by_state aggregation from the response
    StringTerms groupByState = (StringTerms) aggregationMap.get("group_by_state");
    // get all buckets
    List<StringTerms.Bucket> buckets = groupByState.getBuckets();
    Iterator<StringTerms.Bucket> iterator = buckets.iterator();
    while (iterator.hasNext()) {
        StringTerms.Bucket bucket = iterator.next();
        System.out.println(bucket.getKeyAsString());
        System.out.println(bucket.getDocCount());
    }
}

/**
 * Aggregation via ElasticsearchTemplate:
 * multiple aggregations -- count the documents per state
 * and also compute the average balance.
 */
@Test
public void group02() {
    NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder();
    // count how many documents share each state value
    TermsAggregationBuilder field = AggregationBuilders.terms("group_by_state").field("state.keyword");
    // average of the balance field
    AvgAggregationBuilder field2 = AggregationBuilders.avg("avg_balance").field("balance");

    // match all documents
    builder.withQuery(QueryBuilders.matchAllQuery());
    // add both aggregations
    builder.addAggregation(field);
    builder.addAggregation(field2);
    // build the query
    NativeSearchQuery query = builder.build();
    // execute via ElasticsearchTemplate
    Aggregations aggregations = elasticsearchTemplate.query(query, new ResultsExtractor<Aggregations>() {
        @Override
        public Aggregations extract(SearchResponse response) {
            return response.getAggregations();
        }
    });
    // convert to a map
    Map<String, Aggregation> aggregationMap = aggregations.asMap();
    // fetch the group_by_state aggregation from the response
    StringTerms groupByState = (StringTerms) aggregationMap.get("group_by_state");
    // fetch the avg_balance aggregation from the response
    InternalAvg avgBalance = (InternalAvg) aggregationMap.get("avg_balance");
    // get all buckets
    List<StringTerms.Bucket> buckets = groupByState.getBuckets();
    Iterator<StringTerms.Bucket> iterator = buckets.iterator();
    while (iterator.hasNext()) {
        StringTerms.Bucket bucket = iterator.next();
        System.out.println(bucket.getKeyAsString());
        System.out.println(bucket.getDocCount());
    }
    // read the metric
    double value = avgBalance.getValue();
    System.out.println("===" + value);
}

Disclaimer: this article is a repost; copyright belongs to the original author. Please contact us for removal in case of infringement.
