Character filters are used to preprocess the stream of characters before it is passed to the tokenizer.
A character filter receives the original text as a stream of characters and can transform that stream by adding, removing, or changing characters.
Elasticsearch has a number of built-in character filters that can be used to build custom analyzers.
HTML Strip Char Filter
1. Strips HTML elements from the text and replaces HTML entities with their decoded value (e.g., replacing &amp; with &).
Example:
POST _analyze
{
"tokenizer": "keyword",
"char_filter": [ "html_strip" ],
"text": "<p>I'm so <b>happy</b>!</p>"
}
The example above produces the following term:
[ \nI'm so happy!\n ]
If the standard tokenizer were used instead, the sentence above would produce the following terms:
[ I'm, so, happy ]
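The effect of html_strip can be sketched in Python. This is only a rough stand-in for illustration, not the actual Lucene implementation: block-level tags such as &lt;p&gt; become newlines, remaining tags are dropped, and entities are decoded.

```python
import re
from html import unescape

def html_strip(text):
    """Rough approximation of the html_strip char filter's behavior."""
    text = re.sub(r"</?p[^>]*>", "\n", text)  # block-level <p> tags become newlines
    text = re.sub(r"<[^>]+>", "", text)       # drop remaining tags such as <b>
    return unescape(text)                     # decode entities, e.g. &amp; -> &

# The keyword tokenizer would then emit this whole string as a single term.
print(html_strip("<p>I'm so <b>happy</b>!</p>"))
```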
2. Configuration:
escaped_tags: An array of HTML tags that should not be stripped from the original text.
Example:
PUT my_index
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "keyword",
"char_filter": ["my_char_filter"]
}
},
"char_filter": {
"my_char_filter": {
"type": "html_strip",
"escaped_tags": ["b"]
}
}
}
}
}
POST my_index/_analyze
{
"analyzer": "my_analyzer",
"text": "<p>I'm so <b>happy</b>!</p>"
}
The example above produces the following term:
[ \nI'm so <b>happy</b>!\n ]
Mapping Character Filter
1. The mapping character filter accepts a map of keys and values. Whenever it encounters a string of characters that is the same as a key, it replaces them with the value associated with that key.
2. Configuration: either the mappings or the mappings_path parameter must be provided.
mappings: An array of mappings, with each element of the form key => value.
mappings_path: A path, either absolute or relative to the config directory, to a UTF-8 encoded text mappings file containing one key => value mapping per line.
Example:
PUT my_index
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "keyword",
"char_filter": [
"my_char_filter"
]
}
},
"char_filter": {
"my_char_filter": {
"type": "mapping",
"mappings": [
"٠ => 0",
"١ => 1",
"٢ => 2",
"٣ => 3",
"٤ => 4",
"٥ => 5",
"٦ => 6",
"٧ => 7",
"٨ => 8",
"٩ => 9"
]
}
}
}
}
}
POST my_index/_analyze
{
"analyzer": "my_analyzer",
"text": "My license plate is ٢٥٠١٥"
}
The example above produces the following term:
[ My license plate is 25015 ]
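For simple single-character mappings like the digits above, the substitution the filter performs can be sketched in Python with str.translate; this only illustrates what happens to the character stream, it is not how Elasticsearch implements the filter:

```python
# Map Eastern Arabic digits to ASCII digits, mirroring the mappings above.
arabic_to_ascii = str.maketrans("٠١٢٣٤٥٦٧٨٩", "0123456789")

text = "My license plate is ٢٥٠١٥"
normalized = text.translate(arabic_to_ascii)
print(normalized)  # My license plate is 25015
```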
Pattern Replace Character Filter
1. The pattern replace character filter uses a regular expression to match characters which should be replaced with the specified replacement string. A poorly written regular expression may run very slowly or even throw a StackOverflowError, causing the node it runs on to exit abruptly.
2. Configuration:
pattern: A Java regular expression. Required.
replacement: The replacement string, which can reference capture groups using the $1..$9 syntax.
flags: Java regular expression flags. Flags should be pipe-separated, e.g. "CASE_INSENSITIVE|COMMENTS".
Example 1: replace any embedded dashes in numbers with underscores, i.e. 123-456-789 → 123_456_789:
PUT my_index
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "standard",
"char_filter": [
"my_char_filter"
]
}
},
"char_filter": {
"my_char_filter": {
"type": "pattern_replace",
"pattern": "(\\d+)-(?=\\d)",
"replacement": "$1_"
}
}
}
}
}
POST my_index/_analyze
{
"analyzer": "my_analyzer",
"text": "My credit card is 123-456-789"
}
The example above produces the following terms:
[ My, credit, card, is, 123_456_789 ]
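The pattern and replacement above can be checked outside Elasticsearch in any compatible regex engine. A quick Python sketch of the same substitution (Python uses \1 rather than $1 for capture-group references):

```python
import re

# (\d+)-(?=\d): one or more digits followed by a dash, but only when
# another digit follows; the lookahead leaves the trailing digits
# available for the next match.
result = re.sub(r"(\d+)-(?=\d)", r"\1_", "My credit card is 123-456-789")
print(result)  # My credit card is 123_456_789
```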
Example 2: a replacement string that changes the length of the original text still works for search. Here a space is inserted whenever a lowercase letter is followed by an uppercase letter (i.e. fooBarBaz → foo Bar Baz), allowing each camel-case word to be queried individually:
PUT my_index
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "standard",
"char_filter": [
"my_char_filter"
],
"filter": [
"lowercase"
]
}
},
"char_filter": {
"my_char_filter": {
"type": "pattern_replace",
"pattern": "(?<=\\p{Lower})(?=\\p{Upper})",
"replacement": " "
}
}
}
},
"mappings": {
"properties": {
"text": {
"type": "text",
"analyzer": "my_analyzer"
}
}
}
}
POST my_index/_analyze
{
"analyzer": "my_analyzer",
"text": "The fooBarBaz method"
}
The example above produces the following terms:
[ the, foo, bar, baz, method ]
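The zero-width boundary pattern can also be verified in Python. Since Python's re module does not support \p{...} classes, this sketch substitutes the ASCII classes [a-z] and [A-Z] for \p{Lower} and \p{Upper}:

```python
import re

text = "The fooBarBaz method"
# Insert a space at every boundary where a lowercase letter is
# immediately followed by an uppercase letter (zero-width match).
spaced = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", text)
# Approximate the standard tokenizer + lowercase filter.
tokens = spaced.lower().split()
print(tokens)  # ['the', 'foo', 'bar', 'baz', 'method']
```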
Example 3: using a replacement string that changes the length of the original text works for search purposes, but results in incorrect highlighting:
PUT my_index/_doc/1?refresh
{
"text": "The fooBarBaz method"
}
GET my_index/_search
{
"query": {
"match": {
"text": "bar"
}
},
"highlight": {
"fields": {
"text": {}
}
}
}
Query result:
{
"timed_out": false,
"took": $body.took,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 1,
"relation": "eq"
},
"max_score": 0.2876821,
"hits": [{
"_index": "my_index",
"_type": "_doc",
"_id": "1",
"_score": 0.2876821,
"_source": {
"text": "The fooBarBaz method"
},
"highlight": {
"text": ["The foo<em>Ba</em>rBaz method"]
}
}
]
}
}