ElasticSearch Wildcard Tokenizer for Emails

Suppose there are five email addresses stored under the field "email":
1. {"email": "john_1@gmail.com"}
2. {"email": "john_2@gmail.com"}
3. {"email": "john_3@outlook.com"}
4. {"email": "john_4@outlook.com"}
5. {"email": "john_5@yahoo.com"}
When I search with a full email address I get the proper result, but when I search with a partial email I get no results. For example, searching for just joh or john_ returns nothing, while searching for john_1 does return the document. How can I get wildcard-style results in this case?
PUT /test
{
  "settings": {
    "analysis": {
      "filter": {
        "email": {
          "type": "pattern_capture",
          "preserve_original": true,
          "patterns": [
            "([^@]+)",
            "(\\p{L}+)",
            "(\\d+)",
            "@(.+)",
            "([^-@]+)"
          ]
        }
      },
      "analyzer": {
        "email": {
          "tokenizer": "uax_url_email",
          "filter": [
            "email",
            "lowercase",
            "unique"
          ]
        }
      }
    }
  },
  "mappings": {
    "emails": {
      "properties": {
        "email": {
          "type": "string",
          "analyzer": "email",
          "search_analyzer": "standard",
          "fields": {
            "raw": {
              "type": "keyword"
            }
          }
        }
      }
    }
  }
}
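To check which tokens the email analyzer emits at index time (and therefore which fragments, such as john_1 or gmail.com, become directly searchable), the _analyze API can be run against the index; a quick sketch using one of the sample addresses:

GET /test/_analyze
{
  "analyzer": "email",
  "text": "john_1@gmail.com"
}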

Try using Wildcard Query.
Example:
{
  "query": {
    "wildcard": {
      "email": {
        "value": "joh*"
      }
    }
  }
}
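Since the mapping above also defines a raw keyword sub-field, the wildcard can alternatively be run against the untokenized address; a sketch (keep in mind that wildcard queries scan the term dictionary and can be slow on large indices):

GET /test/_search
{
  "query": {
    "wildcard": {
      "email.raw": {
        "value": "john_*"
      }
    }
  }
}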


In Elasticsearch we have the copy_to parameter for copying fields. Is there any way to replicate the same in MongoDB while indexing? Our dynamic template mapping looks like this:

{
  "global_string_mv": {
    "match_mapping_type": "string",
    "match": "global*_string_mv",
    "mapping": {
      "type": "text",
      "copy_to": "global_fields_string_mv",
      "fields": {
        "text_en": {
          "type": "text",
          "analyzer": "text_en_index",
          "search_analyzer": "text_en_query"
        },
        "text_en_shingle": {
          "type": "text",
          "analyzer": "text_shingle_en_index",
          "search_analyzer": "text_shingle_en_query"
        },
        "keyword": {
          "type": "keyword"
        }
      }
    }
  }
}
UPDATE:
PUT my-index-000001
{
  "mappings": {
    "properties": {
      "first_name": {
        "type": "text",
        "copy_to": "full_name"
      },
      "last_name": {
        "type": "text",
        "copy_to": "full_name"
      },
      "full_name": {
        "type": "text"
      }
    }
  }
}
PUT my-index-000001/_doc/1
{
  "first_name": "John",
  "last_name": "Smith"
}
GET my-index-000001/_search
{
  "query": {
    "match": {
      "full_name": {
        "query": "John Smith",
        "operator": "and"
      }
    }
  }
}
The values of the first_name and last_name fields are copied to the full_name field.
The first_name and last_name fields can still be queried for the first name and last name respectively, but the full_name field can be queried for both first and last names.
In Elasticsearch, the copy_to parameter allows you to copy the values of multiple fields into a group field, which can then be queried as a single field.
Similarly, we need to know whether this is possible in MongoDB, and if so, how.
For more details refer to the link below:
https://www.elastic.co/guide/en/elasticsearch/reference/current/copy-to.html
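MongoDB has no direct copy_to equivalent, but a comparable "query several fields as one" effect can be achieved with a multi-field text index. A minimal sketch in mongosh, assuming a users collection with the first_name/last_name fields from the example above:

// A single text index spanning both name fields lets them be searched together,
// similar in spirit to querying a copy_to target field in Elasticsearch.
db.users.createIndex({ first_name: "text", last_name: "text" })

// Matches documents where "John" or "Smith" occurs in either indexed field.
db.users.find({ $text: { $search: "John Smith" } })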

Multifield wildcard search in ElasticSearch

Consider this very basic T-SQL query:
select * from Users
where FirstName like '%dm0e776467@mail.com%'
or LastName like '%dm0e776467@mail.com%'
or Email like '%dm0e776467@mail.com%'
How can I write this in Lucene?
I have tried the following:
The query way (does not work at all, no results):
{
  "query": {
    "bool": {
      "should": [
        {
          "wildcard": {
            "firstName": "dm0e776467@mail.com"
          }
        },
        {
          "wildcard": {
            "lastName": "dm0e776467@mail.com"
          }
        },
        {
          "wildcard": {
            "email": "dm0e776467@mail.com"
          }
        }
      ]
    }
  }
}
The multi_match way (returns anything where mail.com is present):
{
  "query": {
    "multi_match": {
      "query": "dm0e776467@mail.com",
      "fields": [
        "firstName",
        "lastName",
        "email"
      ]
    }
  }
}
A third attempt (returns the expected result, but if I only insert "mail", no results are returned):
{
  "query": {
    "query_string": {
      "query": "\"dm0e776467@mail.com\"",
      "fields": [
        "firstName",
        "lastName",
        "email"
      ],
      "default_operator": "or",
      "allow_leading_wildcard": true
    }
  }
}
It seems to me that there is no way to force Elasticsearch to treat the input string as ONE substring?
The standard (default) analyzer will tokenize this email as follows:
GET _analyze
{
  "text": "dm0e776467@mail.com",
  "analyzer": "standard"
}
yielding
{
  "tokens" : [
    {
      "token" : "dm0e776467",
      ...
    },
    {
      "token" : "mail.com",
      ...
    }
  ]
}
This explains why the multi-match works with any *mail.com suffix and why the wildcards are failing.
I suggest the following modifications to your mapping, inspired by this answer:
PUT users
{
  "settings": {
    "analysis": {
      "filter": {
        "email": {
          "type": "pattern_capture",
          "preserve_original": true,
          "patterns": [
            "([^@]+)",
            "(\\p{L}+)",
            "(\\d+)",
            "@(.+)",
            "([^-@]+)"
          ]
        }
      },
      "analyzer": {
        "email": {
          "tokenizer": "uax_url_email",
          "filter": [
            "email",
            "lowercase",
            "unique"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "email": {
        "type": "text",
        "analyzer": "email"
      },
      "firstName": {
        "type": "text",
        "fields": {
          "as_email": {
            "type": "text",
            "analyzer": "email"
          }
        }
      },
      "lastName": {
        "type": "text",
        "fields": {
          "as_email": {
            "type": "text",
            "analyzer": "email"
          }
        }
      }
    }
  }
}
Note that I've used .as_email fields on your firstName and lastName fields; you may not want to force them to be mapped as emails by default.
Then after indexing a few samples:
POST _bulk
{"index":{"_index":"users","_type":"_doc"}}
{"firstName":"abc","lastName":"adm0e776467@mail.coms","email":"dm0e776467@mail.com"}
{"index":{"_index":"users","_type":"_doc"}}
{"firstName":"xyz","lastName":"opr","email":"dm0e776467@mail.com"}
{"index":{"_index":"users","_type":"_doc"}}
{"firstName":"zyx","lastName":"dm0e776467@mail.com","email":"qwe"}
{"index":{"_index":"users","_type":"_doc"}}
{"firstName":"abc","lastName":"efg","email":"ijk"}
the wildcards are working perfectly fine:
GET users/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "wildcard": {
            "email": "dm0e776467@mail.com"
          }
        },
        {
          "wildcard": {
            "lastName.as_email": "dm0e776467@mail.com"
          }
        },
        {
          "wildcard": {
            "firstName.as_email": "dm0e776467@mail.com"
          }
        }
      ]
    }
  }
}
Do check how this tokenizer works under the hood to prevent 'surprising' query results:
GET users/_analyze
{
  "text": "dm0e776467@mail.com",
  "field": "email"
}
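With the patterns above, the output should contain the original address plus its captured fragments, roughly along these lines (an expectation derived from how pattern_capture with preserve_original behaves; verify against your actual output):

{
  "tokens" : [
    { "token" : "dm0e776467@mail.com", ... },
    { "token" : "dm0e776467", ... },
    { "token" : "mail.com", ... },
    { "token" : "dm", ... },
    { "token" : "e", ... },
    { "token" : "mail", ... },
    { "token" : "com", ... },
    { "token" : "0", ... },
    { "token" : "776467", ... }
  ]
}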

Loopback 3 get relation from embedded model

I'm using LoopBack 3 to build a backend with MongoDB.
I have 3 models: Object, Attachment and AwsS3.
Object has an embedsMany relation to Attachment.
Attachment has a many-to-one (belongsTo) relation to AwsS3.
Objects look like this in MongoDB:
[
  {
    "fieldA": "valueA1",
    "attachments": [
      {
        "id": 1,
        "awsS3Id": "1234"
      },
      {
        "id": 2,
        "awsS3Id": "1235"
      }
    ]
  },
  {
    "fieldA": "valueA2",
    "attachments": [
      {
        "id": 4,
        "awsS3Id": "1236"
      },
      {
        "id": 5,
        "awsS3Id": "1237"
      }
    ]
  }
]
AwsS3 looks like this in MongoDB:
[
  {
    "id": "1",
    "url": "abc.com/1"
  },
  {
    "id": "2",
    "url": "abc.com/2"
  }
]
The question is: how can I get Objects with their Attachments and the AwsS3.url included, over the REST API?
I have tried the include and scope filters, but it didn't work. It looks like this function is not implemented in LoopBack 3, right? Here is what I tried in the GET request:
{
  "filter": {
    "include": {
      "relation": "Attachment",
      "scope": {
        "include": {
          "relation": "awsS3"
        }
      }
    }
  }
}
With this request I only got the Objects with Attachments, without anything from AwsS3.
UPDATE for the relation definitions
The relation from Object to Attachment:
"Attachment": {
"type": "embedsMany",
"model": "Attachment",
"property": "attachments",
"options": {
"validate": true,
"forceId": false
}
},
The relation from Attachment to AwsS3, in attachment.json:
"relations": {
"awsS3": {
"type": "belongsTo",
"model": "AwsS3",
"foreignKey": ""
}
}
In AwsS3.json:
"relations": {
"attachments": {
"type": "hasMany",
"model": "Attachment",
"foreignKey": ""
}
}
Try this filter:
{ "filter": { "include": ["awsS3", "attachments"] } }

How can I use CloudKit web services to query based on a reference field?

I've got two CloudKit data objects that look somewhat like this:
Parent Object:
{
  "records": [
    {
      "recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
      "recordType": "ParentObject",
      "fields": {
        "fsYear": {
          "value": "2015",
          "type": "STRING"
        },
        "displayOrder": {
          "value": 2015221153856287200,
          "type": "INT64"
        },
        "fjpFSGuidForReference": {
          "value": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
          "type": "STRING"
        },
        "fsDateSearch": {
          "value": "2015221153856287158",
          "type": "STRING"
        }
      },
      "recordChangeTag": "id4w7ivn",
      "created": {
        "timestamp": 1439149087571,
        "userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
        "deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
      },
      "modified": {
        "timestamp": 1439149087571,
        "userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
        "deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
      }
    }
  ],
  "total": 1
}
Child Object:
{
  "records": [
    {
      "recordName": "2015221153856287168",
      "recordType": "ChildObject",
      "fields": {
        "District": {
          "value": "002",
          "type": "STRING"
        },
        "ZipCode": {
          "value": "12345",
          "type": "STRING"
        },
        "InspecReference": {
          "value": {
            "recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
            "action": "NONE",
            "zoneID": {
              "zoneName": "_defaultZone"
            }
          },
          "type": "REFERENCE"
        }
      },
      "recordChangeTag": "id4w7lew",
      "created": {
        "timestamp": 1439149090856,
        "userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
        "deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
      },
      "modified": {
        "timestamp": 1439149090856,
        "userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
        "deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
      }
    }
  ],
  "total": 1
}
I'm trying to write a query to directly access the CloudKit web service and return the Child Object based on the reference of the parent object.
My test JSON looks something like this:
{"query":{"recordType":"ChildObject","filterBy":{"fieldName":"InspecReference","fieldValue":{ "value" : "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57", "type" : "string" },"comparator":"EQUALS"}},"zoneID":{"zoneName":"_defaultZone"}}
However, I'm getting the following error from CloudKit:
{"uuid":"33db91f3-b768-4a68-9056-216ecc033e9e","serverErrorCode":"BAD_REQUEST","reason":"BadRequestException:
Unexpected input"}
I'm guessing I have the Record Field Dictionary in the query wrong. However, the documentation isn't clear on what this should look like on a reference object.
You have to re-create the actual reference object inside the filter. In this particular case, the JSON looks like this:
{
  "query": {
    "recordType": "ChildObject",
    "filterBy": {
      "fieldName": "InspecReference",
      "fieldValue": {
        "value": {
          "recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
          "action": "NONE"
        },
        "type": "REFERENCE"
      },
      "comparator": "EQUALS"
    }
  },
  "zoneID": {
    "zoneName": "_defaultZone"
  }
}
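For completeness, a sketch of how this body can be POSTed to the CloudKit web service query endpoint (the container ID, environment, and API token below are placeholders for your own values):

curl -X POST \
  "https://api.apple-cloudkit.com/database/1/iCloud.com.example.app/development/public/records/query?ckAPIToken=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d @query.json

where query.json contains the corrected body shown above.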

ElasticSearch autocomplete returning 0 hits

I am trying to build an autocomplete feature for our database running on MongoDB. We need to provide autocomplete which lets users complete their queries by offering suggestions while they are typing in the search box.
I have a collection of articles from various sources, which has the following fields:
{
  "title": "Its the title of a random article",
  "cont": { "paragraphs": [ .... ] },
  ... and so on ...
}
I went through a video by Clinton Gormley; from 37:00 to 42:00 he describes an autocomplete using edgeNGram. I also referred to this question and recognised that both approaches are almost the same, just with different mappings.
Based on these, I built almost identical settings and mappings, then restored the articles collection to ensure that it is indexed by ElasticSearch.
The indexing scheme is as follows:
POST /title_autocomplete/title
{
  "settings": {
    "analysis": {
      "filter": {
        "autocomplete": {
          "type": "edgeNGram",
          "min_gram": 2,
          "max_gram": 50
        }
      },
      "analyzer": {
        "title": {
          "type": "standard",
          "stopwords": []
        },
        "autocomplete": {
          "type": "autocomplete",
          "tokenizer": "standard",
          "filter": ["lowercase", "autocomplete"]
        }
      }
    }
  },
  "mappings": {
    "title": {
      "type": "multi_field",
      "fields": {
        "title": {
          "type": "string",
          "analyzer": "title"
        },
        "autocomplete": {
          "type": "string",
          "index_analyzer": "autocomplete",
          "search_analyzer": "title"
        }
      }
    }
  }
}
But when I run the search query, I am unable to get any hits!
GET /title_autocomplete/title/_search
{
  "query": {
    "bool": {
      "must": {
        "match": {
          "title.autocomplete": "Its the titl"
        }
      },
      "should": {
        "match": {
          "title": "Its the titl"
        }
      }
    }
  }
}
Can anybody please explain what's wrong with the mapping, query, or settings? I have been reading the ElasticSearch docs for over 7 days now but seem to get nowhere beyond full-text searches!
ElasticSearch version: 0.90.10
MongoDB version: v2.4.9
using _river
Ubuntu 12.04 64bit
UPDATE
I realised that the mapping is broken after applying the previous settings:
GET /title_autocomplete/_mapping
{
  "title_autocomplete": {
    "title": {
      "properties": {
        "analysis": {
          "properties": {
            "analyzer": {
              "properties": {
                "autocomplete": {
                  "properties": {
                    "filter": {
                      "type": "string"
                    },
                    "tokenizer": {
                      "type": "string"
                    },
                    "type": {
                      "type": "string"
                    }
                  }
                },
                "title": {
                  "properties": {
                    "type": {
                      "type": "string"
                    }
                  }
                }
              }
            },
            "filter": {
              "properties": {
                "autocomplete": {
                  "properties": {
                    "max_gram": {
                      "type": "long"
                    },
                    "min_gram": {
                      "type": "long"
                    },
                    "type": {
                      "type": "string"
                    }
                  }
                }
              }
            }
          }
        },
        "content": {
          ... paras and all ...
        },
        "title": {
          "type": "string"
        },
        "url": {
          "type": "string"
        }
      }
    }
  }
}
Analyzers and filters are actually mapped into the document after the settings are applied, whereas the original title field is not affected at all! Is this normal?
I guess this explains why the query is not matching: there is no title.autocomplete field or title.title field at all.
So how should I proceed now?
For those facing this problem, it's better to delete the index and start again instead of wasting time with the _river, just as DrTech pointed out in the comments.
This saves time but is not a solution. (Therefore not marking it as the answer.)
The key is to set up the mappings and index before you initiate the river: the POST /title_autocomplete/title request above indexed the settings body as an ordinary document of type title, which is why the analyzer definitions ended up dynamically mapped as fields; index settings have to be applied with PUT /title_autocomplete before any documents arrive.
We had an existing setup with a MongoDB river and an index called coresearch that we wanted to add autocomplete capability to; this is the set of commands we used to delete the existing index and river and start again.
Stack is:
ElasticSearch 1.1.1
MongoDB 2.4.9
ElasticSearchMapperAttachments v2.0.0
ElasticSearchRiverMongoDb/2.0.0
Ubuntu 12.04.2 LTS
curl -XDELETE "localhost:9200/_river/node"
curl -XDELETE "localhost:9200/coresearch"
curl -XPUT "localhost:9200/coresearch" -d '
{
"settings": {
"analysis": {
"filter": {
"autocomplete_filter": {
"type": "edge_ngram",
"min_gram": 1,
"max_gram": 20
}
},
"analyzer": {
"autocomplete": {
"type": "custom",
"tokenizer": "standard",
"filter": [
"lowercase",
"autocomplete_filter"
]
}
}
}
}
}'
curl -XPUT "localhost:9200/coresearch/_mapping/users" -d '{
"users": {
"properties": {
"firstname": {
"type": "string",
"search_analyzer": "standard",
"index_analyzer": "autocomplete"
},
"lastname": {
"type": "string",
"search_analyzer": "standard",
"index_analyzer": "autocomplete"
},
"username": {
"type": "string",
"search_analyzer": "standard",
"index_analyzer": "autocomplete"
},
"email": {
"type": "string",
"search_analyzer": "standard",
"index_analyzer": "autocomplete"
}
}
}
}'
curl -XPUT "localhost:9200/_river/node/_meta" -d '
{
"type": "mongodb",
"mongodb": {
"servers": [
{ "host": "127.0.0.1", "port": 27017 }
],
"options":{
"exclude_fields": ["time"]
},
"db": "users",
"gridfs": false,
"options": {
"import_all_collections": true
}
},
"index": {
"name": "coresearch",
"type": "documents"
}
}'
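Once the river has synced, a quick way to confirm that the edge-ngram analysis is working is a match query against one of the autocomplete-analyzed fields; a sketch (the partial value joh is just an example):

curl -XGET "localhost:9200/coresearch/_search" -d '
{
  "query": {
    "match": {
      "username": "joh"
    }
  }
}'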