I run the following query:
GET archive-bp/_search
{
  "query": {
    "bool": {
      "filter": [
        {
          "bool": {
            "should": [
              {
                "terms": {
                  "naDataOwnerCode": [ "ACME-FinServ", "ACME-FinServ CA", "ACME-FinServ NY", "ACME-FinServ TX", "ACME-Shipping APA", "ACME-Shipping Eur", "ACME-Shipping LATAM", "ACME-Shipping ME", "ACME-TelCo-CN", "ACME-TelCo-ESAT", "ACME-TelCo-NL", "ACME-TelCo-PL", "ACME-TelCo-RO", "ACME-TelCo-SA", "ACME-TelCo-Treasury", "Default" ]
                }
              },
              {
                "bool": {
                  "must_not": {
                    "exists": {
                      "field": "naDataOwnerCode"
                    }
                  }
                }
              }
            ]
          }
        },
        {
          "range": {
            "bankCommunicationStatusDate": {
              "from": "2006-02-27T06:45:47.000Z",
              "to": null,
              "time_zone": "+02:00",
              "include_lower": true,
              "include_upper": true
            }
          }
        }
      ]
    }
  }
}
And I receive no results, although the field exists in my index.
When I strip off the data-owner part, I still get no results. When I strip off the bankCommunicationStatusDate range instead, I get 10 results, so that is where the problem lies.
The query with only the bankCommunicationStatusDate range:
GET archive-bp/_search
{
  "query": {
    "range": {
      "bankCommunicationStatusDate": {
        "from": "2016-04-27T09:45:43.000Z",
        "to": "2026-04-27T09:45:43.000Z",
        "time_zone": "+02:00",
        "include_lower": true,
        "include_upper": true
      }
    }
  }
}
The mapping of my index contains the following bankCommunicationStatusDate field:
"bankCommunicationStatusDate": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
}
And there are values for the field bankCommunicationStatusDate in elasticsearch:
"bankCommunicationStatusDate": "2016-04-27T09:45:43.000Z"
"bankCommunicationStatusDate": "2016-04-27T09:45:47.000Z"
What is wrong?
Which version of Elasticsearch are you using?
I suspect the reason is that you should use gte/lte instead of from/to/include_lower/include_upper.
According to the documentation for version 0.90.4
https://www.elastic.co/guide/en/elasticsearch/reference/0.90/query-dsl-range-query.html
"Deprecated in 0.90.4. The from, to, include_lower and include_upper parameters have been deprecated in favour of gt, gte, lt and lte."
The strange thing is that I have tried your example on Elasticsearch 1.7 and it returns data! I guess the actual removal took place much later, somewhere between 1.7 and the newer version you have.
BTW, you can isolate the problem even further using the Sense plugin for Chrome and this code:
DELETE /test
PUT /test
{
  "mappings": {
    "myData": {
      "properties": {
        "bankCommunicationStatusDate": {
          "type": "date"
        }
      }
    }
  }
}
PUT test/myData/1
{
"bankCommunicationStatusDate":"2016-04-27T09:45:43.000Z"
}
PUT test/myData/2
{
"bankCommunicationStatusDate":"2016-04-27T09:45:47.000Z"
}
GET test/_search
{
  "query": {
    "range": {
      "bankCommunicationStatusDate": {
        "gte": "2016-04-27T09:45:43.000Z",
        "lte": "2026-04-27T09:45:43.000Z"
      }
    }
  }
}
I am trying to set up index rollover in OpenSearch with a simple min_doc_count condition, but I am getting the error "message": "Missing rollover_alias index setting [index=app_logs-000002]".
I have a rollover alias called app_logs, and the following policy attached to my indexes (for demo purposes it is a dummy policy that rolls over after 3 documents):
PUT _plugins/_ism/policies/rollover_policy
{
  "policy": {
    "description": "Rollover policy",
    "default_state": "rollover",
    "states": [
      {
        "name": "rollover",
        "actions": [
          {
            "rollover": {
              "min_doc_count": 3
            }
          }
        ],
        "transitions": []
      }
    ],
    "ism_template": [
      {
        "index_patterns": [
          "app_logs-*"
        ]
      }
    ]
  }
}
GET _cat/aliases:
app_logs app_logs-000001 - - - false
app_logs app_logs-000002 - - - true
GET _cat/indices:
yellow open app_logs-000002 V4j0gxaYTcqoQZvtd0u2zc 1 1 6 0 4.1kb 4.1kb
yellow open app_logs-000001 AnPjlOq6Q5We411z2q_YpQ 1 1 5 0 18.8kb 18.8kb
...
When doing
GET _opendistro/_ism/explain/app_logs-000002?pretty I get:
{
"app_logs-000002" : {
"index.plugins.index_state_management.policy_id" : "rollover_policy",
"index.opendistro.index_state_management.policy_id" : "rollover_policy",
"index" : "app_logs-000002",
"index_uuid" : "V4j0gxaYTcqoQZvtd0u2zc",
"policy_id" : "rollover_policy",
"policy_seq_no" : -2,
"policy_primary_term" : 0,
"rolled_over" : false,
"index_creation_date" : 1659299029428,
"state" : {
"name" : "rollover",
"start_time" : 1659299410303
},
"action" : {
"name" : "rollover",
"start_time" : 1659424192817,
"index" : 0,
"failed" : true,
"consumed_retries" : 3,
"last_retry_time" : 1659424804833
},
"step" : {
"name" : "attempt_rollover",
"start_time" : 1659424192817,
"step_status" : "failed"
},
"retry_info" : {
"failed" : false,
"consumed_retries" : 0
},
"info" : {
"message" : "Missing rollover_alias index setting [index=app_logs-000002]"
},
"enabled" : false
},
"total_managed_indices" : 1
}
When I do GET app_logs-000002/_settings I get:
{
"app_logs-000002" : {
"settings" : {
"index" : {
"creation_date" : "1659299029428",
"number_of_shards" : "1",
"number_of_replicas" : "1",
"uuid" : "V4j0gxaYTcqoQZvtd0u2zc",
"version" : {
"created" : "136227827"
},
"provided_name" : "app_logs-000002"
}
}
}
}
so yes, the rollover-alias setting is really missing there. But I would have expected it to be added automatically.
When I do GET _template I get:
{
"ism_rollover" : {
"order" : 0,
"index_patterns" : [
"app_logs-*"
],
"settings" : {
"index" : {
"opendistro" : {
"index_state_management" : {
"rollover_alias" : "app_logs"
}
}
}
},
"mappings" : { },
"aliases" : { }
}
}
so rollover_alias is there in the template. Why is it not applied to a new index created from the template?
Thanks!
I experienced a similar problem. The issue was that the indices needed to be created after the ISM policy and template. I'm not sure if you managed to find a solution, but perhaps this will prove useful for future users.
Some docs:
Very useful sample on setting up a rolling index strategy: https://opensearch.org/docs/latest/im-plugin/ism/policies/#sample-policy-with-ism-template-for-auto-rollover
Official AWS docs on the same topic with some examples: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ism.html
A great writeup on common errors experienced when implementing a rolling index ISM policy: https://aws.amazon.com/premiumsupport/knowledge-center/opensearch-failed-rollover-index/
In your case it appears that the policy was not correctly applied to your indices, which is likely because you created the indices before the policy and template existed. If you want to add a policy to an existing index, see step 6 of "Create an ISM policy" in the AWS docs linked above:
POST _plugins/_ism/add/my-index
{
"policy_id": "my-policy-id"
}
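For an index that already exists and predates the template (like app_logs-000002 here), the AWS troubleshooting article linked above also suggests adding the missing setting directly. A sketch using the names from the question:

```
PUT app_logs-000002/_settings
{
  "index.plugins.index_state_management.rollover_alias": "app_logs"
}
```

The plugins.index_state_management namespace matches what the explain output above reports for policy_id.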
Here is how I went about solving this problem using a policy and template:
Implement an ISM policy (as you did above)
Create an ISM template
PUT _template/ism_rollover_app
{
  "index_patterns": "app_logs-*",
  "settings": {
    "index": {
      "opendistro.index_state_management.rollover_alias": "app_logs"
    }
  }
}
Create an initial index called app_logs-00001 (or some variant that matches the regex ^.*-\d+$)
This should see app_logs-00001 created from the ism_rollover_app template with the app_logs alias associated with it, which should subsequently fix the missing-alias issue.
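As a concrete sketch of the last step (assuming the app_logs alias from the question), the initial index can be bootstrapped together with its write alias in one request, so the rollover target exists from the start:

```
PUT app_logs-00001
{
  "aliases": {
    "app_logs": {
      "is_write_index": true
    }
  }
}
```

Because app_logs-00001 matches the template's index_patterns, it should pick up the rollover_alias setting from ism_rollover_app at creation time.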
Using "now" in my range query, as shown below,
"range": {
  "metadata_fields.time_start": {
    "lte": "now",
    "gt": "now-7d"
  }
}
results in an error as shown below
{
"error" : {
"root_cause" : [
{
"type" : "parse_exception",
"reason" : "could not read the current timestamp"
}
],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "memsnippet_idx_1.1",
"node" : "XDHi_2BbSQGb33IHDasxfA",
"reason" : {
"type" : "parse_exception",
"reason" : "could not read the current timestamp",
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "features that prevent cachability are disabled on this context"
}
}
}
]
},
"status" : 400
}
I upgraded to 7.11 today, and my index.requests.cache.enable is true. Why can't Elasticsearch read the current timestamp?
edit 1:
"aggregations": {
  "named_entities": {
    "significant_terms": {
      "gnd": {
        "background_is_superset": false
      },
      "field": "predicted_fields.named_entities.display.keyword",
      "size": 10000,
      "background_filter": {
        "bool": {
          "filter": [
            {
              "match": {
                "owner_id": {
                  "query": 8,
                  "operator": "and"
                }
              }
            },
            {
              "range": {
                "metadata_fields.time_start": {
                  "lte": "now-7d"
                }
              }
            }
          ],
          "should": [],
          "must": []
        }
      }
    }
  }
}
This is my significant_terms aggregation query that is not working: "now" works in a range query when filtering, but not inside this aggregation's background_filter.
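One workaround to try, assuming the background_filter runs in a context that must remain cacheable (which the error's "features that prevent cachability" wording suggests, and which date math like "now" would break): resolve the dates on the client and send absolute timestamps instead. The field name is from the query above; the literal date is a placeholder the client would compute:

```
"range": {
  "metadata_fields.time_start": {
    "lte": "2021-02-10T00:00:00Z"
  }
}
```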
I have records like:
{
"_id" : ObjectId("5f99cede36fd08653a3d4e92"),
"accessions" : {
"sample_accessions" : {
"5f99ce9636fd08653a3d4e86" : {
"biosampleAccession" : "SAMEA7494329",
"sraAccession" : "ERS5250977",
"submissionAccession" : "ERA3032827",
"status" : "accepted"
},
"5f99ce9636fd08653a3d4e87" : {
"biosampleAccession" : "SAMEA7494330",
"sraAccession" : "ERS5250978",
"submissionAccession" : "ERA3032827",
"status" : "accepted"
}
}
}
}
How do I query by the Mongo id in sample_accessions? I thought this should work, but it doesn't. What should I be doing?
db.getCollection('collection').find({"accessions.sample_accessions":"5f99ce9636fd08653a3d4e86"})
The id is a key, so to check whether the key exists use $exists; to customize the response and return just that object, use a projection:
db.getCollection('collection').find(
{
"accessions.sample_accessions.5f99ce9636fd08653a3d4e86": {
$exists: true
}
},
{ sample_doc: "$accessions.sample_accessions.5f99ce9636fd08653a3d4e86" }
)
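Note that aggregation expressions in a find projection (the "$accessions..." form above) require MongoDB 4.4+. On older versions, the same result can be produced with an aggregation pipeline; a sketch using the key from the question:

```
db.getCollection('collection').aggregate([
  { $match: { "accessions.sample_accessions.5f99ce9636fd08653a3d4e86": { $exists: true } } },
  { $project: { sample_doc: "$accessions.sample_accessions.5f99ce9636fd08653a3d4e86" } }
])
```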
We are using Learning Locker and I am trying to query its MongoDB. Learning Locker stores escaped URLs as object keys, which makes them more difficult to search. I am getting 0 results when I would expect several.
My find is as follows:
{"statement.object.definition.extensions.http://activitystrea&46;ms/schema/1&46;0/device.device_type": "app"}
I assume that this should be escaped somehow, but I am unsure how.
http://activitystrea&46;ms/schema/1&46;0/device
Sample object:
"statement": {
"version" : "1.0.0",
"actor" : { },
"verb" : { },
"context" : { },
"object" : {
"definition" : {
"extensions" : {
"http://activitystrea&46;ms/schema/1&46;0/device" : {
"device_type" : "app"
}
}
}
}
}
I have the following document in a collection:
{
"_id" : ObjectId("546a7a0f44aee82db8469f6d"),
...
"valoresVariablesIterativas" : [
{
"asignaturaVO" : {
"_id" : ObjectId("546a389c44aee54fc83112e9")
},
"valoresEstaticos" : {
"IT_VAR3" : "",
"IT_VAR1" : "",
"IT_VAR2" : "asdasd"
},
"valoresPreestablecidos" : {
"IT_ASIGNATURA" : "Matemáticas",
"IT_NOTA_DEFINITIVA_ASIGNATURA" : ""
}
},
{
"asignaturaVO" : {
"_id" : ObjectId("546a3d8d44aee54fc83112fa")
},
"valoresEstaticos" : {
"IT_VAR3" : "",
"IT_VAR1" : "",
"IT_VAR2" : ""
},
"valoresPreestablecidos" : {
"IT_ASIGNATURA" : "Español",
"IT_NOTA_DEFINITIVA_ASIGNATURA" : ""
}
}
]
...
}
I want to modify an element of valoresEstaticos. I know the document "_id", the asignaturaVO "_id", and the key of the valoresEstaticos item I want to modify.
What is the correct query for this? I have this:
db.myCollection.findAndModify({
query:{"_id" : ObjectId("546a7a0f44aee82db8469f6d")},
update: {
{valoresVariablesIterativas.asignaturaVO._id: ObjectId("546a389c44aee54fc83112e9")},
{ $set: {}}
}
})
but I don't know how to build the query. :(
Help me please, thank you very much!
You can just use update.
db.myCollection.update(
{ "_id" : ObjectId("546a7a0f44aee82db8469f6d"), "valoresVariablesIterativas.asignaturaVO._id" : ObjectId("546a389c44aee54fc83112e9") },
{ "$set" : { "valoresVariablesIterativas.$.valoresEstaticos.IT_VAR3" : 99 } }
)
assuming you want to update the key IT_VAR3. The key piece is the positional update operator $. The condition on the array in the query portion of the update is redundant for finding the document, but it is necessary for $ to target the correct array element.
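On MongoDB 3.6+, an alternative (not from the original answer; a sketch using the same ids) is the filtered positional operator with arrayFilters, which makes the element targeting explicit instead of piggybacking on the query condition:

```
db.myCollection.update(
  { "_id": ObjectId("546a7a0f44aee82db8469f6d") },
  { "$set": { "valoresVariablesIterativas.$[elem].valoresEstaticos.IT_VAR3": 99 } },
  { "arrayFilters": [ { "elem.asignaturaVO._id": ObjectId("546a389c44aee54fc83112e9") } ] }
)
```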