using "now" in elasticsearch results in "could not read the current timestamp" error - date

using a "now" in my range query as shown below
"range":{
"metadata_fields.time_start":{
"lte":"now",
"gt":"now-7d"
}
}
results in the following error:
{
"error" : {
"root_cause" : [
{
"type" : "parse_exception",
"reason" : "could not read the current timestamp"
}
],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "memsnippet_idx_1.1",
"node" : "XDHi_2BbSQGb33IHDasxfA",
"reason" : {
"type" : "parse_exception",
"reason" : "could not read the current timestamp",
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "features that prevent cachability are disabled on this context"
}
}
}
]
},
"status" : 400
}
I upgraded to 7.11 today, and my index.requests.cache.enable is true. Why can't Elasticsearch read the current date?
Edit 1:
"aggregations":{
"named_entities":{
"significant_terms":{
"gnd":{
"background_is_superset":false
},
"field":"predicted_fields.named_entities.display.keyword",
"size":10000,
"background_filter":{
"bool":{
"filter":[
{
"match":{
"owner_id":{
"query":8,
"operator":"and"
}
}
},
{
"range":{
"metadata_fields.time_start":{
"lte":"now-7d"
}
}
}
],
"should":[
],
"must":[
]
}
}
}
}
}
This is my significant_terms aggregation query that is not working. "now" works in a range query when filtering, but not inside this aggregation.
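A possible workaround, given the caused_by message ("features that prevent cachability are disabled on this context"): the background_filter appears to be parsed in a context that must remain cacheable, and "now" is not cacheable. Computing the boundary client-side and sending an absolute timestamp sidesteps that. A sketch in plain JavaScript, with field names taken from the query above:

```javascript
// Compute "now-7d" client-side as an absolute ISO timestamp, so the
// background_filter contains no "now" and stays cacheable.
const boundary = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString();

const backgroundFilter = {
  bool: {
    filter: [
      { match: { owner_id: { query: 8, operator: "and" } } },
      { range: { "metadata_fields.time_start": { lte: boundary } } }
    ]
  }
};
```

The trade-off is that the request body now changes on every call, which itself reduces request-cache hits; rounding the boundary (for example to the nearest hour) restores some cacheability.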


Why do I need to specify a $switch default statement while it is optional?

I want to update documents conditionally. I tried $cond, but it requires an expression for the false case, and I don't want to update anything when the condition is false. Below is a sample of the document:
{
"_id" : ObjectId("5bc29e0d0fc2c40a9d628afe"),
"BasicInfo" : {
"RepNo" : "AE179",
"CompanyName" : "First Bancshares Inc",
"IRSNo" : "640862173",
"CIKNo" : "0000947559",
"Name" : "Ordinary Shares",
"Ticker" : "FBMS",
"CUSIP" : "318916103",
"ISIN" : "US3189161033",
"RIC" : "FBMS.O",
"SEDOL" : "2184300",
"DisplayRIC" : "FBMS.OQ",
"InstrumentPI" : "10552665",
"QuotePI" : "26300255",
"Exchange" : "NASDAQ"
},
"Annual" : {
"Date" : ISODate("2017-12-31T00:00:00.000Z"),
"INC" : {
"SIIB" : {
"Description" : "Interest Income, Bank",
"Value" : 66.06941
},
"STIE" : {
"Description" : "Total Interest Expense",
"Value" : 6.90925
},
"ENII" : {
"Description" : "Net Interest Income",
"Value" : 59.16016
...
Then I tried to use $switch, since the documentation says the default statement is optional, and wrote the following code:
db.getCollection('FinancialStatement').aggregate([
{"$unwind":"$Annual"},
{"$addFields":{"Annual.Price":
{"$switch":{
branches:[
{
case: {
"$and":[
{"$eq":["$_id", ObjectId("5bc29e0d0fc2c40a9d628afe")]},
{"$eq":["$Annual.Date", ISODate("2017-12-31 00:00:00.000Z")]}
]
},
then: 1000}
],
default: -2000
}
}
}
}
]
)
It basically adds a new field called Annual.Price if the ObjectId and date requirements are met. However, if I omit the default statement, the program returns an error saying:
Assert: command failed: {
"ok" : 0,
"errmsg" : "$switch could not find a matching branch for an input, and no default was specified.",
"code" : 40066,
"codeName" : "Location40066"
}
From the docs on the usage of default:
Optional. The path to take if no branch case expression evaluates to
true.
Although optional, if default is unspecified and no branch case
evaluates to true, $switch returns an error.
Use $$REMOVE, available in MongoDB 3.6+. Something like the following:
Using $cond
{"$addFields":{
"Annual.Price":{
"$cond":[
{
"$and":[
{"$eq":["$_id",ObjectId("5bc29e0d0fc2c40a9d628afe")]},
{"$eq":["$Annual.Date",ISODate("2017-12-31 00:00:00.000Z")]}
]
},
1000,
"$$REMOVE"
]
}
}}
Using $switch
{"$addFields":{
"Annual.Price":{
"$switch":{
"branches":[
{
"case":{
"$and":[
{"$eq":["$_id",ObjectId("5bc29e0d0fc2c40a9d628afe")]},
{"$eq":["$Annual.Date",ISODate("2017-12-31 00:00:00.000Z")]}
]
},
"then":1000
}
],
"default":"$$REMOVE"
}
}
}}
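For illustration, the stage can be assembled and inspected in plain JavaScript before sending it to the shell (the ObjectId and date are written as plain strings here, since this sketch runs outside mongosh):

```javascript
// Build the $addFields stage: $$REMOVE in the false branch means the
// field is simply not written when the condition fails (MongoDB 3.6+).
const addPriceStage = {
  $addFields: {
    "Annual.Price": {
      $cond: [
        {
          $and: [
            { $eq: ["$_id", "5bc29e0d0fc2c40a9d628afe"] },          // ObjectId(...) in the shell
            { $eq: ["$Annual.Date", "2017-12-31T00:00:00.000Z"] }   // ISODate(...) in the shell
          ]
        },
        1000,       // value when both checks pass
        "$$REMOVE"  // otherwise: leave the document untouched
      ]
    }
  }
};
```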

Can we use a nested where clause in a Sails.js collection?

I am trying the queries below to get documents whose start_date is greater than the current date.
This is the structure of my collection:
{
"_id" : ObjectId("5aeac6cd1b7e6f252832ca0e"),
"recruiter_id" : null,
"university_id" : null,
"type" : "quiz",
"name" : "Enter scheduled quiz without end date",
"description" : "Even after detailed market research, some products just don't work in the market. Here's a case study from the Coca-Cola range. ",
"quizpoll_image" : "story_7.jpeg",
"status" : "1",
"quizpoll_type" : "scheduled",
"duration" : {
"duration_type" : "Question wise",
"total_duration" : "1000"
},
"questions" : [
{
"question_id" : "5aeaa4021b7e6f00dc80c5c6"
},
{
"question_id" : "5aeaa59d1b7e6f00dc80c5d2"
}
],
"date_type" : {
"start_date" : ISODate("2018-05-01T00:00:00.000+0000")
},
"created_at" : ISODate("2018-05-01T00:00:00.000+0000"),
"updated_at" : ISODate("2018-04-26T07:58:17.795+0000")
}
This is the query I am trying:
var isoCurrentDate = current_date.toISOString();
var quizScheduledData = await QuizListingCollection.find({
where: ({
'type':'quiz',
'status':'1',
'quizpoll_type':'scheduled',
'date_type' :{
'start_date':{
'>': isoCurrentDate
}
}
})
});
This is the error I get when I hit this API in Postman:
{
"cause": {
"name": "UsageError",
"code": "E_INVALID_CRITERIA",
"details": "Could not use the provided `where` clause. Could not filter by `date_type`: Unrecognized modifier (`start_date`) within provided constraint for `date_type`."
},
"isOperational": true,
"code": "E_INVALID_CRITERIA",
"details": "Could not use the provided `where` clause. Could not filter by `date_type`: Unrecognized modifier (`start_date`) within provided constraint for `date_type`."
}
Waterline does not support a nested object as a constraint; use dot notation for the embedded attribute instead:
{
where: {
'type':'quiz',
'status':'1',
'quizpoll_type':'scheduled',
'date_type.start_date': {
'>' : isoCurrentDate
}
}
}
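Put together, the corrected call would look something like this (a sketch; model and field names are taken from the question, and the > modifier compares the ISO-8601 strings, whose lexicographic order matches chronological order for this fixed format):

```javascript
// Corrected criteria: a dot path reaches into the embedded date_type
// object instead of nesting a sub-criteria object under it.
const isoCurrentDate = new Date().toISOString();

const criteria = {
  where: {
    type: 'quiz',
    status: '1',
    quizpoll_type: 'scheduled',
    'date_type.start_date': { '>': isoCurrentDate }  // dot path, not { start_date: {...} }
  }
};

// const quizScheduledData = await QuizListingCollection.find(criteria);
```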

Elasticsearch doesn't find value in range query

I launch the following query:
GET archive-bp/_search
{
"query": {
"bool" : {
"filter" : [ {
"bool" : {
"should" : [ {
"terms" : {
"naDataOwnerCode" : [ "ACME-FinServ", "ACME-FinServ CA", "ACME-FinServ NY", "ACME-FinServ TX", "ACME-Shipping APA", "ACME-Shipping Eur", "ACME-Shipping LATAM", "ACME-Shipping ME", "ACME-TelCo-CN", "ACME-TelCo-ESAT", "ACME-TelCo-NL", "ACME-TelCo-PL", "ACME-TelCo-RO", "ACME-TelCo-SA", "ACME-TelCo-Treasury", "Default" ]
}
},
{
"bool" : {
"must_not" : {
"exists" : {
"field" : "naDataOwnerCode"
}
}
}
} ]
}
}, {
"range" : {
"bankCommunicationStatusDate" : {
"from" : "2006-02-27T06:45:47.000Z",
"to" : null,
"time_zone" : "+02:00",
"include_lower" : true,
"include_upper" : true
}
}
} ]
}
}
}
And I receive no results, but the field exists in my index.
When I strip off the data-owner part, I still have no results. When I strip off the bankCommunicationStatusDate range, I get 10 results, so that is where the problem lies.
The query with only the bankCommunicationStatusDate range:
GET archive-bp/_search
{
"query" :
{
"range" : {
"bankCommunicationStatusDate" : {
"from" : "2016-04-27T09:45:43.000Z",
"to" : "2026-04-27T09:45:43.000Z",
"time_zone" : "+02:00",
"include_lower" : true,
"include_upper" : true
}
}
}
}
The mapping of my index contains the following bankCommunicationStatusDate field:
"bankCommunicationStatusDate": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
}
And there are values for the field bankCommunicationStatusDate in elasticsearch:
"bankCommunicationStatusDate": "2016-04-27T09:45:43.000Z"
"bankCommunicationStatusDate": "2016-04-27T09:45:47.000Z"
What is wrong?
What version of Elasticsearch do you use?
I guess the reason is that you should use "gte"/"lte" instead of "from"/"to"/"include_lower"/"include_upper".
According to the documentation for version 0.90.4
https://www.elastic.co/guide/en/elasticsearch/reference/0.90/query-dsl-range-query.html
Deprecated in 0.90.4.
The from, to, include_lower and include_upper parameters have been deprecated in favour of gt, gte, lt and lte.
The strange thing is that I have tried your example on Elasticsearch version 1.7 and it returns data!
I guess the real deprecation took place much later, between 1.7 and the newer version you have.
BTW, you can isolate the problem even further using the Sense plugin for Chrome and this code:
DELETE /test
PUT /test
{
"mappings": {
"myData" : {
"properties": {
"bankCommunicationStatusDate": {
"type": "date"
}
}
}
}
}
PUT test/myData/1
{
"bankCommunicationStatusDate":"2016-04-27T09:45:43.000Z"
}
PUT test/myData/2
{
"bankCommunicationStatusDate":"2016-04-27T09:45:47.000Z"
}
GET test/_search
{
"query" :
{
"range" : {
"bankCommunicationStatusDate" : {
"gte" : "2016-04-27T09:45:43.000Z",
"lte" : "2026-04-27T09:45:43.000Z"
}
}
}
}

Meteor $elemMatch does not match expectations

In the shell / Robomongo:
db.mycol.find({_id:"jodi"},{
"progress":{
$elemMatch:{
"status":"todo"
}
}
});
and the result is as expected.
This is my publish function:
Meteor.publish('mycol', function() {
if(!this.userId)
return null;
coli = Coli.find({clientId:this.userId});
return coli;
});
And my helper:
Coli.find({_id:"jodi"},{
"progress":{
$elemMatch:{
"status":"todo"
}
}
}).fetch()
This is the result from the Meteor console log.
I want to show only the entries with status "todo", not all of them.
If I run it in my console (Robomongo) it works, but if I run it in my script, all the data shows (not only status "todo").
And this is my document:
{
"_id" : "jodi",
"clientId" : "BdTw5TtipGkodGLNY",
"project" : "jodi",
"progress" : [
{
"_id" : "ewCzYjeid9G5vpqNy",
"createdAt" : "2016-02-22T12:41:56+07:00",
"status" : "todo",
"title" : "jodi",
"detail" : "jodi"
},
{
"_id" : "ewCzYjeid9G5vpqsNy",
"createdAt" : "2016-02-22T12:41:56+07:00",
"status" : "doing",
"title" : "jodi",
"detail" : "jodi"
}
]
}
I want to show only the array entries with status "todo".
In the shell it works, but in my script, entries with statuses other than "todo" also appear.
Try this:
Coli.find({_id:"jodi"},
{"progress":{$elemMatch:{"status":"todo"}} },
{fields:{"progress.$": 1}}
).fetch();
Thanks everyone, this is my fixed code:
Coli.find({
"progress":{
$elemMatch:{
"status":"doing"
}
}
})
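If the projection still isn't honored on the client, another option is to filter the array in the helper after fetching, which works regardless of minimongo's projection support (plain JavaScript; field names from the document above):

```javascript
// Keep only the "todo" entries of the progress array, leaving the
// rest of the document untouched.
function todoOnly(doc) {
  return Object.assign({}, doc, {
    progress: (doc.progress || []).filter(function (p) {
      return p.status === "todo";
    })
  });
}

const doc = {
  _id: "jodi",
  progress: [
    { _id: "ewCzYjeid9G5vpqNy", status: "todo", title: "jodi" },
    { _id: "ewCzYjeid9G5vpqsNy", status: "doing", title: "jodi" }
  ]
};

const filtered = todoOnly(doc);
// filtered.progress now contains only the "todo" entry.
```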

MongoDB MapReduce producing different results for each document

This is a follow-up from this question, where I tried to solve this problem with the aggregation framework. Unfortunately, I have to wait before being able to update this particular MongoDB installation to a version that includes the aggregation framework, so I have had to use MapReduce for this fairly simple pivot operation.
I have input data in the format below, with multiple daily dumps:
"_id" : "daily_dump_2013-05-23",
"authors_who_sold_books" : [
{
"id" : "Charles Dickens",
"original_stock" : 253,
"customers" : [
{
"time_bought" : 1368627290,
"customer_id" : 9715923
}
]
},
{
"id" : "JRR Tolkien",
"original_stock" : 24,
"customers" : [
{
"date_bought" : 1368540890,
"customer_id" : 9872345
},
{
"date_bought" : 1368537290,
"customer_id" : 9163893
}
]
}
]
}
I'm after output in the following format, which aggregates across all instances of each (unique) author across all daily dumps:
{
"_id" : "Charles Dickens",
"original_stock" : 253,
"customers" : [
{
"date_bought" : 1368627290,
"customer_id" : 9715923
},
{
"date_bought" : 1368622358,
"customer_id" : 9876234
},
etc...
]
}
I have written this map function...
function map() {
for (var i in this.authors_who_sold_books)
{
author = this.authors_who_sold_books[i];
emit(author.id, {customers: author.customers, original_stock: author.original_stock, num_sold: 1});
}
}
...and this reduce function.
function reduce(key, values) {
sum = 0
for (i in values)
{
sum += values[i].customers.length
}
return {num_sold : sum};
}
However, this gives me the following output:
{
"_id" : "Charles Dickens",
"value" : {
"customers" : [
{
"date_bought" : 1368627290,
"customer_id" : 9715923
},
{
"date_bought" : 1368622358,
"customer_id" : 9876234
},
],
"original_stock" : 253,
"num_sold" : 1
}
}
{ "_id" : "JRR Tolkien", "value" : { "num_sold" : 3 } }
{
"_id" : "JK Rowling",
"value" : {
"customers" : [
{
"date_bought" : 1368627290,
"customer_id" : 9715923
},
{
"date_bought" : 1368622358,
"customer_id" : 9876234
},
],
"original_stock" : 183,
"num_sold" : 1
}
}
{ "_id" : "John Grisham", "value" : { "num_sold" : 2 } }
The even-indexed output documents have customers and original_stock listed, but an incorrect sum for num_sold.
The odd-indexed documents have only num_sold listed, but its value is correct.
Could anyone tell me what it is I'm missing, please?
Your problem is that the format of the reduce function's return value must be identical to the format of the values emitted by the map function (see the requirements for the reduce function for an explanation). Reduce is only called for keys with more than one emitted value, and its output may be fed back into reduce again, which is why some of your documents kept the mapped shape while others were collapsed to just num_sold.
You need to change the code to something like the following to fix the problem:
function map() {
for (var i in this.authors_who_sold_books)
{
author = this.authors_who_sold_books[i];
emit(author.id, {customers: author.customers, original_stock: author.original_stock, num_sold: author.customers.length});
}
}
function reduce(key, values) {
var result = {customers:[] , num_sold:0, original_stock: (values.length ? values[0].original_stock : 0)};
for (i in values)
{
result.num_sold += values[i].num_sold;
result.customers = result.customers.concat(values[i].customers);
}
return result;
}
I hope that helps.
Note the change num_sold: author.customers.length in the map function; I think that's what you want.
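As a quick sanity check outside mongod, the revised reduce can be exercised in plain JavaScript. The key MapReduce requirement is that reduce's own output can be fed back into reduce; the sample values below are shaped like the map output above:

```javascript
// The revised reduce: its output has the same shape as a mapped value,
// so MongoDB can re-reduce partial results safely.
function reduce(key, values) {
  var result = {
    customers: [],
    num_sold: 0,
    original_stock: (values.length ? values[0].original_stock : 0)
  };
  for (var i in values) {
    result.num_sold += values[i].num_sold;
    result.customers = result.customers.concat(values[i].customers);
  }
  return result;
}

// Two values emitted by map for the same author across daily dumps.
const emitted = [
  { customers: [{ customer_id: 9715923 }], original_stock: 253, num_sold: 1 },
  { customers: [{ customer_id: 9876234 }, { customer_id: 9163893 }], original_stock: 253, num_sold: 2 }
];

const once = reduce("Charles Dickens", emitted);
// Re-reducing the result must still be well-formed and lose nothing.
const twice = reduce("Charles Dickens", [once]);
```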