This is what my data looks like:
{
"name": "thename",
"openingTimes": {
"monday": [
{
"start": "10:00",
"end": "14:00"
},
{
"start": "19:00",
"end": "02:30"
}
]
}
}
I want to query this document for places that are open on Monday between 13:00 and 14:00.
I tried this filter but it doesn't return my document:
{
"filter": {
"range": {
"openingTimes.monday.start": {
"lte": "13:00"
},
"openingTimes.monday.end": {
"gte": "14:00"
}
}
}
}
If I simply ask for places open on Monday at 13:00, it works:
{
"filter": {
"range": {
"openingTimes.monday.start": {
"lte": "13:00"
}
}
}
}
Or even closing on Monday at 14:00 or later works too:
{
"filter": {
"range": {
"openingTimes.monday.start": {
"gte": "14:00"
}
}
}
}
but combining both of them doesn't give me anything. How can I create a filter meaning open on Monday between 13:00 and 14:00?
EDIT
This is how I mapped the openingTimes field:
{
"properties": {
"monday": {
"type": "nested",
"properties": {
"start": {"type": "date","format": "hour_minute"},
"end": {"type": "date","format": "hour_minute"}
}
}
}
}
SOLUTION (@DanTuffery)
Based on @DanTuffery's answer I changed my filter to his (which works perfectly) and added the type definition for my openingTimes attribute.
For the record, I am using Elasticsearch as my primary DB through Ruby on Rails with the following gems:
gem 'elasticsearch-rails', git: 'git://github.com/elasticsearch/elasticsearch-rails.git'
gem 'elasticsearch-model', git: 'git://github.com/elasticsearch/elasticsearch-rails.git'
gem 'elasticsearch-persistence', git: 'git://github.com/elasticsearch/elasticsearch-rails.git', require: 'elasticsearch/persistence/model'
Here is what my openingTimes attribute's mapping looks like:
attribute :openingTimes, Hash, mapping: {
type: :object,
properties: {
monday: {
type: :nested,
properties: {
start:{type: :date, format: 'hour_minute'},
end: {type: :date, format: 'hour_minute'}
}
},
tuesday: {
type: :nested,
properties: {
start:{type: :date, format: 'hour_minute'},
end: {type: :date, format: 'hour_minute'}
}
},
...
...
}
}
And here is how I implemented his filter:
def self.openedBetween startTime, endTime, day
self.search filter: {
nested: {
path: "openingTimes.#{day}",
filter: {
bool: {
must: [
{range: {"openingTimes.#{day}.start"=> {lte: startTime}}},
{range: {"openingTimes.#{day}.end" => {gte: endTime}}}
]
}
}
}
}
end
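With that in place, a call would look something like this (Place is just a placeholder name for whichever model defines the method, not the original code):
Place.openedBetween('13:00', '14:00', 'monday')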
First create your mapping with the openingTimes object at the top level.
/PUT http://localhost:9200/demo/test/_mapping
{
"test": {
"properties": {
"openingTimes": {
"type": "object",
"properties": {
"monday": {
"type": "nested",
"properties": {
"start": {
"type": "date",
"format": "hour_minute"
},
"end": {
"type": "date",
"format": "hour_minute"
}
}
}
}
}
}
}
}
Index your document
/POST http://localhost:9200/demo/test/1
{
"name": "thename",
"openingTimes": {
"monday": [
{
"start": "10:00",
"end": "14:00"
},
{
"start": "19:00",
"end": "02:30"
}
]
}
}
With a nested filter query you can search for the document, combining the range queries on the start and end fields inside a bool filter:
/POST http://localhost:9200/demo/test/_search
{
"query": {
"filtered": {
"query": {
"match_all": {}
},
"filter": {
"nested": {
"path": "openingTimes.monday",
"filter": {
"bool": {
"must": [
{
"range": {
"openingTimes.monday.start": {
"lte": "13:00"
}
}
},
{
"range": {
"openingTimes.monday.end": {
"gte": "14:00"
}
}
}
]
}
}
}
}
}
}
}
I have an index that includes a field, and when a '#' is part of the input I cannot get the query to find it.
Field Data: "#3213939"
Query:
GET /invoices/_search
{
"query": {
"bool": {
"should": [
{
"match": {
"referenceNumber": {
"query": "#32"
}
}
},
{
"wildcard": {
"referenceNumber": {
"value": "*#32*"
}
}
}
]
}
}
}
"#" character drops during standard text analyzer this is why you can't find it.
POST _analyze
{
"text": ["#3213939"]
}
Response:
{
"tokens": [
{
"token": "3213939",
"start_offset": 1,
"end_offset": 8,
"type": "<NUM>",
"position": 0
}
]
}
You can update the mapping with a customized analyzer that keeps the '#' character:
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-standard-analyzer.html
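For example, here is a minimal sketch of an index with such a custom analyzer (the index name invoices_v2 and analyzer name keep_hash are placeholders, not from the original question). The whitespace tokenizer leaves "#3213939" as a single token, so a wildcard like *#32* against the analyzed field can match it:
PUT /invoices_v2
{
  "settings": {
    "analysis": {
      "analyzer": {
        "keep_hash": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "referenceNumber": {
        "type": "text",
        "analyzer": "keep_hash",
        "fields": {
          "keyword": { "type": "keyword" }
        }
      }
    }
  }
}
You can check the resulting tokens by naming the analyzer explicitly:
POST /invoices_v2/_analyze
{
  "analyzer": "keep_hash",
  "text": ["#3213939"]
}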
Or you can use the referenceNumber.keyword field:
GET test_invoices/_search
{
"query": {
"bool": {
"should": [
{
"match": {
"referenceNumber": {
"query": "#32"
}
}
},
{
"wildcard": {
"referenceNumber.keyword": {
"value": "*#32*"
}
}
}
]
}
}
}
I have a list of documents in Elasticsearch which contain various fields.
The documents look like below:
[
{
"role": "api_user",
"apikey": "key1",
"data":{},
"@timestamp": "2021-10-06T16:47:13.555Z"
},
{
"role": "api_user",
"apikey": "key1",
"data":{},
"@timestamp": "2021-10-06T18:00:00.555Z"
},
{
"role": "api_user",
"apikey": "key1",
"data":{},
"@timestamp": "2021-10-07T13:47:13.555Z"
}
]
I want to find the number of documents present in a specific date range with a 1-day interval, let's say
2021-10-05T00:47:13.555Z to 2021-10-08T00:13:13.555Z
I am trying the below aggregation for the result.
{
"size": 0,
"query": {
"filter": {
"bool": {
"must": [
{
"range": {
"#timestamp": {
"gte": "2021-10-05T00:47:13.555Z",
"lte": "2021-10-08T00:13:13.555Z",
"format": "strict_date_optional_time"
}
}
}
]
}
}
},
"aggs": {
"data": {
"date_histogram": {
"field": "#timestamp",
"calendar_interval": "day"
}
}
}
}
The expected output should be:
For 2021-10-06 I should get 2 documents and for 2021-10-07 I should get 1 document, and if no docs are present I should get a count of 0.
The solution below works:
{
"size":0,
"query":{
"bool":{
"must":[
],
"filter":[
{
"match_all":{
}
},
{
"range":{
"#timestamp":{
"gte":"2021-10-05T00:47:13.555Z",
"lte":"2021-10-08T00:13:13.555Z",
"format":"strict_date_optional_time"
}
}
}
],
"should":[
],
"must_not":[
]
}
},
"aggs":{
"data":{
"date_histogram":{
"field":"#timestamp",
"fixed_interval":"12h",
"time_zone":"Asia/Calcutta",
"min_doc_count":1
}
}
}
}
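Note that min_doc_count: 1 makes the histogram skip days with no documents. If you also want empty days reported with a count of 0, as the question asks, one variant (a sketch, not tested against this index) is to use min_doc_count: 0 together with extended_bounds so that every day in the range gets a bucket:
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "2021-10-05T00:47:13.555Z",
              "lte": "2021-10-08T00:13:13.555Z",
              "format": "strict_date_optional_time"
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "data": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "day",
        "min_doc_count": 0,
        "extended_bounds": {
          "min": "2021-10-05",
          "max": "2021-10-08"
        }
      }
    }
  }
}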
I would like to get a list of all the forks of a specific repository.
When I try the following in the Explorer:
{
repository(owner: "someOrg", name: "specificRepo") {
name
forkCount
forks(first: 12){
totalCount
nodes{
name
}
}
}
}
It returns the fork count correctly, but inside nodes the name is just the original repo's name, whereas I would like it to give the names of all the forked repositories.
{
"data": {
"repository": {
"name": "specificRepo",
"forkCount": 12,
"forks": {
"totalCount": 1,
"nodes": [
{
"name": "specificRepo",
}
]
}
}
}
}
If you fork a repo and then change the name, the name field will reflect the changed name, not the original name. For example, here is a query listing the forks of Semantic-UI:
{
repository(
owner: "Semantic-Org"
name: "Semantic-Ui"
) {
name
forkCount
forks(
first: 12
orderBy: { field: NAME, direction: DESC }
) {
totalCount
nodes {
name
}
}
}
}
{
"data": {
"repository": {
"name": "Semantic-UI",
"forkCount": 4936,
"forks": {
"totalCount": 4743,
"nodes": [
{
"name": "WEB_JS_GUI-Semantic-UI"
},
{
"name": "Vanz-Sing-In"
},
{
"name": "Somewhat-Semantic-UI"
},
{
"name": "semantic_1.0_experiment"
},
{
"name": "semanticui"
},
{
"name": "semantic.ui_main"
},
{
"name": "Semantic-UI-V2"
},
{
"name": "Semantic-UI-tr"
},
{
"name": "Semantic-UI-tr"
},
{
"name": "Semantic-UI-Stylus"
},
{
"name": "Semantic-UI-pt-br"
},
{
"name": "Semantic-UI-pp"
}
]
}
}
}
}
Today you can also request the nameWithOwner field, and even the url. That will give you the information you need:
{
repository(
owner: "Semantic-Org"
name: "Semantic-Ui"
) {
name
forkCount
forks(
first: 12
orderBy: { field: NAME, direction: DESC }
) {
totalCount
nodes {
name
nameWithOwner
url
}
}
}
}
Which will give you:
{
"data": {
"repository": {
"name": "Semantic-UI",
"forkCount": 5133,
"forks": {
"totalCount": 4919,
"nodes": [
{
"name": "Vanz-Sing-In",
"nameWithOwner": "semantic-apps/Vanz-Sing-In",
"url": "https://github.com/semantic-apps/Vanz-Sing-In"
},
etc.
]
}
}
}
}
I want to filter out the Sum_PKTS values which are lower than 10.
How could I merge the two queries?
Is it possible?
BTW, the "Sum_PKTS" field is a sum over the "Packet" field.
The goal is to filter the local IPs, aggregate the "Packet" field, and finally filter out the Sum_PKTS values lower than 10.
{
"range":{
"Sum_PKTS":{
"gte": 10
}
}
}
--
GET /_search
{
"size" : 0,
"query": {
"bool": {
"should": [
{
"match":{"IPV4_DST_ADDR":"192.168.0.0/16"}
},
{
"match":{"IPV4_SRC_ADDR":"192.168.0.0/16"}
}
],
"minimum_should_match": 1,
"must":[
{
"range":{
"#timestamp":{
"gte":"now-5m"
}
}
}
]
}
},
"aggs": {
"DST_Local_IP": {
"filter": {
"bool": {
"filter": {
"match":{"IPV4_DST_ADDR":"192.168.0.0/16"}
}
}
},
"aggs": {
"genres":{
"terms" : {
"field" : "IPV4_DST_ADDR" ,
"order" : { "Sum_PKTS" : "desc" }
},
"aggs":{
"Sum_PKTS": {
"sum" : { "field" : "Packet" }
}
}
}
}
},
"SRC_Local_IP": {
"filter": {
"bool": {
"filter": {
"match":{"IPV4_SRC_ADDR":"192.168.0.0/16"}
}
}
},
"aggs": {
"genres":{
"terms" : {
"field" : "IPV4_SRC_ADDR" ,
"order" : { "Sum_PKTS" : "desc" }
},
"aggs":{
"Sum_PKTS": {
"sum" : { "field" : "Packet" }
}
}
}
}
}
}
}
thank you in advance!
You can achieve what you want using a bucket selector pipeline aggregation (see the two Sum_PKTS_gte_10 aggregations below):
{
"size": 0,
"query": {
"bool": {
"should": [
{
"match": {
"IPV4_DST_ADDR": "192.168.0.0/16"
}
},
{
"match": {
"IPV4_SRC_ADDR": "192.168.0.0/16"
}
}
],
"minimum_should_match": 1,
"must": [
{
"range": {
"#timestamp": {
"gte": "now-5m"
}
}
}
]
}
},
"aggs": {
"DST_Local_IP": {
"filter": {
"bool": {
"filter": {
"match": {
"IPV4_DST_ADDR": "192.168.0.0/16"
}
}
}
},
"aggs": {
"genres": {
"terms": {
"field": "IPV4_DST_ADDR",
"order": {
"Sum_PKTS": "desc"
}
},
"aggs": {
"Sum_PKTS": {
"sum": {
"field": "Packet"
}
},
"Sum_PKTS_gte_10": {
"bucket_selector": {
"buckets_path": {
"sum_packets": "Sum_PKTS"
},
"script": "params.sum_packets >= 10"
}
}
}
}
}
},
"SRC_Local_IP": {
"filter": {
"bool": {
"filter": {
"match": {
"IPV4_SRC_ADDR": "192.168.0.0/16"
}
}
}
},
"aggs": {
"genres": {
"terms": {
"field": "IPV4_SRC_ADDR",
"order": {
"Sum_PKTS": "desc"
}
},
"aggs": {
"Sum_PKTS": {
"sum": {
"field": "Packet"
}
},
"Sum_PKTS_gte_10": {
"bucket_selector": {
"buckets_path": {
"sum_packets": "Sum_PKTS"
},
"script": "params.sum_packets >= 10"
}
}
}
}
}
}
}
}
I want to put a double filter in aggs, like this:
"aggs": {
"download1" : {
"filter" : [
{ "term": { "IPV4_DST_ADDR":"192.168.0.159"}},
{ "range": { "LAST_SWITCHED": { "gte": "now-5m" } }}
],
"aggs" : {
"downlod_bytes" : { "sum" : { "field" : "IN_BYTES" } }
}
}
}
but it shows me an error:
"error": {
"root_cause": [
{
"type": "parsing_exception",
"reason": "Expected [START_OBJECT] under [filter], but got a [START_ARRAY] in [download1]",
"line": 33,
"col": 24
}
]}
How can I do this? Thank you in advance!
You need to combine both queries with a bool/filter:
{
"aggs": {
"download1": {
"filter": {
"bool": {
"filter": [
{
"term": {
"IPV4_DST_ADDR": "192.168.0.159"
}
},
{
"range": {
"LAST_SWITCHED": {
"gte": "now-5m"
}
}
}
]
}
},
"aggs": {
"downlod_bytes": {
"sum": {
"field": "IN_BYTES"
}
}
}
}
}
}