Actual JSON
{
"hits": {
"hits": [
{
"_source": {
"customer": {
"name": "Gnana",
"address": {
"AdrLine": "11 main lane",
"zipcode": "08598"
},
"contact": {
"firstcontact": "fff",
"secondcontact": "yyyy"
}
}
}
}
]
}
}
Jolt spec
[
{
"operation": "shift",
"spec": {
"hits": {
"hits": {
"": {
"_source": {
"customer": {
"name": "customers[&3].&1.name",
"address": {
"AdrLine": "customers[&4].&2.&1.adrline",
"Country": "customers[&4].&2.&1.country"
},
"contact": {
"firstcontact": "customers[&4].&2.&1.contact1",
"secondcontact": "customers[&4].&2.&1.contact2"
}
}
}
}
}
}
}
},
{
"operation": "default",
"spec": {
"customers[]": {
"": {
"customer": null
}
}
}
}
]
With the above spec I get the desired output:
{
"customers": [
{
"customer": {
"name": "Gnana",
"address": {
"adrline": "11 main lane"
},
"contact": {
"contact1": "fff",
"contact2": "yyyy"
}
}
}
]
}
Scenario 2: customer is null
{
"hits": {
"hits": [
{
"_source": {
"customer": null
}
}
]
}
}
I need the output to be:
{
"customers": [
{
"customer": null
}
]
}
I am not getting "customer": null in the actual transformation; instead I get:
{
"customers": []
}
Scenario 3: address is null inside the customer object
{
"hits": {
"hits": [
{
"_source": {
"customer": {
"name": "Gnana",
"address": null,
"contact": {
"firstcontact": "fff",
"secondcontact": "yyyy"
}
}
}
}
]
}
}
Here address is null and I need the output to be:
{
"customers": [
{
"customer": {
"name": "Gnana",
"address": null,
"contact": {
"contact1": "fff",
"contact2": "yyyy"
}
}
}
]
}
but I am not getting "address": null in the actual transformation; instead I get:
{
"customers": [
{
"customer": {
"name": "Gnana",
"contact": {
"contact1": "fff",
"contact2": "yyyy"
}
}
}
]
}
How do I handle the case where the customer object in the JSON is null (I need "customer": null in my response), and the case where an object nested inside another object is null (address inside the customer object), where I need {"customer": {"address": null}} in the Jolt transformation?
Answered here; it requires two "defaulting" steps.
https://github.com/bazaarvoice/jolt/issues/651
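The linked issue has the full spec. As a rough sketch (mine, not taken from the issue) of the second defaulting step, covering only the missing-address case and assuming Jolt's default operation fills in keys that the shift dropped, something like this can be appended after the shift:
{
  "operation": "default",
  "spec": {
    "customers[]": {
      "*": {
        "customer": {
          "address": null
        }
      }
    }
  }
}
The null-customer case still needs the other defaulting step described in the issue, because, as shown above, the shift leaves customers as an empty array when customer itself is null.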
Related question: how can I do this? This is the array (any help would be much appreciated):
{
"results": {
"data": [
{
"name": "xx",
"typeRelationship": [
{
"relationship": "parent",
"type": {
"id": "yyyyy",
}
}
],
"id": "xxxxxxxx"
},
{
"name": "yy",
"typeRelationship": [
{
"relationshipType": "parent",
"type": {
"id": "CCCC"
}
},
{
"relationshipType": "child",
"service": {
"id": "DDDD"
}
},
{
"relationshipType": "child",
"service": {
"id": "xxxxxxxx"
}
}
],
"id": "yyyyy"
}
]
}
}
This is the expected output:
{
"data" : [ {
"id" : "xxxx",
"href" : "xxxxxx",
"relation":"parent"
} ]
}
This works.
[
{
"operation": "shift",
"spec": {
"data": {
"*": {
"type": {
"id": {
"xxxx": {
"#3": "data[]"
}
}
}
}
}
}
}
]
Edit 1
The spec below moves all the values which have id=xxxx to the data array.
[
{
"operation": "shift",
"spec": {
"data": {
"*": {
"type": {
"*": {
"id": {
"xxxx": {
"#(2)": "data[]",
"#(4,relation)": "data[&3].relation"
}
}
}
}
}
}
}
}
]
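In case the numbers are unclear, they are distances counted upward from where the expression sits in the spec. Roughly, for the spec above (this is my reading, not part of the original answer):
"xxxx": {
  "@(2)": "data[]",                      // the value two levels up the input tree, i.e. the object that holds this id
  "@(4,relation)": "data[&3].relation"   // walk four levels up the input tree and grab its "relation" field
}
And "&3" on the right-hand side is whatever key or array index the "*" three levels up in the spec matched.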
This totally works, thanks. Can you please let me know what the 2, 3, and 4 refer to? My array is a bit different and I want to adjust those numbers, but it does not work:
{
"results": {
"data": [
{
"name": "xx",
"typeRelationship": [
{
"relationship": "parent",
"type": {
"id": "yyyyy",
}
}
],
"id": "xxxxxxxx"
},
{
"name": "yy",
"typeRelationship": [
{
"relationshipType": "parent",
"type": {
"id": "CCCC"
}
},
{
"relationshipType": "child",
"service": {
"id": "DDDD"
}
},
{
"relationshipType": "child",
"service": {
"id": "xxxxxxxx"
}
}
],
"id": "yyyyy"
}
]
}
}
expected:
{
"rows" : [ {
"rowdata" : {
"relationshipType" : "child",
"Name" : "yy",
"id" : "yyyyy"
}
} ]
}
I want to pass an event_id to Kibana/Elasticsearch and find the min and max dates from the @timestamp field for this event_id. Then I want to set the date range to these dates and show all the results. I assume this is doable.
I can get the min and max with this aggregation:
GET /filebeat-*/_search
{
"query": {
"match": {
"event_id": 1234
}
},
"aggs" : {
"min_date": {"min": {"field": "#timestamp" }},
"max_date": {"max": {"field": "#timestamp" }}
}
}
and I can get the results by searching for the specific date range:
GET /filebeat-*/_search
{
"query": {
"bool": {
"filter": {
"range": {"#timestamp": {"gte": "2020-09-11T13:35:35.000Z", "lte": "2020-09-24T20:35:07.000Z"}}
}
}
}
}
How can I combine the two so that I can just change the event_id and get an automatic date-range feature?
EDIT:
I can do this:
GET /filebeat-*/_search
{
"query": {
"bool": {
"must": {
"match": {
"event_id": 1234
}
},
"filter": {
"range": {
"#timestamp": {
"lte": "2020-09-25",
"gte": "2020-09-24"
}
}
}
}
},
"aggs": {
"min_date": {
"min": {
"field": "#timestamp"
}
},
"max_date": {
"max": {
"field": "#timestamp"
}
}
}
}
But what I would like to do is something like:
GET /filebeat-*/_search
{
"query": {
"bool": {
"must": {
"match": {
"event_id": 1234
}
},
"filter": {
"range": {
"#timestamp": {
"lte": "max_date",
"gte": "min_date"
}
}
}
}
},
"aggs": {
"min_date": {
"min": {
"field": "#timestamp"
}
},
"max_date": {
"max": {
"field": "#timestamp"
}
}
}
}
But this causes the error: "failed to parse date field [min_date]"
Is it possible to use the aggregated min and max values to define the date range?
Since you have not provided any sample index data, I am applying the range query on a date-type field.
Below is a working example with index mapping, index data, search query, and search result.
Index Mapping:
{
"mappings": {
"properties": {
"date": {
"type": "date"
}
}
}
}
Index Data:
{
"date": "2015-02-10",
"event_id":"1234"
}
{
"date": "2015-01-01",
"event_id":"1235"
}
{
"date": "2015-02-01",
"event_id":"1234"
}
{
"date": "2015-02-01",
"event_id":"1235"
}
{
"date": "2015-01-20",
"event_id":"1234"
}
Search Query:
{
"query": {
"bool": {
"must": {
"match": {
"event_id": 1234
}
},
"filter": {
"range": {
"date": {
"lte": "2015-02-15",
"gte": "2015-01-11"
}
}
}
}
},
"aggs": {
"min_date": {
"min": {
"field": "date"
}
},
"max_date": {
"max": {
"field": "date"
}
}
}
}
Search Result:
"hits": {
"total": {
"value": 3,
"relation": "eq"
},
"max_score": 0.44183272,
"hits": [
{
"_index": "stof_64127765",
"_type": "_doc",
"_id": "3",
"_score": 0.44183272,
"_source": {
"date": "2015-02-01",
"event_id": "1234"
}
},
{
"_index": "stof_64127765",
"_type": "_doc",
"_id": "1",
"_score": 0.44183272,
"_source": {
"date": "2015-02-10",
"event_id": "1234"
}
},
{
"_index": "stof_64127765",
"_type": "_doc",
"_id": "5",
"_score": 0.44183272,
"_source": {
"date": "2015-01-20",
"event_id": "1234"
}
}
]
},
"aggregations": {
"max_date": {
"value": 1.4235264E12,
"value_as_string": "2015-02-10T00:00:00.000Z"
},
"min_date": {
"value": 1.421712E12,
"value_as_string": "2015-01-20T00:00:00.000Z"
}
}
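Note that the query section of a request is evaluated before the aggregations, so min_date/max_date cannot be referenced from a range filter inside the same request. A common pattern (my suggestion, not part of the answer above) is two round trips: first fetch only the bounds for the event_id,
GET /filebeat-*/_search
{
  "size": 0,
  "query": { "match": { "event_id": 1234 } },
  "aggs": {
    "min_date": { "min": { "field": "@timestamp" } },
    "max_date": { "max": { "field": "@timestamp" } }
  }
}
and then plug the returned value_as_string values into the range query already shown in the question.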
I am new to JOLT transformation and am trying to create a transform spec.
I have a list of categories in the object, of which I need to transform only a few details.
My sample JSON and spec are shown below.
In the "0/SYS_CATALOG_DESCRIPTION" list, I need to convert it to a String based on the lang, i.e. for en_US I need to get AA Products,
so the end result will be "_description": "AA Products".
The "subCategories" should give me the following result:
"subCategories": [
{
"_id": "ce_155584",
"_parentIds": ["ce_128375"],
"_description": "Filters" //based on lang = en_US
}
]
Sample JSON:
{
"total": 16,
"max_score": 2.2809339,
"hits": [
{
"_index": "bosch-dms-frontend-service_en_us_1558584002",
"_type": "categories",
"_id": "ce_128375",
"_score": 2.2809339,
"_source": {
"_parentIds": [
"1234"
],
"0/SYS_CATALOG_DESCRIPTION": [
{
"lang": "de_DE",
"value": "AA Produkte"
},
{
"lang": "en_US",
"value": "AA Products"
}
],
"subCategories": [
{
"_index": "bosch-dms-frontend-service_en_us_1558584002",
"_type": "categories",
"_id": "ce_155584",
"_score": 2.2809339,
"_source": {
"_parentIds": [
"ce_128375"
],
"0/SYS_CATALOG_DESCRIPTION": [
{
"lang": "en_US",
"value": "Filters"
},
{
"lang": "zh_CN",
"value": "AA Filters (CN)"
}
],
"0/SYS_SYSTEMNAME": "AA_Filters"
}
}
]
}
}
]
}
SPEC:
[
{
"operation": "shift", // shift operation
"spec": {
"hits": {
"*": {
"_id": "_id",
"_source": {
"_parentIds": "_parentIds",
"0/SYS_CATALOG_DESCRIPTION": "_description",
}
}
}
}
}
]
The desired end result is:
{
"_id" : "ce_128375",
"_parentIds" : [ "1234" ],
"_description" : "AA Products (BR)",
"subCategories": [
{
"_id": "ce_155584",
"_score": 2.2809339,
"_parentIds": ["ce_128375"],
"_description" : "Filters"
}
]
}
I tried several ways but could not achieve the result.
Thank you.
Check if this spec is what you need:
[
{
"operation": "shift", // shift operation
"spec": {
"hits": {
"*": {
"_id": ["&",
"subCategories.[]._parentIds[]"],
"_source": {
"_parentIds": "&",
"0/SYS_CATALOG_DESCRIPTION": {
"*": {
"lang": {
"en_US": {
"#(2,value)": "_description"
}
}
}
},
"subCategories": {
"*": {
"_id": "subCategories.[&1].&",
"_score": "subCategories.[&1].&",
"_source": {
"0/SYS_CATALOG_DESCRIPTION": {
"*": {
"lang": {
"en_US": {
"#(2,value)": "subCategories.[&6]._description"
}
}
}
}
}
}
}
}
}
}
}
}
]
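For anyone new to the idiom used here: the value of lang is matched as if it were a key, and "@(2,value)" then climbs two levels back up the input tree to grab the sibling value field of the same list entry. The core of it, with my comments added:
"0/SYS_CATALOG_DESCRIPTION": {
  "*": {                             // each {lang, value} entry in the list
    "lang": {
      "en_US": {                     // only entries whose lang is en_US
        "@(2,value)": "_description" // take that entry's "value" and write it to _description
      }
    }
  }
}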
I have a complex JSON object (I've simplified it for this example) that I cannot figure out the JOLT transform JSON for. Does anybody have any ideas of what the JOLT spec file should be?
Original JSON
[
{
"date": {
"isoDate": "2019-03-22"
},
"application": {
"name": "SiebelProject"
},
"applicationResults": [
{
"reference": {
"name": "Number of Code Lines"
},
"result": {
"value": 44501
}
},
{
"reference": {
"name": "Transferability"
},
"result": {
"grade": 3.1889542208002064
}
}
]
},
{
"date": {
"isoDate": "2019-03-21"
},
"application": {
"name": "SiebelProject"
},
"applicationResults": [
{
"reference": {
"name": "Number of Code Lines"
},
"result": {
"value": 45000
}
},
{
"reference": {
"name": "Transferability"
},
"result": {
"grade": 3.8
}
}
]
}
]
Desired JSON after transformation and sorting by "Name" ASC, "Date" DESC
[
{
"Name": "SiebelProject",
"Date": "2019-03-22",
"Number of Code Lines": 44501,
"Transferability" : 3.1889542208002064
},
{
"Name": "SiebelProject",
"Date": "2019-03-21",
"Number of Code Lines": 45000,
"Transferability" : 3.8
}
]
I couldn't find a way to do the sort (I'm not even sure you can sort descending in JOLT) but here's a spec to do the transform:
[
{
"operation": "shift",
"spec": {
"*": {
"date": {
"isoDate": "[#3].Date"
},
"application": {
"name": "[#3].Name"
},
"applicationResults": {
"*": {
"reference": {
"name": {
"Number of Code Lines": {
"#(3,result.value)": "[#7].Number of Code Lines"
},
"Transferability": {
"#(3,result.grade)": "[#7].Transferability"
}
}
}
}
}
}
}
}
]
After that there are some tools (like jq I think) that could do the sort.
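For example, a jq filter along these lines should give Name ascending and Date descending (group_by also sorts by the grouping key, and ISO dates sort correctly as strings); the file name is just a placeholder:
jq 'group_by(.Name) | map(sort_by(.Date) | reverse) | add' transformed.json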
I want to sum two fields in my REST API query and order by their sum.
This is my query (the aggregation part of the request body):
{
"aggs": {
"genres": {
"terms": {
"field": "L7_PROTO_NAME.keyword",
"order": {
"sum_bytes": "desc"
}
},
"aggs": {
"in_bytes": {
"sum": {
"field": "IN_BYTES"
}
},
"out_bytes": {
"sum": {
"field": "OUT_BYTES"
}
}
}
}
}
}
Thank you in advance!
You need to create another sub-aggregation that sums the two fields and then order the terms aggregation by that sub-aggregation:
{
"query": {
"bool": {
"should": [
{
"term": {
"_index": "logstash-2018.01.02"
}
},
{
"term": {
"IPV4_DST_ADDR": "192.168.0.159"
}
},
{
"term": {
"IPV4_SRC_ADDR": "192.168.0.159"
}
}
]
}
},
"aggs": {
"genres": {
"terms": {
"field": "L7_PROTO_NAME.keyword",
"order": {
"sum_bytes": "desc"
}
},
"aggs": {
"in_bytes": {
"sum": {
"field": "IN_BYTES"
}
},
"out_bytes": {
"sum": {
"field": "OUT_BYTES"
}
},
"sum_bytes": {
"sum": {
"script": {
"source": "doc.IN_BYTES.value + doc.OUT_BYTES.value"
}
}
}
}
}
}
}
Since scripts are quite computation heavy, you should sum those two fields at indexing time and index the result as a new field that you can use directly in your aggregation, like this:
{
"query": {
"bool": {
"should": [
{
"term": {
"_index": "logstash-2018.01.02"
}
},
{
"term": {
"IPV4_DST_ADDR": "192.168.0.159"
}
},
{
"term": {
"IPV4_SRC_ADDR": "192.168.0.159"
}
}
]
}
},
"aggs": {
"genres": {
"terms": {
"field": "L7_PROTO_NAME.keyword",
"order": {
"sum_bytes": "desc"
}
},
"aggs": {
"in_bytes": {
"sum": {
"field": "IN_BYTES"
}
},
"out_bytes": {
"sum": {
"field": "OUT_BYTES"
}
},
"sum_bytes": {
"sum": {
"field": "SUM_BYTES"
}
}
}
}
}
}
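Regarding the index-time sum: one way to add the SUM_BYTES field (my sketch, not part of the answer above; adjust the field and pipeline names to your setup) is an ingest pipeline with a script processor:
PUT _ingest/pipeline/sum_bytes
{
  "processors": [
    {
      "script": {
        "source": "ctx['SUM_BYTES'] = ctx['IN_BYTES'] + ctx['OUT_BYTES']"
      }
    }
  ]
}
Then index new documents with ?pipeline=sum_bytes (or set the pipeline option on Logstash's elasticsearch output) so the field exists and can be used directly in the terms ordering.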