I have this kind of JSONB data in a column named "FORM" in my table "process", and I want to create a view with some of the data that sits inside the objects named "row": from the array named "field" I just want the name and value.
Here is the JSONB:
{
"column": [
{
"row": {
"id": "ebc7afddad474aee8f82930b6dc328fe",
"name": "Details",
"field": [
{
"name": {
"id": "50a5613e97e04cb5b8d32afa8a9975d1",
"label": "name"
},
"value": {
"stringValue": "yhfghg"
}
}
]
}
},
{
"row": {
"id": "5b7471413cbc44c1a39895020bf2ec58",
"name": "leave details",
"field": [
{
"name": {
"id": "bb127e8284c84692aa217539c4312394",
"label": "date"
},
"value": {
"dateValue": 1549065600
}
},
{
"name": {
"id": "33b2c5d1a968481d9d5e386db487de52",
"label": "days",
"options": {
"allowedValues": [
{
"item": "1"
},
{
"item": "2"
},
{
"item": "3"
},
{
"item": "4"
},
{
"item": "5"
}
]
},
"defaultValue": {
"radioButtonValue": "1"
}
},
"value": {
"radioButtonValue": "3"
}
}
]
}
}
]
}
And I want this kind of JSONB in the view; the data comes from the sub-array called "field" inside the object named "row":
[
{
"name": {
"id": "50a5613e97e04cb5b8d32afa8a9975d1"
},
"value": {
"stringValue": "yhfghg"
}
},
{
"name": {
"id": "bb127e8284c84692aa217539c4312394"
},
"value": {
"dateValue": 1549065600
}
},
{
"name": {
"id": "33b2c5d1a968481d9d5e386db487de52"
},
"value": {
"radioButtonValue": "3"
}
}
]
How can I do this?
I used jsonb_array_elements twice to expand the two arrays, then used jsonb_build_object to build the result structure and jsonb_agg to combine the several rows generated above into a single JSONB array.
I included a row number in the results so I could later apply GROUP BY, so that results from several "process" rows would not be accidentally combined by the jsonb_agg.
with cols as (select jsonb_array_elements("FORM"->'column') as r,
                     row_number() over () as n
              from "process"),
     cols2 as (select jsonb_array_elements(r->'row'->'field') as v, n
               from cols)
select jsonb_agg(jsonb_build_object('name', jsonb_build_object('id', v->'name'->'id'),
                                    'value', v->'value'))
from cols2
group by n;
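Since the goal is a view, the same query can be wrapped as-is. A minimal sketch, where the view name process_fields and the column alias fields are placeholders of my own choosing:
-- hypothetical view name; one JSONB array of {name, value} objects per "process" row
create view process_fields as
with cols as (select jsonb_array_elements("FORM"->'column') as r,
                     row_number() over () as n
              from "process"),
     cols2 as (select jsonb_array_elements(r->'row'->'field') as v, n
               from cols)
select jsonb_agg(jsonb_build_object('name', jsonb_build_object('id', v->'name'->'id'),
                                    'value', v->'value')) as fields
from cols2
group by n;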
I have JSON documents in Cloudant like these:
{
"createdAt": "2022-10-26T09:16:29.472Z",
"user_id": "4499c1c2-7507-4707-b0e4-ec83e2d2f34d",
"_id": "606a4d591031c14a8c48fcb4a9541ff0"
}
{
"createdAt": "2022-10-24T11:15:24.269Z",
"user_id": "c4bdcb54-3d0a-4b6a-a8a9-aa12e45345f3",
"_id": "fb24a15d8fb7cdf12feadac08e7c05dc"
}
{
"createdAt": "2022-10-24T11:08:24.269Z",
"user_id": "06d67681-e2c4-4ed4-b40a-5a2c5e7e6ed9",
"_id": "2d277ec3dd8c33da7642b72722aa93ed"
}
I have created a JSON index:
{
"type": "json",
"partitioned": false,
"def": {
"fields": [
{
"createdAt": "asc"
},
{
"user_id": "asc"
}
]
}
}
I have created a text index:
{
"type": "text",
"partitioned": false,
"def": {
"default_analyzer": "keyword",
"default_field": {},
"selector": {},
"fields": [
{
"_id": "string"
},
{
"createdAt": "string"
},
{
"user_id": "string"
}
],
"index_array_lengths": true
}
}
I have created a selector for a Cloudant Query:
{
"selector": {
"$and": [
{
"createdAt": {
"$exists": true
}
},
{
"user_id": {
"$exists": true
}
}
]
},
"fields": [
"createdAt",
"user_id",
"_id"
],
"sort": [
{
"createdAt": "desc"
}
],
"limit": 10,
"skip": 0
}
This query works fine inside the Cloudant environment.
My problem is with the search index.
I created this index function, which works:
function (doc) {
index("specialsearch", doc._id);
if(doc.createdAt){
index("createdAt", doc.createdAt, {"store":true})
}
if(doc.user_id){
index("user_id", doc.user_id, {"store":true})
}
}
Result from this URL:
https://[user]-bluemix.cloudant.com/[database]/_design/attributes/_search/by_all?q=*:*&counts=["createdAt"]&limit=2
{
"total_rows": 10,
"bookmark": "xxx",
"rows": [
{
"id": "fb24a15d8fb7cdf12feadac08e7c05dc",
"order": [
1.0,
0
],
"fields": {
"createdAt": "2022-10-24T11:15:24.269Z",
"user_id": "c4bdcb54-3d0a-4b6a-a8a9-aa12e45345f3"
}
},
{
"id": "dad431735986bbf41b1fa3b1cd30cd0f",
"order": [
1.0,
0
],
"fields": {
"createdAt": "2022-10-24T11:07:02.138Z",
"user_id": "76f03307-4497-4a19-a647-8097fa288e77"
}
},
{
"id": "2d277ec3dd8c33da7642b72722aa93ed",
"order": [
1.0,
0
],
"fields": {
"createdAt": "2022-10-24T11:08:24.269Z",
"user_id": "06d67681-e2c4-4ed4-b40a-5a2c5e7e6ed9"
}
}
]
}
But it doesn't return the ids sorted by date based on the createdAt and user_id keys.
What I would like is to get the results ordered by the createdAt and user_id keys without having to specify a value; a wildcard-type search.
Where am I wrong?
I have read several posts and guides, but I did not understand how to do it.
Thanks for your help.
You say you want to return a list of id, createdAt and user_id, sorted by createdAt and user_id, and that you want all the documents returned.
If that is the case, what you need to do is simply create a MapReduce view of your data that emits the createdAt and user_id fields, in that order, i.e.:
function (doc) {
emit([doc.createdAt, doc.user_id], 1);
}
You don't need to include the document id because that comes for free.
You can then query the view by visiting the URL:
https://<URL>/<database>/_design/<ddoc_name>/_view/<view_name>
You will get all the docs like this:
{"total_rows":3,"offset":0,"rows":[
{"id":"2d277ec3dd8c33da7642b72722aa93ed","key":["2022-10-24T11:08:24.269Z","06d67681-e2c4-4ed4-b40a-5a2c5e7e6ed9"],"value":1},
{"id":"fb24a15d8fb7cdf12feadac08e7c05dc","key":["2022-10-24T11:15:24.269Z","c4bdcb54-3d0a-4b6a-a8a9-aa12e45345f3"],"value":1},
{"id":"606a4d591031c14a8c48fcb4a9541ff0","key":["2022-10-26T09:16:29.472Z","4499c1c2-7507-4707-b0e4-ec83e2d2f34d"],"value":1}
]}
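The rows come back in ascending key order. To get the newest createdAt first, add the standard descending parameter to the same view URL (same placeholder names as above):
https://<URL>/<database>/_design/<ddoc_name>/_view/<view_name>?descending=true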
I've never used JSONB columns before, so I'm struggling with a simple query.
I need to select all the value fields from this json below. The output should be: value1, value2, value3, value4, value5, value6, value7, value8
This is as far as I got, but I could not find a way to go deeper into the JSON:
SELECT result -> 'report' ->> 'products' AS values FROM example_table
I appreciate your help.
CREATE TABLE example_table(
id SERIAL PRIMARY KEY,
result JSONB NOT NULL);
INSERT INTO example_table(result)
VALUES('{
"report": {
"products": [
{
"productName": "Product One",
"types": [
{
"type": "Type One",
"metadata": {
"prices": [
{
"price": {
"value": "value1"
}
},
{
"price": {
"value": "value2"
}
}
]
}
},
{
"type": "Type Two",
"metadata": {
"prices": [
{
"price": {
"value": "value3"
}
},
{
"price": {
"value": "value4"
}
}
]
}
}
]
},
{
"productName": "Product Two",
"types": [
{
"type": "Type One",
"metadata": {
"prices": [
{
"price": {
"value": "value5"
}
},
{
"price": {
"value": "value6"
}
}
]
}
},
{
"type": "Type Two",
"metadata": {
"prices": [
{
"price": {
"value": "value7"
}
},
{
"price": {
"value": "value8"
}
}
]
}
}
]
}
]
}
}');
You should use CROSS JOIN with the jsonb_array_elements function to expand each nested array into a set of rows:
select et.id,
       string_agg(ptmp.value -> 'price' ->> 'value', ',')
from example_table et
cross join jsonb_array_elements(et.result -> 'report' -> 'products') p
cross join jsonb_array_elements(p.value -> 'types') pt
cross join jsonb_array_elements(pt.value -> 'metadata' -> 'prices') ptmp
group by et.id;
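On PostgreSQL 12 or later, the same traversal can also be written with a single jsonpath expression instead of three array expansions. A minimal sketch of that alternative (it is not part of the original answer above):
-- walk products -> types -> metadata -> prices -> price.value in one path
select et.id,
       string_agg(p.value #>> '{}', ',')
from example_table et
cross join lateral jsonb_path_query(
  et.result,
  '$.report.products[*].types[*].metadata.prices[*].price.value'
) as p(value)
group by et.id;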
I have a complex JSON object (simplified for this example) and I cannot figure out the JOLT transform for it. Does anybody have any ideas of what the JOLT spec file should be?
Original JSON
[
{
"date": {
"isoDate": "2019-03-22"
},
"application": {
"name": "SiebelProject"
},
"applicationResults": [
{
"reference": {
"name": "Number of Code Lines"
},
"result": {
"value": 44501
}
},
{
"reference": {
"name": "Transferability"
},
"result": {
"grade": 3.1889542208002064
}
}
]
},
{
"date": {
"isoDate": "2019-03-21"
},
"application": {
"name": "SiebelProject"
},
"applicationResults": [
{
"reference": {
"name": "Number of Code Lines"
},
"result": {
"value": 45000
}
},
{
"reference": {
"name": "Transferability"
},
"result": {
"grade": 3.8
}
}
]
}
]
Desired JSON after transformation and sorting by "Name" ASC, "Date" DESC
[
{
"Name": "SiebelProject",
"Date": "2019-03-22",
"Number of Code Lines": 44501,
"Transferability" : 3.1889542208002064
},
{
"Name": "SiebelProject",
"Date": "2019-03-21",
"Number of Code Lines": 45000,
"Transferability" : 3.8
}
]
I couldn't find a way to do the sort (I'm not even sure you can sort descending in JOLT) but here's a spec to do the transform:
[
{
"operation": "shift",
"spec": {
"*": {
"date": {
"isoDate": "[#3].Date"
},
"application": {
"name": "[#3].Name"
},
"applicationResults": {
"*": {
"reference": {
"name": {
"Number of Code Lines": {
"#(3,result.value)": "[#7].Number of Code Lines"
},
"Transferability": {
"#(3,result.grade)": "[#7].Transferability"
}
}
}
}
}
}
}
}
]
After that, a tool like jq can do the sort.
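A rough jq sketch, assuming the JOLT output was saved to output.json (since every entry here shares the same Name, grouping by Name and sorting each group by Date descending is enough):
jq 'group_by(.Name) | map(sort_by(.Date) | reverse) | add' output.json
group_by orders the groups by Name ascending, sort_by plus reverse puts each group in Date-descending order, and add concatenates the groups back into a single array.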
I'm new to JOLT transformations. I was wondering if there is a way to check the data type and then proceed.
I'm processing JSON to insert records into HBase. From the source I get the timestamp repeated for the same resource id, which I want to use for the row key.
So I just retrieve the first timestamp and concatenate it with the resource id to create the row key. But I have an issue when there is only one timestamp in the record, i.e. when it's not a list. I'd appreciate it if someone could help me handle this situation.
Input data:
{ "resource": {
"id": "200629068",
"name": "resource_name_1)",
"parent": {
"id": 200053744,
"name": "parent_name"
},
"properties": {
"AP_ifSpeed": "0",
"DisplaySpeed": "0 (NotApplicable)",
"description": "description"
}
},
"data": [
{
"metric": {
"id": "2215",
"name": "metric_name 1"
},
"timestamp": 1535064595000,
"value": 0
},
{
"metric": {
"id": "2216",
"name": "metric_name_2"
},
"timestamp": 1535064595000,
"value": 1
}
]
}
JOLT transformation:
[{
"operation": "shift",
"spec": {
"resource": {
// "id": "resource_&",
"name": "resource_&",
"id": "resource_&",
"parent": {
"id": "parent_&",
"name": "parent_&"
},
"properties": {
"*": "&"
}
},
"data": {
"*": {
"metric": {
"id": {
"*": {
"#(3,value)": "&1"
}
},
"name": {
"*": {
"#(3,value)": "&1"
}
}
},
"timestamp": "timestamp"
}
}
}
}, {
"operation": "shift",
"spec": {
"timestamp": {
// get first element from list
"0": "&1"
},
"*": "&"
}
},
{
"operation": "modify-default-beta",
"spec": {
"rowkey": "=concat(#(1,resource_id),'_',#(1,timestamp))"
}
}
]
Output I'm getting:
{ "resource_name" : "resource_name_1)",
"resource_id" : "200629068",
"parent_id" : 200053744,
"parent_name" : "parent_name",
"AP_ifSpeed" : "0",
"DisplaySpeed" : "0 (NotApplicable)",
"description" : "description",
"2215" : 0,
"metric_name 1" : 0,
"timestamp" : 1535064595000,
"2216" : 1,
"metric_name_2" : 1,
"rowkey" : "200629068_1535064595000"
}
When there is only one timestamp I get:
"rowkey" : "200629068_"
In your shift, make the output "timestamp" always be an array, even if the incoming data array only has one element in it:
"timestamp": "timestamp[]"
I've got two CloudKit data objects that look somewhat like this:
Parent Object:
{
"records": [
{
"recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
"recordType": "ParentObject",
"fields": {
"fsYear": {
"value": "2015",
"type": "STRING"
},
"displayOrder": {
"value": 2015221153856287200,
"type": "INT64"
},
"fjpFSGuidForReference": {
"value": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
"type": "STRING"
},
"fsDateSearch": {
"value": "2015221153856287158",
"type": "STRING"
}
},
"recordChangeTag": "id4w7ivn",
"created": {
"timestamp": 1439149087571,
"userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
"deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
},
"modified": {
"timestamp": 1439149087571,
"userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
"deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
}
}
],
"total":
}
Child Object:
{
"records": [
{
"recordName": "2015221153856287168",
"recordType": "ChildObject",
"fields": {
"District": {
"value": "002",
"type": "STRING"
},
"ZipCode": {
"value": "12345",
"type": "STRING"
},
"InspecReference": {
"value": {
"recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
"action": "NONE",
"zoneID": {
"zoneName": "_defaultZone"
}
},
"type": "REFERENCE"
}
},
"recordChangeTag": "id4w7lew",
"created": {
"timestamp": 1439149090856,
"userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
"deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
},
"modified": {
"timestamp": 1439149090856,
"userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
"deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
}
}
],
"total": 1
}
I'm trying to write a query to directly access the CloudKit web service and return the Child Object based on the reference of the parent object.
My test JSON looks something like this:
{"query":{"recordType":"ChildObject","filterBy":{"fieldName":"InspecReference","fieldValue":{ "value" : "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57", "type" : "string" },"comparator":"EQUALS"}},"zoneID":{"zoneName":"_defaultZone"}}
However, I'm getting the following error from CloudKit:
{"uuid":"33db91f3-b768-4a68-9056-216ecc033e9e","serverErrorCode":"BAD_REQUEST","reason":"BadRequestException:
Unexpected input"}
I'm guessing I have the Record Field Dictionary in the query wrong. However, the documentation isn't clear on what this should look like on a reference object.
You have to re-create the actual object of the reference. In this particular case, the JSON looks like this:
{
"query": {
"recordType": "ChildObject",
"filterBy": {
"fieldName": "InspecReference",
"fieldValue": {
"value": {
"recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
"action": "NONE"
},
"type": "REFERENCE"
},
"comparator": "EQUALS"
}
},
"zoneID": {
"zoneName": "_defaultZone"
}
}
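To run this against the CloudKit web service, the query document goes in the body of a POST to the records/query endpoint. A rough sketch, assuming the public database, that the JSON above is saved as query.json, and placeholder [container], [environment] and [token] values:
curl -X POST \
  'https://api.apple-cloudkit.com/database/1/[container]/[environment]/public/records/query?ckAPIToken=[token]' \
  -H 'Content-Type: application/json' \
  -d @query.json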