Curly braces missing after replacing JSON data - PostgreSQL

The table dashboard_data contains the following data in a column named mr_data:
{"priority_id": "123", "urgent_problem_id": "111", "important_problem_id": "222"}
{"priority_id": "456", "urgent_problem_id": "", "important_problem_id": "333"}
{"priority_id": "789", "urgent_problem_id": "444", "important_problem_id": ""}
Query:
UPDATE dashboard_data
SET mr_data = replace(dashboard_data.mr_data, 'urgent_problem_id', 'urgent_problem_ids')
WHERE mr_data->>'urgent_problem_id' IS NOT NULL;
Expected result:
{"priority_id": "123", "urgent_problem_ids": {"111"}, "important_problem_ids": {"222"}}
{"priority_id": "456", "urgent_problem_ids": {""}, "important_problem_ids": {"333"}}
{"priority_id": "789", "urgent_problem_ids": {"444"}, "important_problem_ids": {""}}
Is there any way to get the {} representation of the data during the replace, as shown in the expected result?

Assuming Postgres 9.5 or newer.
You can use jsonb_set() to add a proper JSON array under a new key, then remove the old key from the JSON (which is the only way to rename a key):
update dashboard_data
set mr_data = jsonb_set(mr_data, '{urgent_problem_ids}',
                        jsonb_build_array(mr_data -> 'urgent_problem_id'), true)
              - 'urgent_problem_id'
where mr_data ? 'urgent_problem_id';
jsonb_build_array(mr_data -> 'urgent_problem_id') creates a proper JSON array containing the (single) value of urgent_problem_id. That array is then stored under the new key urgent_problem_ids, and finally the old key urgent_problem_id is removed using the - operator.
Online example: http://rextester.com/POG52716
If your column is not jsonb (which it should be), then you need to cast the column inside jsonb_set() and cast the result back to json, as sketched below.
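A minimal sketch of that cast, assuming mr_data is a json (not jsonb) column; the table and key names are taken from the question:

update dashboard_data
set mr_data = (jsonb_set(mr_data::jsonb, '{urgent_problem_ids}',
                         jsonb_build_array(mr_data::jsonb -> 'urgent_problem_id'), true)
               - 'urgent_problem_id')::json  -- rename done in jsonb, then cast back to json
where mr_data::jsonb ? 'urgent_problem_id';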

Related

Convert Row Count to INT in Azure Data Factory

I am trying to use a Lookup activity to return a row count. I am able to do this, but once I do, I would like to run an If Statement against it: if the count returns more than 20 million rows, I want to execute an additional pipeline for further table manipulation. The issue, however, is that I cannot compare the returned value to a static integer. Below is the current dynamic expression I have for this If Statement:
@greater(int(activity('COUNT_RL_WK_GRBY_LOOKUP').output),20000000)
and when fired, the following error is returned:
{
    "errorCode": "InvalidTemplate",
    "message": "The function 'int' was invoked with a parameter that is not valid. The value cannot be converted to the target type",
    "failureType": "UserError",
    "target": "If Condition1",
    "details": ""
}
Is it possible to convert this returned value to an integer in order to make the comparison? If not, is there a possible workaround to achieve my desired result?
Looks like the issue is with your dynamic expression. Please correct your dynamic expression as shown below and retry.
If firstRowOnly is set to true: @greater(int(activity('COUNT_RL_WK_GRBY_LOOKUP').output.firstRow.propertyname),20000000)
If firstRowOnly is set to false: @greater(int(activity('COUNT_RL_WK_GRBY_LOOKUP').output.value[zero based index].propertyname),20000000)
The lookup result is returned in the output section of the activity run result.
When firstRowOnly is set to true (the default), the output format is as shown in the following code. The lookup result is under a fixed firstRow key. To use the result in a subsequent activity, use the pattern @{activity('MyLookupActivity').output.firstRow.TableName}.
Sample output JSON code is as follows:
{
    "firstRow":
    {
        "Id": "1",
        "TableName": "Table1"
    }
}
When firstRowOnly is set to false, the output format is as shown in the following code. A count field indicates how many records are returned, and detailed values are displayed under a fixed value array. In such a case, the Lookup activity is typically followed by a ForEach activity. You pass the value array to the ForEach activity's items field by using the pattern @activity('MyLookupActivity').output.value. To access elements in the value array, use the following syntax: @{activity('lookupActivity').output.value[zero based index].propertyname}. An example is @{activity('lookupActivity').output.value[0].tablename}.
Sample output JSON code is as follows:
{
    "count": "2",
    "value": [
        {
            "Id": "1",
            "TableName": "Table1"
        },
        {
            "Id": "2",
            "TableName": "Table2"
        }
    ]
}
Hope this helps.
Do this - when you run the debugger, look at the output from your Lookup. It will give a JSON string including the alias for the result of your query. If firstRowOnly is not set, you get a table; but with firstRowOnly you'll get output, then firstRow, and then your alias. That's what you specify.
For example, if you set the alias of your count to Row_Cnt, then:
@greater(activity('COUNT_RL_WK_GRBY_LOOKUP').output.firstRow.Row_Cnt,20000000)
You don't need the int function. You were trying to use it (just like I was!) because the expression was complaining about the datatype. That's because you were getting back a bunch of JSON text as the output instead of the value you were after. It totally makes sense once you realize how it works, but it is NOT intuitively obvious: the expression comes back with data, but it's string content from the JSON, not the value you want. Functions like equals are perfectly happy with that; it's not until you try something like greater, which expects a numeric value, that it chokes.
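To make the alias concrete, here is a sketch of what the Lookup activity's source query might look like; the table name is hypothetical, and Row_Cnt is the alias that the expression above references via output.firstRow.Row_Cnt:

-- hypothetical source query for the Lookup activity
SELECT COUNT(*) AS Row_Cnt
FROM my_source_table;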

Is it possible in Grafana, using a source data table with a JSON field, to get an attribute from that field?

We configure Grafana to use a table as the input data source; it works very well with the fields already defined (like time, status, values, etc.).
But now a new field has been added to the table that is a serialized JSON object, returned from a process we cannot modify.
We need to use a value (a timestamp) that is a property of this serialized object in that table's string field.
One example of a serialized field value is:
{"timestamp":"2020-02-23T18:25:44.012Z","status":"fail","errors":[{"timestamp":"2020-02-23T18:25:43.511Z","message":"invalid key: key is shorter than minimum 16 bytes"},{"timestamp":"2020-02-23T18:25:43.851Z","message":"unauthorized: authorization not possible"}]}
The pretty print is:
{
    "timestamp": "2020-02-23T18:25:44.012Z",
    "status": "fail",
    "errors": [
        {
            "timestamp": "2020-02-23T18:25:43.511Z",
            "message": "invalid key: key is shorter than minimum 16 bytes"
        },
        {
            "timestamp": "2020-02-23T18:25:43.851Z",
            "message": "unauthorized: authorization not possible"
        }
    ]
}
Is there any way to use a value like field.timestamp or field.errors[0].timestamp?
Is there a plugin that allows it, or is it not possible at all?
Use PostgreSQL's JSON column selection in your Grafana query, e.g.:
SELECT
field->'timestamp',
...
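A fuller sketch of such a query, assuming the serialized JSON lives in a text column named field on a hypothetical table named metrics (both names are illustrative, not from the question; a text column must be cast to json before the arrow operators apply):

SELECT
  field::json ->> 'timestamp'                  AS event_time,        -- top-level property as text
  field::json -> 'errors' -> 0 ->> 'timestamp' AS first_error_time,  -- nested array element
  field::json -> 'errors' -> 0 ->> 'message'   AS first_error_msg
FROM metrics;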

How do I use ADF copy activity with multiple rows in the source?

I have a source which is a JSON array; the sink is SQL Server. When I use column mapping and look at the code, I can see the mapping is done against the first element of the array, so each run produces a single record despite the fact that the source has multiple records. How do I use the Copy activity to import ALL the rows?
"enableStaging": false,
"translator": {
"type": "TabularTranslator",
"schemaMapping": {
"['#odata.context']": "BuyerFinancing",
"['#odata.nextLink']": "PropertyCondition",
"value[0].AssociationFee": "AssociationFee",
"value[0].AssociationFeeFrequency": "AssociationFeeFrequency",
"value[0].AssociationName": "AssociationName",
Use * as the source field to indicate all elements of the JSON array. For example, with this JSON:
{
    "results": [
        {"field1": "valuea", "field2": "valueb"},
        {"field1": "valuex", "field2": "valuey"}
    ]
}
and a database table with a column result to store the JSON, the mapping with results as the collection and * as the sub-element will create two records with:
{"field1": "valuea", "field2": "valueb"}
{"field1": "valuex", "field2": "valuey"}
in the result field.
[Screenshot: Copy Data field mapping]
ADF supports cross apply for JSON arrays. Please check the example in this doc: https://learn.microsoft.com/en-us/azure/data-factory/supported-file-formats-and-compression-codecs#jsonformat-example
For schema mapping: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-schema-and-type-mapping#schema-mapping

Query to convert an unknown value to JSON

I have a structure like this:
select to_json('[{
  "11111":
  {
    "Number":{"11111"},
    "createdTime":"2018-06-25 10:30:11.047 +0530",
    "errorMessage":"invalid"
  }
}]')
When I try to convert it to a JSON structure, I get the following error:
ERROR: could not determine polymorphic type because input has type unknown
I need to get a valid json format.
Thanks.
to_json is used to convert values that are not already JSON, e.g. a record, into a proper JSON value.
You apparently want to use a string value as JSON.
In order to do that, you need to supply valid JSON. The part "Number":{"11111"} is, however, invalid JSON; you need to remove the curly braces:
select '[{
  "11111":
  {
    "Number": "11111",
    "createdTime":"2018-06-25 10:30:11.047 +0530",
    "errorMessage":"invalid"
  }
}]'::json
But why are you using a JSON array if you only have a single value? From what you have shown, a single JSON value would make more sense:
select '{
  "11111":
  {
    "Number": "11111",
    "createdTime":"2018-06-25 10:30:11.047 +0530",
    "errorMessage":"invalid"
  }
}'::json
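For contrast, a minimal sketch of what to_json() itself is for: building JSON from a value that is not JSON, such as a whole row (the derived table t and its column names are made up for illustration):

-- to_json() converts a non-JSON value, here a row, into a JSON value
select to_json(t)
from (values ('11111', 'invalid')) as t("Number", "errorMessage");
-- returns: {"Number":"11111","errorMessage":"invalid"}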

How to push a JSON object to a nested array in a JSONB column

I need to somehow push a JSON object to a nested array of potentially existing JSON objects - see "pages" in the JSON snippet below.
{
    "session_id": "someuuid",
    "visitor_ui": 1,
    "pages": [
        {
            "datetime": "2016-08-13T19:45:40.259Z",
            "duration,": 0,
            "device_id": 1,
            "url": {
                "path": "/"
            }
        },
        {
            "datetime": "2016-08-14T19:45:40.259Z",
            "duration,": 0,
            "device_id": 1,
            "url": {
                "path": "/test"
            }
        }
        // how can i push a new value (page) here??
    ],
    "visit_page_count": 2
}
I'm aware of jsonb_set(target jsonb, path text[], new_value jsonb[, create_missing boolean]) (although I still find it a bit hard to comprehend), but I guess using it would require that I first SELECT the whole JSONB column in order to find out how many elements already exist inside "pages" and which index to push to with jsonb_set, right? I'm hoping there's a way in Postgres 9.5 / 9.6 to achieve the equivalent of what we know from programming languages, e.g. pages.push({"key": "val"}).
What would be the best and easiest way to do this with PostgreSQL 9.5 or 9.6?
The trick to jsonb_set() is that it modifies part of a jsonb object but returns the entire object. So you pass it the current value of the column and the path you want to modify ("pages" here, as a string array), then you take the existing array (my_column->'pages') and append (||) the new object to it. All other parts of the jsonb object remain as they were. You are effectively assigning a completely new object to the column, but that is irrelevant, because an UPDATE writes a new row to the physical table anyway.
UPDATE my_table
SET my_column = jsonb_set(my_column, '{pages}', my_column->'pages' || new_json, true);
The optional create_missing parameter, set to true here, adds the "pages" key if it does not already exist. A concrete sketch follows.
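Here is a concrete sketch of the push, using the key names from the question's snippet; the table name my_table, the session_id filter, the sample page object, and the coalesce guard for rows where "pages" is missing are all assumptions for illustration:

UPDATE my_table
SET my_column = jsonb_set(
        my_column,
        '{pages}',
        -- append the new page; coalesce covers rows where "pages" does not exist yet
        coalesce(my_column->'pages', '[]'::jsonb)
            || '{"datetime": "2016-08-15T19:45:40.259Z", "device_id": 1, "url": {"path": "/pushed"}}'::jsonb,
        true)
WHERE my_column->>'session_id' = 'someuuid';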