JSON_QUERY: check which rows have a specific value in their JSON list - T-SQL

I have a table in which each row contains a JSON column. Inside the JSON column there is an object containing an array of tags. What I want is to see which rows in my table have the tag I am searching for.
Here is an example of my data:
Row 1:
Id: xxx
Json Column:
{
"tags":[
{"name":"blue dragon", "weight":0.80},
{"name":"Game", "weight":0.90}
]
}
Row 2:
Id: yyy
Json Column:
{
"tags":[
{"name":"Green dragon", "weight":0.70},
{"name":"fantasy", "weight":0.80}
]
}
So I want to write a query such that if I search for Green it returns only row 2, and if I search for dragon it returns both row 1 and row 2. How can I do that?
I know I can write something like the query below to access my array, but beyond that I am clueless :\
I am looking for something like this:
Select * from myTable
where JSON_query([JsonColumn], '$.tags[*].name') like '%dragon%'
Update
My final query looks like this:
select distinct t.id, d.[key], d.value
from @t t
cross apply openjson(doc, '$.tags') as d
where json_value(d.value, '$.name') like '%dragon%'
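Against the sample data, searching for dragon, that query returns roughly this (the key column is the tag's position inside the tags array; sketched by hand, not a captured run):
id | key | value
1  | 0   | {"name":"blue dragon", "weight":"0.80"}
2  | 0   | {"name":"Green dragon", "weight":"0.70"}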

Something like this:
declare @t table(id int, doc nvarchar(max))
insert into @t(id,doc) values
(1,'
{
"tags":[
{"name":"blue dragon", "weight":"0.80"},
{"name":"Game", "weight":"0.90"}
]
}'),(2,'
{
"tags":[
{"name":"Green dragon", "weight":"0.70"},
{"name":"fantasy", "weight":"0.80"}
]
}')
select t.id, dv.[key], dv.value
from @t t
cross apply openjson(doc,'$.tags') as d
cross apply openjson(d.value) dv
where dv.value like '%dragon%'
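If all you need is the matching rows themselves (rather than the matching key/value pairs), an EXISTS form keeps one output row per id; a sketch against the same @t data, matching only on the tag name:
select t.id
from @t t
where exists (
    select 1
    from openjson(t.doc, '$.tags') d
    where json_value(d.value, '$.name') like '%dragon%'
)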

Related

Search for string in jsonb values - PostgreSQL

For simplicity, a row of the table looks like this:
key: "z06khw1bwi886r18k1m7d66bi67yqlns",
reference_keys: {
"KEY": "1x6t4y",
"CODE": "IT137-521e9204-ABC-TESTE"
"NAME": "A"
},
I have a jsonb object like this one {"KEY": "1x6t4y", "CODE": "IT137-521e9204-ABC-TESTE", "NAME": "A"} and I want to search for a query in the values of any key. If my query is something like '521e9204' I want it to return the row whose reference_keys has '521e9204' in any value. Basically the keys don't matter for this scenario.
Note: the reference_keys column, and so the jsonb object, is always one-dimensional.
I have tried a query like this:
SELECT * FROM table
LEFT JOIN jsonb_each_text(table.reference_keys) AS j(k, value) ON true
WHERE j.value LIKE '%521e9204%'
The problem is that it duplicates rows, one per key in the json, and that messes up the returned items.
I have also thought of doing something like this:
SELECT DISTINCT jsonb_object_keys(reference_keys) from table;
and then use a query like:
SELECT * FROM table
WHERE reference_keys->>'CODE' like '%521e9204%'
It seems like this would work but I really don't want to rely on this solution.
You can rewrite your JOIN to an EXISTS condition to avoid the duplicates:
SELECT t.*
FROM the_table t
WHERE EXISTS (select *
from jsonb_each_text(t.reference_keys) AS j(k, value)
WHERE j.value LIKE '%521e9204%');
If you are using Postgres 12 or later, you can also use a JSON path query:
where jsonb_path_exists(reference_keys, 'strict $.** ? (@ like_regex "521e9204")')
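To see both variants in action, here is a small self-contained sketch (the demo_keys table and its second row are made up for illustration):
CREATE TEMP TABLE demo_keys (key text PRIMARY KEY, reference_keys jsonb);
INSERT INTO demo_keys VALUES
('z06khw1bwi886r18k1m7d66bi67yqlns', '{"KEY": "1x6t4y", "CODE": "IT137-521e9204-ABC-TESTE", "NAME": "A"}'),
('another_row', '{"KEY": "9q8w7e", "CODE": "IT138-no-match", "NAME": "B"}');
-- EXISTS variant: returns the first row exactly once, no duplicates
SELECT t.*
FROM demo_keys t
WHERE EXISTS (SELECT 1
              FROM jsonb_each_text(t.reference_keys) AS j(k, value)
              WHERE j.value LIKE '%521e9204%');
-- jsonpath variant (Postgres 12+): same result
SELECT t.*
FROM demo_keys t
WHERE jsonb_path_exists(t.reference_keys, 'strict $.** ? (@ like_regex "521e9204")');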

Why is the column display order in the json_agg function different from that of a temp table - PostgreSQL

I am creating a temp table in a PostgreSQL function/procedure.
CREATE TEMP TABLE tbl_temp_class(id SERIAL PRIMARY key, batch_id INT, class_id INT, class_name VARCHAR);
Later I am dynamically adding columns to this table using dynamic SQL.
The l_column_counter is incremented within a for loop until n:
l_sql_query := CONCAT('ALTER TABLE tbl_temp_class ADD column ', 'col', '_', l_column_counter, ' varchar default('''');');
EXECUTE l_sql_query;
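Roughly, that loop looks like this (a PL/pgSQL sketch; the upper bound l_n is a made-up name):
FOR l_column_counter IN 1..l_n LOOP
    l_sql_query := CONCAT('ALTER TABLE tbl_temp_class ADD column ', 'col', '_', l_column_counter, ' varchar default('''');');
    EXECUTE l_sql_query;
END LOOP;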
At the end I want the tbl_temp_class result as a json array. Hence I'm doing the below.
select json_agg(ut)
from (
select *
from tbl_temp_class
order by id) ut;
I expect the result for the above query to be
[
{
"id":1,
"batch_id":1,
"class_id":1,
"class_name":"Maths",
"col_1":"",
"col_2":"",
"col_3":"",
"col_4":"",
"col_5":""
},
{
"id":2,
"batch_id":1,
"class_id":2,
"class_name":"History",
"col_1":"",
"col_2":"",
"col_3":"",
"col_4":"",
"col_5":""
}
]
However, the result I am getting is as below; the column display order is scrambled.
Any idea how to fix this? Is this because the json is generated out of a temp table?
I need the column display order in the final json array to be the same as the column display order in the temp table.
[
{
"id":1,
"col_1":"",
"col_2":"",
"col_3":"",
"col_4":"",
"col_5":"",
"class_id":1,
"batch_id":1,
"class_name":"Maths",
},
{
"id":2,
"col_1":"",
"col_2":"",
"col_3":"",
"col_4":"",
"col_5":"",
"class_id":2,
"batch_id":1,
"class_name":"History",
}
]
Did you try an ORDER BY in the aggregation?
SELECT json_agg(ut ORDER BY id) -- this where you want to use the ORDER BY
FROM (
SELECT *
FROM tbl_temp_class
ORDER BY id) ut;
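If the issue is the order of the keys inside each JSON object (rather than the order of the rows), one thing to try is spelling out the columns in the subquery instead of SELECT *, since json_agg emits keys in the order of the row's columns; a sketch assuming the dynamically added columns are col_1 through col_5:
SELECT json_agg(ut ORDER BY id)
FROM (
    SELECT id, batch_id, class_id, class_name,
           col_1, col_2, col_3, col_4, col_5
    FROM tbl_temp_class
) ut;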

Deleting a jsonb array item by name

I have the following table
CREATE TABLE country (
id INTEGER NOT NULL PRIMARY KEY ,
name VARCHAR(50),
extra_info JSONB
);
INSERT INTO country(id,extra_info)
VALUES (1, '{ "name" : "France", "population" : "65000000", "flag_colours": ["red", "blue","white"]}');
INSERT INTO country(id,extra_info)
VALUES (2, '{ "name": "Spain", "population" : "47000000", "borders": ["Portugal", "France"] }');
and I can add an element to the array like this
UPDATE country SET extra_info = jsonb_set(extra_info, '{flag_colours,999999999}', '"green"', true);
and update like this
UPDATE country SET extra_info = jsonb_set(extra_info, '{flag_colours,0}', '"yellow"');
I now would like to delete an array item with a known index or name.
How would I delete a flag_colours element by index or by name?
Update
Delete by index
UPDATE country SET extra_info = extra_info #- '{flag_colours,-1}'
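(For reference, the index after flag_colours picks the element to remove: -1 addresses the last element, while non-negative indexes count from the front, so the following would drop the first colour instead.)
UPDATE country SET extra_info = extra_info #- '{flag_colours,0}'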
How can I delete by name?
As jsonb arrays do not offer direct access to items by value, we can approach this differently: unnest the array -> filter the elements -> stitch things back together. I have formulated a code example with ordered comments to help.
CREATE TABLE new_country AS
-- 4. Return a new array (for immutability) that contains the new desired set of colors
SELECT id, name, jsonb_set(extra_info, '{flag_colours}', new_colors, FALSE) AS new_extra_info
FROM country
-- 3. Use Lateral join to apply this to every row
LEFT JOIN LATERAL (
-- 1. First unnest the desired elements from the Json array as text (to enable filtering)
WITH prep AS (SELECT jsonb_array_elements_text(extra_info -> 'flag_colours') colors FROM country)
SELECT jsonb_agg(colors) new_colors -- 2. Form a new jsonb array after filtering
FROM prep
WHERE colors <> 'red') lat ON TRUE;
In the case you would like to update only the affected column without recreating the main table, you can:
UPDATE country
SET extra_info=new_extra_info
FROM new_country
WHERE country.id = new_country.id;
I have broken it down into two queries to improve readability; however, you can also use a subquery instead of creating a new table (new_country).
With the subquery, it should look like:
UPDATE country
SET extra_info=new_extra_info
FROM (SELECT id, name, jsonb_set(extra_info, '{flag_colours}', new_colors, FALSE) new_extra_info
FROM country
-- 3. Use Lateral join to scale this across tables
LEFT JOIN LATERAL (
-- 1. First unnest the desired elements from the Json array as text (to enable filtering)
WITH prep AS (SELECT jsonb_array_elements_text(extra_info -> 'flag_colours') colors FROM country)
SELECT jsonb_agg(colors) new_colors -- 2. Form a new jsonb array after filtering
FROM prep
WHERE colors <> 'red') lat ON TRUE) new_country
WHERE country.id = new_country.id;
Additionally, you may filter rows (as of PostgreSQL 9.4) via:
SELECT *
FROM country
WHERE (extra_info -> 'flag_colours') ? 'red'
Actually PG12 allows doing it without a JOIN LATERAL:
SELECT jsonb_path_query_array(j #> '{flag_colours}', '$[*] ? (@ != "red")'),
jsonb_set(j, '{flag_colours}', jsonb_path_query_array(j #> '{flag_colours}', '$[*] ? (@ != "red")'))
FROM (SELECT '{ "name" : "France", "population" : "65000000",
"flag_colours": ["red", "blue","white"]}'::jsonb AS j
) AS j
WHERE j @? '$.flag_colours[*] ? (@ == "red")';
jsonb_path_query_array | jsonb_set
------------------------+---------------------------------------------------------------------------------
["blue", "white"] | {"name": "France", "population": "65000000", "flag_colours": ["blue", "white"]}
(1 row)
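Applied to the country table from the question, the same path expressions can drive an UPDATE (a sketch, PG12+; only rows that still contain "red" are touched):
UPDATE country
SET extra_info = jsonb_set(
        extra_info,
        '{flag_colours}',
        jsonb_path_query_array(extra_info #> '{flag_colours}', '$[*] ? (@ != "red")'))
WHERE extra_info @? '$.flag_colours[*] ? (@ == "red")';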

How to filter data from PostgreSQL which has a nested jsonb field in the array field of a jsonb column?

I have a table with a jsonb column, and the documents look like this (simplified):
{
"a": 1,
"rg": [
{
"rti": 2
}
]
}
I want to filter all the rows which have an 'rg' field and at least one 'rti' field in the array.
My current solution is
log->>'rg' ilike '%rti%'
Is there another approach? Probably a faster solution exists.
Another approach would be applying jsonb_each to the jsonb object and then jsonb_array_elements_text to the value extracted by jsonb_each:
select id, js_value2
from
(
select (js).value as js_value, jsonb_array_elements_text((js).value) as js_value2,id
from
(
select jsonb_each(log) as js, id
from tab
) q
where (js).key = 'rg'
) q2
where js_value2 like '%rti%';
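To try it out, a throwaway table (tab and its rows below are made up to match the document shape in the question):
CREATE TEMP TABLE tab (id int, log jsonb);
INSERT INTO tab VALUES
(1, '{"a": 1, "rg": [{"rti": 2}]}'),
(2, '{"a": 2, "rg": [{"x": 5}]}'),
(3, '{"a": 3}');
-- the query above returns only id 1, since only its "rg" array has an element whose text contains 'rti'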

PostgreSQL - jsonb_each

I have just started to play around with jsonb on Postgres and find examples hard to come by online, as it is a relatively new concept. I am trying to use jsonb_each_text to print out a table of keys and values but get CSV-like records in a single column.
I have the below json saved as jsonb and am using it to test my queries.
{
"lookup_id": "730fca0c-2984-4d5c-8fab-2a9aa2144534",
"service_type": "XXX",
"metadata": "sampledata2",
"matrix": [
{
"payment_selection": "type",
"offer_currencies": [
{
"currency_code": "EUR",
"value": 1220.42
}
]
}
]
}
I can gain access to the offer_currencies array with
SELECT element -> 'offer_currencies' -> 0
FROM test t, jsonb_array_elements(t.json -> 'matrix') AS element
WHERE element ->> 'payment_selection' = 'type'
which gives a result of "{"value": 1220.42, "currency_code": "EUR"}", so if I run the below query (I have to change " for ') I get
select * from jsonb_each_text('{"value": 1220.42, "currency_code": "EUR"}')
Key | Value
---------------|----------
"value" | "1220.42"
"currency_code"| "EUR"
So using the above theory I created this query
SELECT jsonb_each_text(data)
FROM (SELECT element -> 'offer_currencies' -> 0 AS data
FROM test t, jsonb_array_elements(t.json -> 'matrix') AS element
WHERE element ->> 'payment_selection' = 'type') AS dummy;
But this prints CSV-like records in one column
record
---------------------
"(value,1220.42)"
"(currency_code,EUR)"
The primary problem here is that you select the whole row as a column (PostgreSQL allows that). You can fix that with SELECT (jsonb_each_text(data)).* ....
But: don't SELECT set-returning functions; that can often lead to errors (or unexpected results). Instead, use e.g. LATERAL joins/sub-queries:
select first_currency.*
from test t
, jsonb_array_elements(t.json -> 'matrix') element
, jsonb_each_text(element -> 'offer_currencies' -> 0) first_currency
where element ->> 'payment_selection' = 'type'
Note: function calls in the FROM clause are implicit LATERAL joins (here: CROSS JOINs).
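For the sample document above, this should produce the two key/value pairs from the question as ordinary columns (sketched by hand, not a captured run):
key            | value
---------------+----------
value          | 1220.42
currency_code  | EUR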
WITH testa AS (
select jsonb_array_elements(t.json -> 'matrix') -> 'offer_currencies' -> 0 as jsonbcolumn
from test t)
SELECT d.key, d.value
FROM testa
join jsonb_each_text(testa.jsonbcolumn) d ON true
ORDER BY 1, 2;
testa gets the intermediate jsonb data. Then a lateral join transforms the jsonb data to table format.