Conversion of plain PostgreSQL query to Slick query

I have a single table abc with these (relevant) columns:
create table abc
(
  transaction_id uuid not null,
  store_items jsonb not null
);
store_items is a Sequence[StoreItem] that looks like this:
{"itemId": "123",
"isAccountSafe": false
},
{"itemId": "456",
"isAccountSafe": true
},
{"itemId": "789",
"isAccountSafe": false
}
I want to query the count of store_items in abc where isAccountSafe is false; in the example above the result would be 2. The tricky part is that I'm not joining multiple tables, I'm joining a single table with one of its columns.
Here's the Postgres SQL I have so far:
select count(transaction_id)
from abc
cross join jsonb_array_elements(store_items) elem
where not (elem->>'isAccountSafe')::boolean
I've been racking my brain figuring out how to do this in Slick. My guess was to first query the store_items and then do a joinLeft, something like below, but it's wrong. I don't know how to filter on isAccountSafe, which sits inside the jsonb column.
val getStoreItems = abc.map(_.storeItems)
val finalQuery = abc
.joinLeft(getStoreItems)
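One way out, consistent with the plain-SQL suggestion in the "Writing a rather obtuse JSON query using Slick" answer further down, is Slick's plain SQL interpolation. A minimal sketch, assuming slick.jdbc.PostgresProfile and a Database value db are in scope:

import slick.jdbc.PostgresProfile.api._

// Sketch only: Slick's lifted embedding has no built-in support for
// jsonb_array_elements, so the query stays as plain SQL and the count
// is read back as a single Int.
val unsafeItemCount: DBIO[Int] =
  sql"""
    select count(transaction_id)
    from abc
    cross join jsonb_array_elements(store_items) elem
    where not (elem->>'isAccountSafe')::boolean
  """.as[Int].head

// db.run(unsafeItemCount) yields a Future[Int] (2 for the sample data above).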

Related

Search for string in jsonb values - PostgreSQL

For simplicity, a row of the table looks like this:
key: "z06khw1bwi886r18k1m7d66bi67yqlns",
reference_keys: {
  "KEY": "1x6t4y",
  "CODE": "IT137-521e9204-ABC-TESTE",
  "NAME": "A"
}
I have a jsonb object like this one {"KEY": "1x6t4y", "CODE": "IT137-521e9204-ABC-TESTE", "NAME": "A"} and I want to search for a query in the values of any key. If my query is something like '521e9204', I want it to return the rows whose reference_keys contains '521e9204' in any value. Basically, the keys don't matter for this scenario.
Note: The reference_keys column, and so the jsonb object, is always one-dimensional (never nested).
I have tried a query like this:
SELECT * FROM table
LEFT JOIN jsonb_each_text(table.reference_keys) AS j(k, value) ON true
WHERE j.value LIKE '%521e9204%'
The problem is that it duplicates rows, one per key in the JSON, and it messes up the returned items.
I have also thought of doing something like this:
SELECT DISTINCT jsonb_object_keys(reference_keys) from table;
and then use a query like:
SELECT * FROM table
WHERE reference_keys->>'CODE' like '%521e9204%'
It seems like this would work but I really don't want to rely on this solution.
You can rewrite your JOIN to an EXISTS condition to avoid the duplicates:
SELECT t.*
FROM the_table t
WHERE EXISTS (SELECT *
              FROM jsonb_each_text(t.reference_keys) AS j(k, value)
              WHERE j.value LIKE '%521e9204%');
If you are using Postgres 12 or later, you can also use a JSON path query:
where jsonb_path_exists(reference_keys, 'strict $.** ? (# like_regex "521e9204")')

PostgreSQL SELECT only values inside jsonb data

I have PostgreSQL data with these values (it is a jsonb column):
SELECT data FROM orders;
[
  {
    "food_id": "1",
    "table": "A12"
  },
  {
    "food_id": "2",
    "table": "A14"
  }
]
I can easily SELECT the data as it is, but how do I convert it into the simplified form below?
My expected result:
SELECT ??? as food_tables FROM orders;
["A12", "A14"]
I personally still don't understand how jsonb_array_elements() works.
Thanks!
If you are using Postgres 12 or later, you can use jsonb_path_query_array()
select jsonb_path_query_array(data, '$[*].table') as food_tables
from orders
You could perform a lateral cross join with the unnested array elements and extract the attributes:
SELECT jsonb_agg(d.elem -> 'table')
FROM orders
CROSS JOIN LATERAL jsonb_array_elements(orders.data) AS d(elem)
GROUP BY orders.id;
Use array_agg instead of jsonb_agg if you want a PostgreSQL array.
It is a mistake to model tabular data as a JSON array. Change your data model so that each array element becomes a row in a database table.

Writing a rather obtuse JSON query using Slick

I am looking to translate an SQL query (Postgres) into Scala Slick code for use in my Play application.
The data looks something like this:
parent_id | json_column
----------+-----------------------------------------
          | [ {"id": "abcde-12345", "data": "..."}
        2 | , {"id": "67890-fghij", "data": "..."}
          | , {"id": "klmno-00000", "data": "..."} ]
Here's my query in PostgreSQL:
SELECT * FROM table1
WHERE id IN (
  SELECT id
  FROM
    table1 t1,
    json_array_elements(t1.json_column) e,
    json_to_record(e.value) AS r("id" text, data text)
  WHERE
    "id" = 'abcde-12345'
    AND t1.parent_id = 2
);
This finds the results I need: any rows in t1 that include a "row" in the json_column array with the id "abcde-12345". The parent_id and id will be passed to this query via query parameters (both Strings).
How would I write this query in Scala using Slick?
The easiest (maybe laziest?) way is probably to just use plain SQL:
sql"""[query]""".as[(Type1, Type2, ...)]
using the $var notation for the variables.
Otherwise you can use SimpleFunction to map the JSON calls, but I'm not quite sure how that works when they generate multiple results per row. Seems like that might get complicated.
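To make the plain-SQL route concrete for this query, here is a hedged sketch. The selected columns and result types are assumptions (adjust them to table1's real schema), parentId is shown as an Int rather than a String, and the inner references are qualified as t1.id and r."id" to avoid ambiguity:

import slick.jdbc.PostgresProfile.api._

// Sketch under assumptions: table1 has columns (id, parent_id, json_column),
// read back here as plain text. $-interpolated variables become JDBC bind
// parameters, so no manual escaping is needed.
def findByJsonId(jsonId: String, parentId: Int): DBIO[Seq[(String, Int, String)]] =
  sql"""
    SELECT id, parent_id, json_column::text FROM table1
    WHERE id IN (
      SELECT t1.id
      FROM table1 t1,
           json_array_elements(t1.json_column) e,
           json_to_record(e.value) AS r("id" text, data text)
      WHERE r."id" = $jsonId
        AND t1.parent_id = $parentId
    )
  """.as[(String, Int, String)]

// db.run(findByJsonId("abcde-12345", 2)) returns a Future[Seq[...]].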

How to filter data from PostgreSQL which has a jsonb nested field in the array field of jsonb?

I have a table with a jsonb column whose documents look like this (simplified):
{
  "a": 1,
  "rg": [
    {
      "rti": 2
    }
  ]
}
I want to filter all the rows which have an 'rg' field with at least one 'rti' field in the array.
My current solution is
log->>'rg' ilike '%rti%'
Is there another approach? A faster solution probably exists.
Another approach would be applying jsonb_each to the jsonb object and then jsonb_array_elements_text to the value extracted by jsonb_each:
select id, js_value2
from
(
  select (js).value as js_value, jsonb_array_elements_text((js).value) as js_value2, id
  from
  (
    select jsonb_each(log) as js, id
    from tab
  ) q
  where (js).key = 'rg'
) q2
where js_value2 like '%rti%';

PostgreSQL - GROUP_CONCAT for JSONB column

I'm trying to find a way to concatenate JSONB values using Postgres.
For example, I have these two rows:
INSERT INTO "testConcat" ("id", "json_data", "groupID")
VALUES (1, {"name": "JSON_name1", "value" : "Toto"}, 5);
INSERT INTO "testConcat" ("id", "json_data", "groupID")
VALUES (2, {"name": "JSON_name2"}, 5);
I would like to do something like:
SELECT GROUP_CONCAT(json_data)
FROM testConcat
GROUP BY groupID
and as a result obtain something like:
[{"name": "JSON_name1", "value": "Toto"}, {"name": "JSON_name2"}]
I tried creating an aggregate function, but when the same key exists in several JSON values they are merged and only the last value is preserved:
DROP AGGREGATE IF EXISTS jsonb_merge(jsonb);
CREATE AGGREGATE jsonb_merge(jsonb) (
  SFUNC = jsonb_concat,
  STYPE = jsonb,
  INITCOND = '{}'
);
When I use this function like this:
SELECT jsonb_merge(json_data)
FROM testConcat
GROUP BY groupID
The result is:
{"name": "JSON_name2", "value": "Toto"}
and not what I want, because
{"name": "JSON_name1"}
is missing. The function preserves only the distinct keys and merges duplicate ones, keeping the last value.
Thanks for any help
If there is always only a single key/value pair in the JSON document, you can do this without a custom aggregate function:
SELECT "groupID", jsonb_object_agg(k, v ORDER BY id)
FROM "testConcat", jsonb_each(json_data) AS x(k, v)
GROUP BY "groupID";
The "last" value is defined by the ordering on the id column.
The custom aggregate function might be faster though.
Finally I found a solution; even if it is not the best, it seems to work.
I created the aggregate function as previously described, with a small modification:
DROP AGGREGATE IF EXISTS jsonb_merge(jsonb);
CREATE AGGREGATE jsonb_merge(jsonb) (
  SFUNC = jsonb_concat,
  STYPE = jsonb,
  INITCOND = '[]'
);
I just replaced:
INITCOND = '{}'
with
INITCOND = '[]'
Then I used it as before:
SELECT jsonb_merge(json_data)
FROM testConcat
GROUP BY groupID