I have a table with three columns: id, name and position. I want to create a JSON array as follows:
[
{"id": 443, "name": "first"},
{"id": 645, "name": "second"}
]
The array should be ordered by the position column.
I started with the following query:
with output as
(
select id, name
from the_table
)
select array_to_json(array_agg(output))
from output
This works great. Now I want to add the ordering. I started with this:
with output as
(
select id, name, position
from the_table
)
select array_to_json(array_agg(output order by output.position))
from output
Now the output is as follows:
[
{"id": 443, "name": "first", "position": 1},
{"id": 645, "name": "second", "position": 2}
]
But I don't want the position field in the output.
I am facing a chicken-and-egg problem: I need the position column to be able to order on it, but I also don't want the position column, as I don't want it in the result output.
How can I fix this?
I don't think the following query is correct, as table ordering is (theoretically) not preserved between queries:
with output as
(
select id, name
from the_table
order by position
)
select array_to_json(array_agg(output))
from output
There are two ways (at least):
Build JSON object:
with t(x,y) as (values(1,1),(2,2))
select json_agg(json_build_object('x',t.x) order by t.y) from t;
Or delete unnecessary key:
with t(x,y) as (values(1,1),(2,2))
select json_agg((to_jsonb(t)-'y')::json order by t.y) from t;
Note that in the second case you need some type casts, because the - operator is defined only for the jsonb type.
Also note that I used direct JSON aggregation with json_agg() instead of the array_to_json(array_agg()) pair.
I have a column in a PostgreSQL 14 table that is a list of dictionaries with 4 keys, like this:
[{"id": 14771, "stock": "alfa-12", "name": "rooftop", "store": "MI"},
{"id": 14700, "stock": "beta-10", "name": "stove", "store": "UK"}]
This list can contain dozens of dicts but they all have id, stock, name and store as keys.
Is there a way to query this field and get it as regular columns like this:
id     stock    name     store
14771  alfa-12  rooftop  MI
14700  beta-10  stove    UK
Use jsonb_array_elements() to get all elements of the json array and jsonb_to_record() to convert the elements to records. Read about the functions in the documentation.
select id, name, stock, store
from my_table
cross join jsonb_array_elements(json_col)
cross join jsonb_to_record(value) as t(id int, name text, stock text, store text)
Test it in db<>fiddle.
For simplicity, a row of the table looks like this:
key: "z06khw1bwi886r18k1m7d66bi67yqlns",
reference_keys: {
"KEY": "1x6t4y",
"CODE": "IT137-521e9204-ABC-TESTE",
"NAME": "A"
},
I have a jsonb object like this one: {"KEY": "1x6t4y", "CODE": "IT137-521e9204-ABC-TESTE", "NAME": "A"}. I want to search for a query string in the values of any key. If my query is something like '521e9204', it should return the rows whose reference_keys contains '521e9204' in any value. Basically, the keys don't matter for this scenario.
Note: the reference_keys column, and thus the jsonb object, is always one-dimensional.
I have tried a query like this:
SELECT * FROM table
LEFT JOIN jsonb_each_text(table.reference_keys) AS j(k, value) ON true
WHERE j.value LIKE '%521e9204%'
The problem is that it duplicates rows for every key in the json, which messes up the returned items.
I have also thought of doing something like this:
SELECT DISTINCT jsonb_object_keys(reference_keys) from table;
and then use a query like:
SELECT * FROM table
WHERE reference_keys->>'CODE' like '%521e9204%'
It seems like this would work but I really don't want to rely on this solution.
You can rewrite your JOIN to an EXISTS condition to avoid the duplicates:
SELECT t.*
FROM the_table t
WHERE EXISTS (select *
from jsonb_each_text(t.reference_keys) AS j(k, value)
WHERE j.value LIKE '%521e9204%');
If you are using Postgres 12 or later, you can also use a JSON path query:
where jsonb_path_exists(reference_keys, 'strict $.** ? (@ like_regex "521e9204")')
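For completeness, a sketch of that predicate in a full query (reusing the table and column names from the EXISTS version above):

```sql
SELECT t.*
FROM the_table t
WHERE jsonb_path_exists(t.reference_keys,
                        'strict $.** ? (@ like_regex "521e9204")');
```

In a jsonpath filter, @ refers to the current item being tested, and $.** visits every value in the document, so this matches when any value contains the substring.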
Following this documentation: https://www.postgresql.org/docs/9.5/functions-json.html I came across this syntax for searching a json array for a constant value using single quotes.
I'd like to do the same but search for the value of a field in a table I'm joining to. I've tried a number of variations of this:
SELECT tableA.id, tableB.json_array FROM tableA
LEFT JOIN tableB ON (tableB.json_array)::jsonb #> tableA.id;
But I am always running into type-related issues. Does the #> operator only work with constants? How can I solve this problem?
If your data is a JSON array, you can use the Postgres jsonb_array_elements_text function, which extracts the values of the array elements. After doing this you can easily use the key values in a query or in WHERE conditions.
Sample query for you:
-- sample format for json_array field: [{"id": 110}, {"id": 115}, {"id": 130}, {"id": 145}, {"id": 152}, {"id": 165}]
select b.* from tableB b
cross join jsonb_array_elements_text(b.json_array) b2(pvalue)
where
(b2.pvalue::jsonb->'id')::int4 > 100
-- (b2.pvalue::jsonb->'id')::int4 = 102
-- (b2.pvalue::jsonb->'id')::int4 in (50, 51, 55)
I have table with JSON-b field like this:
id | data
----------
1 | '{"points": [{"id": 10, "address": "Test 1"}, {"id": 20, "address": "Test 2"}, {"id": 30, "address": "Test 3"}]}'
2 | '{"points": [{"id": 40, "address": "Test 444"}, {"id": 20, "address": "Test 222"}, {"id": 50, "address": "Test 555"}]}'
The JSON-b field "data" contains "points" array.
How can I get all "points" whose id is contained in an array such as [40, 20]? Like a classic IN:
... IN (40, 20)
The query must use a GIN index! The array of IDs will come from a subquery.
You could almost do it with a functional index, using jsonb_path_query_array to extract the data. But as far as I can tell, not quite.
create index on t using gin (jsonb_path_query_array(x,'$.points[*].id'));
And then query with:
select * from t where jsonb_path_query_array(x,'$.points[*].id') ?| '{20,40}';
The problem is that ?| only works with text elements, while in your data the values of 'id' are integers, not text. I thought jsonpath would provide a way to convert them to text, but if it does, I cannot find it.
So instead I think you will have to define your own function which accepts jsonb, and returns int[] or text[] (or jsonb which is an array of text conversions). Then you can build an index on the results of this function. Don't forget to declare it immutable.
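A sketch of that approach, under the same assumptions as the index example above (table t with jsonb column x; the function name point_ids is illustrative, and the snippet is untested): an immutable function extracting the ids as a text array, a GIN index over its result, and the array overlap operator && in place of ?|:

```sql
-- extract all point ids as a text array; must be IMMUTABLE to be indexable
create function point_ids(val jsonb) returns text[]
language sql immutable as $$
  select array(select jsonb_array_elements(val -> 'points') ->> 'id')
$$;

-- GIN supports &&, @> and <@ on plain arrays
create index on t using gin (point_ids(x));

-- matches rows whose points contain id 20 or 40
select * from t where point_ids(x) && array['20', '40'];
```

The array literal on the right-hand side could equally come from a subquery cast to text[].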
You will need to unnest the array (essentially normalizing your data model "on-the-fly") then you can use a subquery to check the value:
select t.*
from the_table t
where exists (select *
from jsonb_array_elements(t.data -> 'points') as x(element)
where (x.element ->> 'id')::int in (select id
from other_table))
I have some rows that look like this:
"id","name","data","created_at"
110,"customerAdd","{\"age\": \"23\", \"name\": \"sally\", \"status\": \"valid\"}","2016-05-10 00:18:53.325752"
111,"customerUpdate","{\"age\": \"19\", \"name\": \"sally\"}","2016-05-10 00:18:53.331443"
112,"customerDelete","{\"name\": \"sally\", \"status\": \"deleted\"}","2016-05-10 00:18:53.338929"
Using a query like this:
SELECT * FROM events WHERE data->>'name' = 'sally';
I'm looking for a way to flatten / reduce / join all of these rows into one row.
Either in JSON, like this:
{"age": 19, "status": "deleted", "name": "sally"}
Or In actual columns, like this:
age, status, name
19, deleted, sally
The general idea is that I want to resolve each prop to the last value it was assigned, but I'd like this to happen deeply.
I'm just looking for any way to get started.
You can use DISTINCT ON over the keys and values from jsonb_each_text, ordered by created_at desc, and then recombine all values using jsonb_object:
with
events(id,name,data,created_at) as (
values (110,'customerAdd','{"age": "23", "name": "sally", "status": "valid"}'::jsonb,'2016-05-10 00:18:53.325752'::timestamp),
(111,'customerUpdate','{"age": "19", "name": "sally"}','2016-05-10 00:18:53.331443'),
(112,'customerDelete','{"name": "sally", "status": "deleted"}','2016-05-10 00:18:53.338929')
),
flattened as (
select distinct on (kv.key) *
from events e
join lateral jsonb_each_text(data) kv on true
order by kv.key, e.created_at desc
)
select jsonb_object(array(select key from flattened order by key),
array(select value from flattened order by key))
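Each key then resolves to its most recent value: age from event 111, name and status from event 112. Against the sample rows, the aggregated object should come out as:

```
{"age": "19", "name": "sally", "status": "deleted"}
```

Note the values are text here, since jsonb_each_text and jsonb_object work with text; a further cast would be needed to get age back as a number.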