jsonb search to only show the specific value - postgresql

I found answers to most of my questions in this thread, but I'm having trouble getting the right bit out of my query.
The jsonb-column looks like this:
[
{"price": 67587, "timestamp": "2016-02-11T06:51:30.696427Z"},
{"price": 33964, "timestamp": "2016-02-14T06:49:25.381834Z"},
{"price": 58385, "timestamp": "2016-02-19T06:57:05.819455Z"}, etc..
]
the query looks like this:
SELECT * FROM store_product_history
WHERE EXISTS (SELECT 1 FROM jsonb_array_elements(store_prices)
as j(data) WHERE (data#>> '{price}') LIKE '%236%');
This of course gives me the whole rows as the result, but I would like to get only the timestamp values from those rows. Is this possible?

If you use jsonb_array_elements() in a lateral join you will be able to select single json attributes, e.g.
with store_product_history(store_prices) as (
values
('[
{"price": 67587, "timestamp": "2016-02-11T06:51:30.696427Z"},
{"price": 33964, "timestamp": "2016-02-14T06:49:25.381834Z"},
{"price": 58385, "timestamp": "2016-02-19T06:57:05.819455Z"}
]'::jsonb)
)
select data
from store_product_history,
jsonb_array_elements(store_prices) as j(data)
where (data#>> '{price}') like '%6%';
data
--------------------------------------------------------------
{"price": 67587, "timestamp": "2016-02-11T06:51:30.696427Z"}
{"price": 33964, "timestamp": "2016-02-14T06:49:25.381834Z"}
(2 rows)
Or:
select data->>'timestamp' as timestamp
from store_product_history,
jsonb_array_elements(store_prices) as j(data)
where (data#>> '{price}') like '%6%';
timestamp
-----------------------------
2016-02-11T06:51:30.696427Z
2016-02-14T06:49:25.381834Z
(2 rows)
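If you also need other columns from the base table alongside the timestamps, they can simply be added to the select list. A minimal sketch (the id column is assumed here; use whatever columns your table actually has):
select p.id,  -- assumed key column, adjust to your schema
       j.data ->> 'timestamp' as price_timestamp
from store_product_history p
cross join lateral jsonb_array_elements(p.store_prices) as j(data)
where j.data #>> '{price}' like '%6%';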


How to use wildcard in the path to search jsonb values for postgres?

Using postgres version 10.13
This is my data table jsongraphs:
id | jsongraph
---+-----------
 1 | { "data": {"scopes_by_id": { "121": { "id": 121, "pk": 121, "name": "Prework" } }, "commonsites_by_id": {"123": {"id": 123, "pk": 123, "name": "Somewhere over the rainbow"}}}}
 2 | { "data": {"scopes_by_id": { "156": { "id": 156, "pk": 156, "name": "ABC" } }, "commonsites_by_id": {"123": {"id": 123, "pk": 123, "name": "Somewhere over the rainbow"}}}}
I want the distinct values of scope id and site id which should be (121, 123), (156,123)
So I tried
SELECT DISTINCT
jsongraph->'data'->'scopes_by_id'->>'pk' ,
jsongraph->'data'->'commonsites_by_id'->>'pk' from jsongraphs;
This won't work because the path should be like data->scopes_by_id->121->>pk but I cannot know beforehand the value of 121 in between.
Is there a way to get the values of what I need by filling in some kind of wildcard in the path?
E.g. data->scopes_by_id->{*}->>pk, something like that?
And because this is legacy data, it's also hard to change the data itself.
As the nesting level seems to be fixed, you could do something like this:
select j.id, scopes.*, commonsites.*
from jsongraphs j
cross join lateral (
select jsonb_agg(j.jsongraph #> array['data','scopes_by_id', t1.scope_id, 'pk']) as scope_ids
from jsonb_each_text(j.jsongraph #> '{data,scopes_by_id}') as t1(scope_id)
) scopes
cross join lateral (
select jsonb_agg(j.jsongraph #> array['data','commonsites_by_id', t2.site_id, 'pk']) as common_ids
from jsonb_each_text(j.jsongraph #> '{data,commonsites_by_id}') as t2(site_id)
) commonsites
order by id;
The sub-queries extract all keys below the respective part (e.g. scopes_by_id) and then use the #> operator to access the path for each id inside the original JSON value. Finally, all PK values are aggregated back into a single array.
This returns the PK values from each part separately as an array, in order to handle the situation where you have a different number of "scope ids" and "commonsite ids".
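As a self-contained illustration of the trick the sub-queries rely on (unknown object keys become rows that can be fed back into a path expression), here is a standalone sketch using a literal value instead of the table:
-- jsonb_each_text() produces one row per key, so the otherwise-unknown
-- key "121" becomes available as t1.scope_id and can be used with #>
select t1.scope_id,
       '{"121": {"id": 121, "pk": 121}}'::jsonb #> array[t1.scope_id, 'pk'] as pk
from jsonb_each_text('{"121": {"id": 121, "pk": 121}}'::jsonb) as t1(scope_id);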
If you just want "the first" id from each section, you can remove the aggregation and use a LIMIT clause:
select j.id, scopes.*, commonsites.*
from jsongraphs j
cross join lateral (
select j.jsongraph #> array['data','scopes_by_id', t1.scope_id, 'pk'] as scope_id
from jsonb_each_text(j.jsongraph #> '{data,scopes_by_id}') as t1(scope_id)
limit 1
) scopes
cross join lateral (
select j.jsongraph #> array['data','commonsites_by_id', t2.site_id, 'pk'] as common_id
from jsonb_each_text(j.jsongraph #> '{data,commonsites_by_id}') as t2(site_id)
limit 1
) commonsites
order by id;
Not sure on which level you want to apply the "distinct" part for this.
In Postgres 12 or later, you could achieve the same with:
select id,
jsonb_path_query_array(j.jsongraph, 'strict $.data.scopes_by_id.**.pk') as scopes,
jsonb_path_query_array(j.jsongraph, 'strict $.data.commonsites_by_id.**.pk') as common
from jsongraphs j
order by id;

Handle "excluded" updates in ksqlDB

I've created a stream and a table in this way:
CREATE STREAM user_stream
(id VARCHAR, name VARCHAR, age INT)
WITH (kafka_topic='user_topic', value_format='json', partitions=1);
CREATE TABLE user_table AS
SELECT
id,
LATEST_BY_OFFSET(name) as name,
LATEST_BY_OFFSET(age) as age
FROM user_stream
GROUP BY id
EMIT CHANGES;
Then I submit some events to the user_topic:
{ "id": "user_1", "name": "Sherie Shine", "age": 31 }
{ "id": "user_2", "name": "Liv Denman", "age": 52 }
{ "id": "user_3", "name": "Frona Ness", "age": 44 }
Then query the table as:
SELECT * FROM user_table WHERE age > 40 EMIT CHANGES;
We'll get two rows:
+------------+----------------+-------+
|SID         |NAME            |AGE    |
+------------+----------------+-------+
|user_2      |Liv Denman      |52     |
|user_3      |Frona Ness      |44     |
Post another message to the user_topic:
{ "id": "user_3", "age": 35 }
I expected user_3 to be removed from the current query results, but I received nothing.
If I interrupt the current query with Ctrl+C and issue the same query again, I'll see only user_2 as it's the only one with age > 40 now.
How can we handle the update to remove a row from the filter?
The issue was gone after upgrading from confluent-6.1.1 to confluent-6.2.0.

How to delete a node from a JSONB Array across all table rows in Postgres?

I have a table called "Bookmarks" that contains several standard columns and also a JSONB column called "columnsettings".
The content of this JSONB column looks like this.
[
{
"data": "id",
"width": 25
},
{
"data": "field_1",
"width": 125
},
{
"data": "field_12",
"width": 125
},
{
"data": "field_11",
"width": 125
},
{
"data": "field_2",
"width": 125
},
{
"data": "field_7",
"width": 125
},
{
"data": "field_8",
"width": 125
},
{
"data": "field_9",
"width": 125
},
{
"data": "field_10",
"width": 125
}
]
I am trying to write an update statement which would update this columnsettings by removing a specific node I specify. For example, I might want to update the columnsettings and remove just the node where data='field_2' as an example.
I have tried a number of things...I believe it will look something like this, but this is wrong.
update public."Bookmarks"
set columnsettings =
jsonb_set(columnsettings, (columnsettings->'data') - 'field_2');
What is the correct syntax to remove a node within a JSONB Array like this?
I did get a version working when there is a single row. This correctly updates the JSONB column and removes the node:
UPDATE public."Bookmarks"
SET columnsettings = columnsettings - (
    SELECT position - 1
    FROM public."Bookmarks",
         jsonb_array_elements(columnsettings) WITH ORDINALITY arr(elem, position)
    WHERE elem->>'data' = 'field_2'
)::int
However, I want it to apply to every row in the table. When there is more than 1 row, I get the error " more than one row returned by a subquery used as an expression"
How do I get this query to update all rows in the table?
UPDATED: the answer provided solved my issue.
I now have another JSONB column where I need to do the same filtering. The structure is a bit different; it looks like this:
{
"filters": [
{
"field": "field_8",
"value": [
1
],
"header": "Colors",
"uitype": 7,
"operator": "searchvalues",
"textvalues": [
"Red"
],
"displayfield": "field_8_options"
}
],
"rowHeight": 1,
"detailViewWidth": 1059
}
I tried using the syntax the same way as follows:
UPDATE public."Bookmarks"
SET tabsettings = filtered_elements.tabsettings
FROM (
SELECT bookmarkid, JSONB_AGG(el) as tabsettings
FROM public."Bookmarks",
JSONB_ARRAY_ELEMENTS(tabsettings) AS el
WHERE el->'filters'->>'field' != 'field_8'
GROUP BY bookmarkid
) AS filtered_elements
WHERE filtered_elements.bookmarkid = public."Bookmarks".bookmarkid;
This gives an error: "cannot extract elements from an object"
I thought I had the syntax correct, but how should this line be formatted?
WHERE el->'filters'->>'field' != 'field_8'
I tried this format as well to get to the array. This doesn't give an error, but it doesn't find any matches, even though there are records.
UPDATE public."Bookmarks"
SET tabsettings = filtered_elements.tabsettings
FROM (
SELECT bookmarkid, JSONB_AGG(el) as tabsettings
FROM public."Bookmarks",
JSONB_ARRAY_ELEMENTS(tabsettings->'filters') AS el
WHERE el->>'field' != 'field_8'
GROUP BY bookmarkid
) AS filtered_elements
WHERE filtered_elements.bookmarkid = public."Bookmarks".bookmarkid;
UPDATED.
This query now seems to work if there is more than one "filter" in the array.
However, if the array has only one element and it should be excluded, the item is not removed.
UPDATE public."Bookmarks"
SET tabsettings = filtered_elements.tabsettings
FROM (
SELECT bookmarkid,
tabsettings || JSONB_BUILD_OBJECT('filters', JSONB_AGG(el)) as tabsettings
FROM public."Bookmarks",
-- this must be an array
JSONB_ARRAY_ELEMENTS(tabsettings->'filters') AS el
WHERE el->>'field' != 'field_8'
GROUP BY bookmarkid
) AS filtered_elements
WHERE filtered_elements.bookmarkid = public."Bookmarks".bookmarkid;
You can deconstruct, filter, and re-construct the JSONB array. Something like this should work:
UPDATE bookmarks
SET columnsettings = filtered_elements.columnsettings
FROM (
SELECT id, JSONB_AGG(el) as columnsettings
FROM bookmarks,
JSONB_ARRAY_ELEMENTS(columnsettings) AS el
WHERE el->>'data' != 'field_2'
GROUP BY id
) AS filtered_elements
WHERE filtered_elements.id = bookmarks.id;
Using JSONB_ARRAY_ELEMENTS, you transform the JSONB array into rows, one per object, which you call el. Then you can access the data attribute to filter out the "field_2" entry. Finally, you group by id to put the remaining values back together, and update the corresponding row.
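A quick way to sanity-check the rebuilt arrays before running the UPDATE is to execute just the inner query on its own (same table and column names as above):
-- preview the filtered arrays without modifying any rows
SELECT id, JSONB_AGG(el) AS new_columnsettings
FROM bookmarks,
     JSONB_ARRAY_ELEMENTS(columnsettings) AS el
WHERE el->>'data' != 'field_2'
GROUP BY id;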
EDIT: If your data is an array nested inside an object, overwrite the object at the specific key:
UPDATE bookmarks
SET tabsettings = filtered_elements.tabsettings
FROM (
SELECT id,
tabsettings || JSONB_BUILD_OBJECT('filters', JSONB_AGG(el)) as tabsettings
FROM bookmarks,
-- this must be an array
JSONB_ARRAY_ELEMENTS(tabsettings->'filters') AS el
WHERE el->>'field' != 'field_2'
GROUP BY id
) AS filtered_elements
WHERE filtered_elements.id = bookmarks.id;
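Note that when every element of filters matches the exclusion, the WHERE clause removes all rows for that bookmark before the GROUP BY, so the outer UPDATE never sees that id; this is consistent with the single-element behaviour described in the question. One possible workaround, sketched here rather than taken from the answer above, is a correlated subquery with COALESCE so that every row is updated and an emptied list falls back to an empty array:
-- rewrite 'filters' on every row; COALESCE covers the case where
-- nothing is left after filtering
UPDATE bookmarks
SET tabsettings = tabsettings || JSONB_BUILD_OBJECT(
        'filters',
        COALESCE(
            (SELECT JSONB_AGG(el)
             FROM JSONB_ARRAY_ELEMENTS(tabsettings -> 'filters') AS el
             WHERE el->>'field' != 'field_2'),
            '[]'::jsonb));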

Store and update jsonb value in Postgres

I have a table such as:
ID | Details
1 | {"name": "my_name", "phone": "1234", "address": "my address"}
2 | {"name": "his_name", "phone": "4321", "address": "his address"}
Here, Details is a jsonb object. I want to add another field named 'tags' to the jsonb, which should contain some particular keys, in this case "name" and "phone". The final state after executing the query should be:
ID | Details
1 | {"tags": {"name": "my_name", "phone": "1234"},"name": "my_name", "phone": "1234", "address":"my address"}
2 | {"tags": {"name": "his_name", "phone": "4321"},"name": "his_name", "phone": "4321", "address":"his address"}
I can think of the following steps to get this done:
Loop over each row and extract the details["name"] and details["phone"] in variables.
Add these variables to the jsonb.
I can't figure out what the corresponding Postgres query for this should be. Please guide me.
Use jsonb_build_object():
update t
set details = jsonb_build_object('tags',
        jsonb_build_object('name', details->>'name', 'phone', details->>'phone')
    ) || details;
Use the concatenation operator, of course!
https://www.postgresql.org/docs/current/functions-json.html
update t1 set details = details || '{"tags": {"name": "my_name"}}' where id = 1
You can extract the keys you are interested in, build a new json value and append that to the column:
update the_table
set details = details || jsonb_build_object('tags',
jsonb_build_object('name', details -> 'name',
'phone', details -> 'phone'));
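To verify the expression before changing any data, it can be run as a plain SELECT first (a sketch reusing the the_table and details names from the statement above):
-- preview the merged value without modifying the rows
select id,
       details || jsonb_build_object('tags',
                      jsonb_build_object('name', details -> 'name',
                                         'phone', details -> 'phone')) as new_details
from the_table;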

Postgres: How to string pattern match query a json column?

I have a column of json type, but I'm wondering how to filter on it in a select, i.e.
select * from fooTable where myjson like "orld";
How would I query for a substring match like the above, say searching for "orld" under the "bar" key?
{ "foo": "hello", "bar": "world"}
I took a look at this documentation but it is quite confusing.
https://www.postgresql.org/docs/current/static/datatype-json.html
Use the ->> operator to get json attributes as text, for example:
with my_table(id, my_json) as (
values
(1, '{ "foo": "hello", "bar": "world"}'::json),
(2, '{ "foo": "hello", "bar": "moon"}'::json)
)
select t.*
from my_table t
where my_json->>'bar' like '%orld'
id | my_json
----+-----------------------------------
1 | { "foo": "hello", "bar": "world"}
(1 row)
Note that you need a placeholder % in the pattern.
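The same operator works if the column is of type jsonb instead of json, and a case-insensitive match can be done with ILIKE instead of LIKE. A small sketch (table and column names are hypothetical; ILIKE is standard Postgres but not part of the example above):
-- case-insensitive substring match on the "bar" attribute of a jsonb column
select *
from my_table
where my_json ->> 'bar' ilike '%orld%';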