I have a jsonb field called passengers with the following structure (note that persons is an array):
{
"adults": {
"count": 2,
"persons": [
{
"age": 45,
"name": "Prof. Kyleigh Walsh II",
"birth_date": "01-01-1975"
},
{
"age": 42,
"name": "Milford Wiza",
"birth_date": "02-02-1978"
}
]
}
}
How may I perform a query against the name field of this JSONB? For example, to select all rows which match the name field Prof?
Here's my rudimentary attempt:
SELECT passengers from opportunities
WHERE 'passengers->adults' != NULL
AND 'passengers->adults->persons->name' LIKE '%Prof';
This returns 0 rows, but as you can see I have one row with the name Prof. Kyleigh Walsh II
This: 'passengers->adults->persons->name' LIKE '%Prof'; checks if the string 'passengers->adults->persons->name' ends with Prof.
Each key for the JSON operator needs to be a separate element, and the column name must not be enclosed in single quotes. So 'passengers->adults->persons->name' needs to be passengers -> 'adults' -> 'persons' -> 'name'
The -> operator returns a jsonb value; since you want a text value, the last operator should be ->>
Also, != null does not work; you need to use is not null.
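The null behavior is easy to verify on its own; comparing with = or != against null never yields true, only null (a standalone sketch):

```sql
-- comparisons with NULL yield NULL, never true:
SELECT NULL = NULL;   -- NULL
SELECT NULL != NULL;  -- NULL
-- the IS [NOT] NULL predicate is the correct test:
SELECT NULL IS NULL;  -- true
```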
SELECT passengers
from opportunities
WHERE passengers -> 'adults' is not NULL
AND passengers -> 'adults' -> 'persons' ->> 'name' LIKE 'Prof%';
The is not null condition isn't really necessary, because that is implied with the second condition. The second condition could be simplified to:
SELECT passengers
from opportunities
WHERE passengers #>> '{adults,persons,name}' LIKE 'Prof%';
But as persons is an array, the above wouldn't work and you need to use a different approach.
With Postgres 9.6 you will need a sub-query to unnest the array elements (and thus iterate over each one).
SELECT passengers
from opportunities
WHERE exists (select *
from jsonb_array_elements(passengers -> 'adults' -> 'persons') as p(person)
where p.person ->> 'name' LIKE 'Prof%');
To match a string at the beginning with LIKE, the wildcard needs to be at the end. '%Prof' would match 'Some Prof' but not 'Prof. Kyleigh Walsh II'
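For reference, here is a self-contained sketch of the unnest approach that can be pasted into psql (the table name and data are taken from the question):

```sql
CREATE TABLE opportunities (passengers jsonb);

INSERT INTO opportunities (passengers) VALUES ('{
  "adults": {
    "count": 2,
    "persons": [
      {"age": 45, "name": "Prof. Kyleigh Walsh II", "birth_date": "01-01-1975"},
      {"age": 42, "name": "Milford Wiza", "birth_date": "02-02-1978"}
    ]
  }
}');

-- unnest the persons array and match each element's name:
SELECT passengers
FROM opportunities
WHERE EXISTS (SELECT *
              FROM jsonb_array_elements(passengers -> 'adults' -> 'persons') AS p(person)
              WHERE p.person ->> 'name' LIKE 'Prof%');
-- returns the single row above
```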
With Postgres 12, you could use a SQL/JSON Path expression:
SELECT passengers
from opportunities
WHERE passengers @? '$.adults.persons[*] ? (@.name like_regex "Prof.*")'
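The same check can be written with the function form jsonb_path_exists, which also lets you pass the prefix as a jsonpath variable; a sketch assuming Postgres 12+ (starts with replaces the regex):

```sql
SELECT passengers
FROM opportunities
WHERE jsonb_path_exists(
        passengers,
        '$.adults.persons[*] ? (@.name starts with $prefix)',
        '{"prefix": "Prof"}');
```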
This is my jsonb data:
"contact": {
    "name": "Jonh",
    "country": ["USA", "UK"]
}
And my query:
SELECT * FROM public.product where contact -> 'country' = ARRAY['USA','UK'];
Executed the query and got this ERROR: operator does not exist: jsonb = text[]
So how do I fix this error?
You need to compare it with a JSONB array:
select *
from product
where contact -> 'country' = '["USA","UK"]'::jsonb;
But this depends on the order of the elements in the array. If you want to test all elements regardless of order, the ?& operator might be better:
where contact -> 'country' ?& array['UK','USA']
That would however also return rows that contain additional elements in the array. If you need to match all elements exactly, regardless of order, you can use the containment operators @> and <@ together:
where contact -> 'country' @> '["USA","UK"]'::jsonb
and contact -> 'country' <@ '["USA","UK"]'::jsonb
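A few standalone checks illustrate how the containment operators behave (runnable in any Postgres session):

```sql
-- element order does not matter for containment:
SELECT '["USA","UK"]'::jsonb @> '["UK","USA"]'::jsonb;       -- true
-- one-sided containment tolerates extra elements:
SELECT '["USA","UK","DE"]'::jsonb @> '["USA","UK"]'::jsonb;  -- true
-- ...which the reverse direction then rejects:
SELECT '["USA","UK","DE"]'::jsonb <@ '["USA","UK"]'::jsonb;  -- false
```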
Use to_jsonb():
SELECT (('{"a": ["x","y","z"]}'::jsonb)->'a') = to_jsonb(array['x','y','z']);
^^ Returns true.
Assume I have userinfo table with person column containing the following jsonb object:
{
"skills": [
{
"name": "php"
},
{
"name": "Python"
}
]
}
In order to get the Python skill, I would write the following query:
select * from userinfo
where person -> 'skills' #> '[{"name":"Python"}]'
It works well, but if I specify '[{"name":"python"}]' in lower case, it doesn't return what I want.
How can I write a case-insensitive query here?
Postgres version is 11.2.
You can do that by unnesting the array with an exists predicate:
select u.*
from userinfo u
where exists (select *
from jsonb_array_elements(u.person -> 'skills') as s(j)
-- make sure to only do this for rows that actually contain an array
where jsonb_typeof(u.person -> 'skills') = 'array'
and lower(s.j ->> 'name') = 'python');
Online example: https://rextester.com/XKVUA73952
AFAIK there is no built-in JSON function for that, so you have to convert the JSON string to lower case (cast it to text, lower-case it, and recast it to jsonb):
WHERE lower(person::text)::jsonb -> 'skills' #> '[{"name":"python"}]'
select *
from (select 1 as id, 'fname' as firstname, 'sname' as surname,
             jsonb_array_elements('{"skills": [{"name": "php"},{"name": "Python"}]}'::jsonb -> 'skills') as skill) p
where p.skill ->> 'name' ilike 'python';
To suit the tables in the question, it would be something like:
select * from (select *, jsonb_array_elements(person->'skills') skill from userinfo) u
where u.skill->>'name' ilike 'python';
Just a note: this will return multiple entries for the same userinfo row if you start looking for multiple skills. If you use the above, you'd want to group by the fields you want returned, or select distinct id, username, etc.
For example (assuming there's an id column in the userinfo table):
select distinct id from (select *, jsonb_array_elements(person->'skills') skill from userinfo) u
where u.skill->>'name' ilike 'python' or u.skill->>'name' ilike 'php';
It all depends on what you want to do.
I have a table with a jsonb column whose documents look like this (simplified):
{
"a": 1,
"rg": [
{
"rti": 2
}
]
}
I want to filter all the rows which have an 'rg' field and at least one 'rti' field in the array.
My current solution is
log->>'rg' ilike '%rti%'
Is there another approach? Probably a faster solution exists.
Another approach would be applying jsonb_each to the jsonb object, and then jsonb_array_elements_text to the value extracted by jsonb_each:
select id, js_value2
from
(
select (js).value as js_value, jsonb_array_elements_text((js).value) as js_value2,id
from
(
select jsonb_each(log) as js, id
from tab
) q
where (js).key = 'rg'
) q2
where js_value2 like '%rti%';
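In the same spirit as the unnesting answers above, a more targeted sketch would test for the rti key directly with the ? operator instead of string-matching the whole document (this assumes rg, when present, is an array of objects, and uses the tab/log names from the answer):

```sql
SELECT id
FROM tab
WHERE jsonb_typeof(log -> 'rg') = 'array'
  AND EXISTS (SELECT 1
              FROM jsonb_array_elements(log -> 'rg') AS e
              WHERE e ? 'rti');  -- does this array element have an 'rti' key?
```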
I have been searching all over to find a way to do this.
I am trying to clean up a table with a lot of duplicated jsonb fields.
There are some examples out there, but as a little twist, I need to exclude one key/value pair in the jsonb field, to get the result I need.
Example jsonb
{
  "main": {
    "orders": {
      "order_id": "1",
      "customer_id": "1",
      "updated_at": "11/23/2017 17:47:13"
    }
  }
}
Compared to:
{
  "main": {
    "orders": {
      "order_id": "1",
      "customer_id": "1",
      "updated_at": "11/23/2017 17:49:53"
    }
  }
}
If I can exclude the "updated_at" key when comparing, the query should find it to be a duplicate, and this (and possibly other) duplicated entries should be deleted, keeping only the first, "original" one.
I have found this query, to try and find the duplicates. But it doesn't take my situation into account. Maybe someone can help structuring this to meet the requirements.
SELECT t1.jsonb_field
FROM customers t1
INNER JOIN (SELECT jsonb_field, COUNT(*) AS CountOf
FROM customers
GROUP BY jsonb_field
HAVING COUNT(*)>1
) t2 ON t1.jsonb_field=t2.jsonb_field
WHERE
t1.customer_id = 1
Thanks in advance :-)
If the updated_at key is always at the same path, then you can remove it before comparing:
SELECT t1.jsonb_field
FROM customers t1
INNER JOIN (SELECT jsonb_field, COUNT(*) AS CountOf
FROM customers
GROUP BY jsonb_field
HAVING COUNT(*)>1
) t2 ON
t1.jsonb_field #-'{main,orders,updated_at}'
=
t2.jsonb_field #-'{main,orders,updated_at}'
WHERE
t1.customer_id = 1
See the additional operators at https://www.postgresql.org/docs/9.5/static/functions-json.html
EDIT
If you don't have #-, you might just cast to text and do a regex replace:
regexp_replace(t1.jsonb_field::text, '"updated_at": "[^"]*?"','')::jsonb
=
regexp_replace(t2.jsonb_field::text, '"updated_at": "[^"]*?"','')::jsonb
I even think you don't need to cast it back to jsonb, but better to be safe.
Mind that the regex matches ANY "updated_at" field (by key) in the json. It should not match data, because it would not match an escaped closing quote \", nor find the colon after it.
Note the regex actually should be '"updated_at": "[^"]*?",?'
But on SQL Fiddle that fails (it may depend on the Postgres build; check with your version, because as far as the regex goes, this is correct).
If the comma is not removed, the cast back to jsonb fails.
You can try '"updated_at": "[^"]*?",'
(no ?): that will remove the comma, but fail if updated_at was the last key in the object.
Worst case, nest the two:
regexp_replace(
    regexp_replace(t1.jsonb_field::text, '"updated_at": "[^"]*?",', ''),
    '"updated_at": "[^"]*?"', '')::jsonb
For PostgreSQL 9.4:
SQL Fiddle only has 9.3 and 9.6, and 9.3 is missing json_object_agg. But the Postgres docs say it is in 9.4, so this should work.
It will only work if all records have objects under the important keys (main -> orders). If main -> orders is a json array or scalar, this may give an error; same if {"main": [1,2]} => error.
Each json_each returns a table with a row for each key in the json.
json_object_agg aggregates them back into a json object.
The case statement filters the one key on each level that needs to be handled.
In the deepest nesting level, it filters out the updated_at row.
On SQL Fiddle, set the query separator to '//'; if you use the psql client, replace the // with ;.
create or replace function foo(x json)
returns jsonb
language sql
as $$
select json_object_agg(key,
case key when 'main' then
(select json_object_agg(t2.key,
case t2.key when 'orders' then
(select json_object_agg(t3.key, t3.value)
from json_each(t2.value) as t3
WHERE t3.key <> 'updated_at'
)
else t2.value
end)
from json_each(t1.value) as t2
)
else t1.value
end)::jsonb
from json_each(x) as t1
$$ //
select foo(x)
from
(select '{ "main":{"orders":{"order_id": "1", "customer_id": "1", "updated_at": "11/23/2017 17:49:53" }}}'::json as x) as t1
x (the argument) may need to be jsonb if that is your column's datatype.
I have just started to play around with jsonb on Postgres, and examples are hard to find online as it is a relatively new concept. I am trying to use jsonb_each_text to print out a table of keys and values, but I get record-style values in a single column.
I have the below json saved as jsonb and am using it to test my queries:
{
"lookup_id": "730fca0c-2984-4d5c-8fab-2a9aa2144534",
"service_type": "XXX",
"metadata": "sampledata2",
"matrix": [
{
"payment_selection": "type",
"offer_currencies": [
{
"currency_code": "EUR",
"value": 1220.42
}
]
}
]
}
I can gain access to offer_currencies array with
SELECT element -> 'offer_currencies' -> 0
FROM test t, jsonb_array_elements(t.json -> 'matrix') AS element
WHERE element ->> 'payment_selection' = 'type'
which gives a result of "{"value": 1220.42, "currency_code": "EUR"}", so if I run the query below (I have to change " to '), I get:
select * from jsonb_each_text('{"value": 1220.42, "currency_code": "EUR"}')
Key | Value
---------------|----------
"value" | "1220.42"
"currency_code"| "EUR"
So, using the above theory, I created this query:
SELECT jsonb_each_text(data)
FROM (SELECT element -> 'offer_currencies' -> 0 AS data
FROM test t, jsonb_array_elements(t.json -> 'matrix') AS element
WHERE element ->> 'payment_selection' = 'type') AS dummy;
But this prints the whole records in one column:
record
---------------------
"(value,1220.42)"
"(currency_code,EUR)"
The primary problem here is that you select the whole row as a single column (PostgreSQL allows that). You can fix that with SELECT (jsonb_each_text(data)).* ....
But don't SELECT set-returning functions; that can often lead to errors (or unexpected results). Instead, use e.g. LATERAL joins/sub-queries:
select first_currency.*
from test t
, jsonb_array_elements(t.json -> 'matrix') element
, jsonb_each_text(element -> 'offer_currencies' -> 0) first_currency
where element ->> 'payment_selection' = 'type'
Note: function calls in the FROM clause are implicit LATERAL joins (here: CROSS JOINs).
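Spelled out with the explicit keywords, the same query reads (equivalent, just more verbose):

```sql
SELECT first_currency.*
FROM test t
CROSS JOIN LATERAL jsonb_array_elements(t.json -> 'matrix') AS element
CROSS JOIN LATERAL jsonb_each_text(element -> 'offer_currencies' -> 0) AS first_currency
WHERE element ->> 'payment_selection' = 'type';
```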
WITH testa AS (
    select jsonb_array_elements(t.json -> 'matrix') -> 'offer_currencies' -> 0 as jsonbcolumn
    from test t)
SELECT d.key, d.value
FROM testa
JOIN jsonb_each_text(testa.jsonbcolumn) d ON true
ORDER BY 1, 2;
testa gets the intermediate jsonb data; the lateral join then transforms the jsonb data into table format.