I have jsonb data like this:
id | data
1 | {"last": "smith", "first": "john", "title": "mr"}
2 | {"last": "smith jr", "first": "john", "title": "mr"}
3 | {"last": "roberts", "first": "Julia", "title": "miss"}
4 | {"last": "2nd", "first": "john smith", "title": "miss"}
I need to search for records that match "John smith"; in this case, IDs 1, 2 and 4.
I cannot search each key => value pair separately; I need a concatenated entry per record to check against the incoming request.
I have tried:
select * from contacts where jsonb_concat(data->>'title'::TEXT || data->>'first'::TEXT || data->>'last'::TEXT) ilike "John smith";
This doesn't work because I am trying to concatenate text values, not a jsonb object. Is there any way to concatenate jsonb values specified by keys?
I solved it myself with some research. My query is:
select * from contacts where trim(regexp_replace(CONCAT(data->>'title'::TEXT,' ',data->>'first'::TEXT,' ',data->>'last'::TEXT), '\s+', ' ', 'g')) ilike '%john%';
On PostgreSQL 9.1 and later you can use '\s+' directly; on older versions you may need E'\\s+' instead, because backslashes in ordinary string literals were treated as escapes by default.
Hope this helps someone.
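If you prefer to avoid the regexp cleanup, a slightly tidier variant (just a sketch against the same contacts table, assuming the stored values contain no stray whitespace of their own) is concat_ws, which skips NULLs and puts exactly one separator between the remaining values:
-- concat_ws ignores NULL arguments and inserts a single space between values,
-- so no regexp_replace/trim pass is needed afterwards
SELECT *
FROM contacts
WHERE concat_ws(' ', data->>'title', data->>'first', data->>'last') ILIKE '%john smith%';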
I have a postgres table with a jsonb column like this:
create table if not exists doc
(
id uuid not null
constraint pkey_doc_id
primary key,
data jsonb not null
);
INSERT INTO doc (id, data) VALUES ('3cf40366-ea58-402d-b63b-c9d6fdf99ec8', '{"Id": "3cf40366-ea58-402d-b63b-c9d6fdf99ec8", "Tags": [{"Key": "inoivce", "Value": "70086"},{"Key": "customer", "Value": "100233"}] }' );
INSERT INTO doc (id, data) VALUES ('ae2d1119-adb9-41d2-96e9-53445eaf97ab', '{"Id": "ae2d1119-adb9-41d2-96e9-53445eaf97ab", "Tags": [{"Key": "project", "Value": "12345"},{"Key": "customer", "Value": "100233"}]}' );
Tags.Key in the first row contains a typo inoivce which I want to fix to invoice:
{
"Id": "3cf40366-ea58-402d-b63b-c9d6fdf99ec8",
"Tags": [{
"Key": "inoivce",
"Value": "70086"
},{
"Key": "customer",
"Value": "100233"
}]
}
I tried this:
update doc set data = jsonb_set(
data,
'{"Tags"}',
$${"Key":"invoice"}$$
) where data @> '{"Tags": [{ "Key":"inoivce"}]}';
The typo gets fixed but I'm losing the other Tags elements in the array:
{
"Id": "3cf40366-ea58-402d-b63b-c9d6fdf99ec8",
"Tags": [{"Key": "invoice"}]
}
How can I fix the typo without removing the other elements of the Tags array?
Dbfiddle for repro.
One possible solution, not so obvious: we need a CTE here because the idea is to loop over the 'Tags' jsonb array elements and rebuild the array with the jsonb_agg aggregate function, and the SET clause of an UPDATE doesn't accept aggregate functions.
WITH list AS (
  SELECT d.id,
         (d.data - 'Tags') || jsonb_build_object(
           'Tags',
           jsonb_agg(
             jsonb_set(e.content, '{Key}'::text[],
                       to_jsonb(replace(e.content->>'Key', 'inoivce', 'invoice')))
             ORDER BY e.id)
         ) AS data
  FROM doc AS d
  CROSS JOIN LATERAL jsonb_array_elements(d.data->'Tags') WITH ORDINALITY AS e(content, id)
  WHERE d.data @? '$.Tags[*] ? (exists(@ ? (@.Key == "inoivce")))'
  GROUP BY d.id, d.data
)
UPDATE doc AS d
SET data = l.data
FROM list AS l
WHERE d.id = l.id
See the result in dbfiddle.
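To sanity-check the update, a small verification query (just a sketch against the same doc table) could be:
-- The repaired row should still contain both Tags elements
SELECT id, jsonb_pretty(data) AS data
FROM doc
WHERE data->'Tags' @> '[{"Key": "invoice"}]';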
I have a jsonb column where each row's data has a name and a last_updated key, among others. How would I go about creating a query that leaves only one row per name per day?
i.e. this:
id | data
1 | {"name": "foo1", "last_updated": "2019-10-06T09:29:30.000Z"}
2 | {"name": "foo1", "last_updated": "2019-10-06T01:29:30.000Z"}
3 | {"name": "foo1", "last_updated": "2019-10-07T01:29:30.000Z"}
4 | {"name": "foo2", "last_updated": "2019-10-06T09:29:30.000Z"}
5 | {"name": "foo2", "last_updated": "2019-10-06T01:29:30.000Z"}
6 | {"name": "foo2", "last_updated": "2019-10-06T02:29:30.000Z"}
becomes:
id | data
1 | {"name": "foo1", "last_updated": "2019-10-06T09:29:30.000Z"}
3 | {"name": "foo1", "last_updated": "2019-10-07T01:29:30.000Z"}
4 | {"name": "foo2", "last_updated": "2019-10-06T09:29:30.000Z"}
This query will run on some 9 million rows, on roughly 300 names.
Try something like this:
Table
create table test (
id serial,
data jsonb
);
Data
insert into test (data) values
('{"name": "foo1", "last_updated": "2019-10-06T09:29:30.000Z"}'),
('{"name": "foo1", "last_updated": "2019-10-06T01:29:30.000Z"}'),
('{"name": "foo1", "last_updated": "2019-10-07T01:29:30.000Z"}'),
('{"name": "foo2", "last_updated": "2019-10-06T09:29:30.000Z"}'),
('{"name": "foo2", "last_updated": "2019-10-06T01:29:30.000Z"}'),
('{"name": "foo2", "last_updated": "2019-10-06T02:29:30.000Z"}');
Query
with latest as (
select data->>'name' as name, max(data->>'last_updated') as last_updated
from test
group by data->>'name'
)
delete from test t
where not exists (
select 1 from latest
where t.data->>'name' = name
and t.data->>'last_updated' = last_updated
);
select * from test;
Example
https://dbfiddle.uk/?rdbms=postgres_10&fiddle=2415e6f2c9c7980e69d178a331120dcd
You might have to index your jsonb column, e.g. create index on test ((data->>'name')); you could do the same for last_updated.
I am assuming that a given name never has two identical last_updated values.
If that assumption is not true, you could try this:
with ranking as (
select
row_number() over (partition by data->>'name' order by data->>'last_updated' desc) as sr,
x.*
from test x
)
delete from test t
where not exists (
select 1 from ranking
where sr = 1
and id = t.id
);
In this case, we first give a serial number (sr) to each name's records; the row with the latest last_updated for each name gets sr 1.
Then we ask the database to delete all records whose id doesn't match an sr = 1 row.
Example: https://dbfiddle.uk/?rdbms=postgres_10&fiddle=dba1879a755ed0ec90580352f82554ee
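Both deletes above keep a single latest row per name overall. If you need strictly one surviving row per name per day, as the sample output suggests, a sketch that also partitions on the date part of last_updated (assuming it is always an ISO-8601 string) could be:
with ranking as (
  select
    row_number() over (
      partition by data->>'name',
                   left(data->>'last_updated', 10)  -- the YYYY-MM-DD part of the timestamp
      order by data->>'last_updated' desc
    ) as sr,
    x.*
  from test x
)
delete from test t
where not exists (
  select 1 from ranking
  where sr = 1
  and id = t.id
);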
I have a field extras of type jsonb in my products table, and I need to get every distinct key together with its distinct values across all product rows. Example data in the extras field:
{"SIZE": "110/116", "COLOUR": "Vit", "GENDER": "female", "AGE_GROUP": "Kids", "ALTERNATIVE_IMAGE": "some_path"}
For now I use a query like this:
select DISTINCT e.key, array_agg(DISTINCT e.value) as fields
from products AS p
join jsonb_each_text(p.extras) e on true
GROUP BY e.key
And I get a response like this (only a small part with some keys is shown; the full response contains all keys):
[
{
"key": "AGE_GROUP",
"fields": "{Adult,children,Kids}"
},
{
"key": "GENDER",
"fields": "{female,male,man}"
}
]
How can I change the fields alias into a real array?
like this
[
{
"AGE_GROUP": ["Adult","children","Kids"]
},
{
"GENDER": ["female","male","man"]
}
]
Or maybe it would be even better like this:
[
"some_alias": [{"AGE_GROUP": "Adult", "AGE_GROUP": "children", "AGE_GROUP": "Kids"}],
"some_alias": [{"GENDER": "female", "GENDER": "male", "GENDER": "man"}]
]
This will get you the former form I think:
select jsonb_agg(jsonb_build_object(k,v)) from (
select DISTINCT e.key, jsonb_agg(DISTINCT e.value) as fields
from products AS p
join jsonb_each_text(p.extras) e on true
GROUP BY e.key
) b(k,v);
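If a single object keyed by each field name would also work (close to your second wish), one more level of aggregation with jsonb_object_agg is a possible sketch:
-- Collapse everything into one jsonb object: {"AGE_GROUP": [...], "GENDER": [...], ...}
select jsonb_object_agg(k, v) as extras_summary
from (
  select e.key, jsonb_agg(distinct e.value) as fields
  from products as p
  join jsonb_each_text(p.extras) e on true
  group by e.key
) b(k, v);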
Best regards,
Bjarni
I want to retrieve data by filtering on a specific field; the column stores an array of objects. I also want to add a new object to it.
CREATE TABLE justjson ( id INTEGER, doc JSONB);
INSERT INTO justjson VALUES ( 1, '[
{
"name": "abc",
"age": "22"
},
{
"name": "def",
"age": "23"
}
]');
How can I retrieve the data where age is greater than or equal to 23?
e.g. using jsonb_array_elements:
t=# with a as (select *,jsonb_array_elements(doc) k from justjson)
select k from a where (k->>'age')::int >= 23;
k
------------------------------
{"age": "23", "name": "def"}
(1 row)
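The question also asks about adding a new object to the array. Since doc here is itself a jsonb array, a minimal sketch uses the || concatenation operator (the new name and age values are made up for illustration):
-- Append one more object to the existing array in doc
UPDATE justjson
SET doc = doc || '[{"name": "ghi", "age": "25"}]'::jsonb
WHERE id = 1;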
I have the following:
SELECT *
FROM (
SELECT '{"people": [{"name": "Bob", "occupation": "janitor"}, {"name": "Susan", "occupation": "CEO"}]}'::jsonb as data
) as b
WHERE data->'people' @> '[{"name":"Bob"}]'::jsonb;
I am filtering for the object '{"name": "Bob", "occupation": "janitor"}'
How do I return Bob's occupation ("janitor")?
SELECT data->'people'->>'occupation'
FROM (
SELECT '{"people": [{"name": "Bob", "occupation": "janitor"}, {"name": "Susan", "occupation": "CEO"}]}'::jsonb as data
) as b
WHERE data->'people' @> '[{"name":"Bob"}]'::jsonb;
returns
?column?
--------
NULL
Looking for:
occupation
----------
janitor
If you don't care about anything else on the row containing the jsonb, you can unnest the array elements and select from them directly:
SELECT data->>'occupation' as occupation
FROM (
SELECT jsonb_array_elements(
'{"people":
[
{"name": "Bob", "occupation": "janitor"},
{"name": "Susan", "occupation": "CEO"}
]
}'::jsonb->'people') as data) as b
WHERE data @> '{"name":"Bob"}';
Results
occupation
-----------
janitor
(1 row)
Your "people" element is an array. You can get at the elements of an array with the jsonb_array_elements function. After that, you can just filter on person->>'name':
SELECT person->>'occupation' as occupation
FROM (
SELECT person.value as person
FROM (
SELECT
'{"people":
[
{"name": "Bob", "occupation": "janitor"},
{"name": "Susan", "occupation": "CEO"}
]
}'::jsonb as data
) a
CROSS JOIN
jsonb_array_elements(data->'people') as person
) b
WHERE person->>'name' = 'Bob';
Note that ->> returns text, while -> returns jsonb.
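On PostgreSQL 12 or later (an assumption, the question doesn't state a version), the same filter can be expressed with a single jsonpath call, for example:
-- Return the occupation of the people[] element whose name is "Bob";
-- #>> '{}' unwraps the resulting jsonb string into plain text
SELECT jsonb_path_query_first(
         data,
         '$.people[*] ? (@.name == "Bob").occupation'
       ) #>> '{}' AS occupation
FROM (
  SELECT '{"people": [{"name": "Bob", "occupation": "janitor"}, {"name": "Susan", "occupation": "CEO"}]}'::jsonb AS data
) AS b;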