I have the following table
CREATE TABLE country (
id INTEGER NOT NULL PRIMARY KEY ,
name VARCHAR(50),
extra_info JSONB
);
INSERT INTO country(id,extra_info)
VALUES (1, '{ "name" : "France", "population" : "65000000", "flag_colours": ["red", "blue","white"]}');
INSERT INTO country(id,extra_info)
VALUES (2, '{ "name": "Spain", "population" : "47000000", "borders": ["Portugal", "France"] }');
SELECT extra_info->>'name' as Name, extra_info->>'population' as Population
FROM country
I would like to select the id along with fields from extra_info:
SELECT id,extra_info->>'population' as Population,extra_info->'flag_colours'->>1 as colors
FROM country
This query returns id and population, but the colors column is NULL.
I would also like to use flag_colours in a condition:
SELECT extra_info->>'population' as Population FROM country where extra_info->'flag_colours'->>0
I get this error:
ERROR: argument of WHERE must be type boolean, not type text
LINE 1: ...o->>'population' as Population FROM country where extra_info...
^
SQL state: 42804
Character: 67
How can I fix the two queries?
Update: I wrote my query this way
SELECT *
FROM country
WHERE (extra_info -> 'flag_colours') ? 'red' and (extra_info -> 'flag_colours') ? 'white'
Many thanks to alt-f4; updated answer: https://stackoverflow.com/a/62858683/492293
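For reference, a minimal sketch of both fixes against the table above (the flag_colours and first_colour aliases are my own): the SELECT can return the whole array or a single element by index, and the WHERE clause needs an expression that actually yields a boolean, such as a comparison or the ? containment check used above.
SELECT id,
       extra_info->>'population'      AS population,
       extra_info->'flag_colours'     AS flag_colours,  -- whole JSONB array (NULL if the key is absent)
       extra_info->'flag_colours'->>0 AS first_colour   -- first element as text
FROM country;
SELECT extra_info->>'population' AS population
FROM country
WHERE extra_info->'flag_colours'->>0 = 'red';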
I'm trying out generated columns with Postgres 12. I need to create a table with a generated column based on JSON data; I'm going to receive a "name" field as a key there. However, while doing so I got the error below:
postgres=# create table json_tab2 (data jsonb ,
postgres(# "json_tab2.pname" text generated always as (data ->> "name" ) stored
postgres(# );
ERROR: column "name" does not exist
LINE 2: ...on_tab2.pname" text generated always as (data ->> "name" ) ...
After this I tried to alter an existing table, because it already has JSON data containing a value for the generated column, so it should be able to identify "name" now. This time I ran:
postgres=# alter table json_tab add column Pname text generated always as (data ->> "name") stored
;
ERROR: column "name" does not exist
However, "name" has value here:
data
-------------------------------------------------
{"age": 31, "city": "New York", "name": "John"}
I'm unable to understand what I'm doing wrong here.
The right-hand side of the ->> operator should be a value. In this case, since it's a string, you need to surround it with single quotes ('):
create table json_tab2 (
data jsonb,
pname text generated always as (data ->> 'name') stored
-- Here ---------------------------------^----^
);
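To sanity-check the fixed definition, a quick hypothetical test using the sample document from the question:
insert into json_tab2 (data) values ('{"age": 31, "city": "New York", "name": "John"}');
select pname from json_tab2;  -- returns John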
I have the following table
CREATE TABLE country (
id INTEGER NOT NULL PRIMARY KEY ,
name VARCHAR(50),
extra_info JSONB
);
INSERT INTO country(id,extra_info)
VALUES (1, '{ "name" : "France", "population" : "65000000", "flag_colours": ["red", "blue","white"]}');
INSERT INTO country(id,extra_info)
VALUES (2, '{ "name": "Spain", "population" : "47000000", "borders": ["Portugal", "France"] }');
and I can add an element to the array like this:
UPDATE country SET extra_info = jsonb_set(extra_info, '{flag_colours,999999999}', '"green"', true);
and update an existing element like this:
UPDATE country SET extra_info = jsonb_set(extra_info, '{flag_colours,0}', '"yellow"');
I would now like to delete an array item with a known index or value.
How would I delete a flag_colours element by index or by name?
Update
Delete by index
UPDATE country SET extra_info = extra_info #- '{flag_colours,-1}'
How can I delete by name?
Since JSONB arrays do not offer a straightforward way to remove an item by value, we can approach this differently: unnest the array, filter the elements, and stitch things back together. I have put together a code example with ordered comments to help.
CREATE TABLE new_country AS
-- 4. Return a new array (for immutability) that contains the new desired set of colors
SELECT id, name, jsonb_set(extra_info, '{flag_colours}', new_colors, FALSE)
FROM country
-- 3. Use Lateral join to apply this to every row
LEFT JOIN LATERAL (
-- 1. First unnest the desired elements from the Json array as text (to enable filtering)
WITH prep AS (SELECT jsonb_array_elements_text(extra_info -> 'flag_colours') colors FROM country)
SELECT jsonb_agg(colors) new_colors -- 2. Form a new jsonb array after filtering
FROM prep
WHERE colors <> 'red') lat ON TRUE;
If you would like to update only the affected column without recreating the main table, you can:
UPDATE country
SET extra_info=new_extra_info
FROM new_country
WHERE country.id = new_country.id;
I have broken it down into two queries to improve readability; however, you can also use a subquery instead of creating a new table (new_country).
With the subquery, it should look like:
UPDATE country
SET extra_info=new_extra_info
FROM (SELECT id, name, jsonb_set(extra_info, '{flag_colours}', new_colors, FALSE) new_extra_info
FROM country
-- 3. Use Lateral join to scale this across tables
LEFT JOIN LATERAL (
-- 1. First unnest the desired elements from the Json array as text (to enable filtering)
WITH prep AS (SELECT jsonb_array_elements_text(extra_info -> 'flag_colours') colors FROM country)
SELECT jsonb_agg(colors) new_colors -- 2. Form a new jsonb array after filtering
FROM prep
WHERE colors <> 'red') lat ON TRUE) new_country
WHERE country.id = new_country.id;
Additionally, you may filter rows (as of PostgreSQL 9.4) via:
SELECT *
FROM country
WHERE (extra_info -> 'flag_colours') ? 'red'
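Worth noting: for this specific case (removing a string element by value), the jsonb "- text" operator, which deletes matching string values from a JSON array, avoids the lateral join entirely. A minimal sketch against the country table:
UPDATE country
SET extra_info = jsonb_set(extra_info, '{flag_colours}',
                           (extra_info -> 'flag_colours') - 'red')
WHERE extra_info -> 'flag_colours' ? 'red';
-- The WHERE guard matters: jsonb_set returns NULL when the new value is NULL,
-- which would happen for rows that have no flag_colours array.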
Actually, PostgreSQL 12 allows you to do it without a LATERAL join:
SELECT jsonb_path_query_array(j #> '{flag_colours}', '$[*] ? (@ != "red")'),
       jsonb_set(j, '{flag_colours}', jsonb_path_query_array(j #> '{flag_colours}', '$[*] ? (@ != "red")'))
FROM (SELECT '{ "name" : "France", "population" : "65000000",
               "flag_colours": ["red", "blue","white"]}'::jsonb AS j
     ) AS j
WHERE j @? '$.flag_colours[*] ? (@ == "red")';
jsonb_path_query_array | jsonb_set
------------------------+---------------------------------------------------------------------------------
["blue", "white"] | {"name": "France", "population": "65000000", "flag_colours": ["blue", "white"]}
(1 row)
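The same path expression can be applied as an UPDATE on the country table from above (a sketch, assuming PostgreSQL 12+ and the sample data shown earlier):
UPDATE country
SET extra_info = jsonb_set(
        extra_info,
        '{flag_colours}',
        jsonb_path_query_array(extra_info -> 'flag_colours', '$[*] ? (@ != "red")'))
WHERE extra_info @? '$.flag_colours[*] ? (@ == "red")';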
I am trying to upsert into a table with a jsonb field, based on multiple JSON properties within that field, using the query below:
insert into testtable(data) values('{
"key": "Key",
"id": "350B79AD",
"value": "Custom"
}')
On conflict(data ->>'key',data ->>'id')
do update set data =data || '{"value":"Custom"}'
WHERE data ->> 'key' ='Key' and data ->> 'appid'='350B79AD'
The above query throws the error below:
ERROR: syntax error at or near "->>"
LINE 8: On conflict(data ->>'key',data ->>'id')
Am I missing something obvious here?
I suppose you want the id and key combination to be unique in the table. Then you need a unique constraint (or index) on them:
create unique index on testtable ( (data->>'key'), (data->>'id') );
and also wrap each expression in the ON CONFLICT clause in an extra set of parentheses:
on conflict( (data->>'key'), (data->>'id') )
and qualify the jsonb column name (data) with the table name (testtable) wherever it appears after DO UPDATE SET or in the WHERE clause, i.e. testtable.data. So, convert your statement to:
insert into testtable(data) values('{
"key": "Key",
"id": "350B79AD",
"value": "Custom1"
}')
on conflict( (data->>'key'), (data->>'id') )
do update set data = testtable.data || '{"value":"Custom2"}'
where testtable.data ->> 'key' ='Key' and testtable.data ->> 'id'='350B79AD';
By the way, data ->> 'appid' = '350B79AD' was changed to data ->> 'id' = '350B79AD' (appid -> id), to match the key actually used in the inserted JSON.
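For completeness, the question does not show the table itself; a minimal setup that reproduces the statement above might look like this (the table definition is an assumption). Running the converted INSERT twice then demonstrates the upsert: the first run inserts the row, the second hits the unique index and rewrites "value" through the DO UPDATE branch.
create table testtable (data jsonb);
create unique index on testtable ( (data->>'key'), (data->>'id') );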
I am trying to find a way to concatenate JSONB values using Postgres.
For example, I have two rows:
INSERT INTO "testConcat" ("id", "json_data", "groupID")
VALUES (1, {"name": "JSON_name1", "value" : "Toto"}, 5);
INSERT INTO "testConcat" ("id", "json_data", "groupID")
VALUES (2, {"name": "JSON_name2"}, 5);
I would like to do something like:
SELECT GROUP_CONCAT(json_data)
FROM testConcat
GROUP BY groupID
and obtain a result like:
[{"name": "JSON_name1", "value": "Toto"}, {"name": "JSON_name2"}]
I tried creating an aggregate function, but when the same key appears in multiple JSON values, they are merged and only the last value is preserved:
DROP AGGREGATE IF EXISTS jsonb_merge(jsonb);
CREATE AGGREGATE jsonb_merge(jsonb) (
SFUNC = jsonb_concat(jsonb, jsonb),
STYPE = jsonb,
INITCOND = '{}'
);
When I use this function like this:
SELECT jsonb_merge(json_data)
FROM testConcat
GROUP BY groupID
The result is:
{"name": "JSON_name2", "value": "Toto"}
This is not what I want, because
{"name": "JSON_name1"}
is missing. The function preserves only the distinct keys and merges duplicate keys, keeping only the last value.
Thanks for any help
If there is always only a single key/value pair in the JSON document, you can do this without a custom aggregate function:
SELECT groupid, jsonb_object_agg(k,v order by id)
FROM testconcat, jsonb_each(json_data) as x(k,v)
group by groupid;
The "last" value is defined by the ordering on the id column
The custom aggregate function might be faster though.
I finally found a solution; even if it is not the best, it seems to work.
I created the aggregate function as previously described, with a small modification:
DROP AGGREGATE IF EXISTS jsonb_merge(jsonb);
CREATE AGGREGATE jsonb_merge(jsonb) (
SFUNC = jsonb_concat(jsonb, jsonb),
STYPE = jsonb,
INITCOND = '[]'
);
I just replaced:
INITCOND = '{}'
with
INITCOND = '[]'
and then used it as before:
SELECT jsonb_merge(json_data)
FROM testConcat
GROUP BY groupID
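For reference, the built-in jsonb_agg aggregate produces the same array-of-objects result without defining a custom aggregate (note the quoted mixed-case identifiers, since the table was created with quotes):
SELECT jsonb_agg(json_data ORDER BY id) AS json_data
FROM "testConcat"
GROUP BY "groupID";
-- [{"name": "JSON_name1", "value": "Toto"}, {"name": "JSON_name2"}]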
I'm having issues with the query below in my iPhone app. When the app runs the query it takes quite a while to process the result, maybe around a second or so... I was wondering if the query can be optimised in any way? I'm using the FMDB framework to process all my SQL.
select pd.discounttypeid, pd.productdiscountid, pd.quantity, pd.value, p.name, p.price, pi.path
from productdeals as pd, product as p, productimages as pi
where pd.productid = 53252
and pd.discounttypeid == 8769
and pd.productdiscountid = p.parentproductid
and pd.productdiscountid = pi.productid
and pi.type = 362
order by pd.id
limit 1
My CREATE statements for the tables are below:
CREATE TABLE "ProductImages" (
"ProductID" INTEGER,
"Type" INTEGER,
"Path" TEXT
)
CREATE TABLE "Product" (
"ProductID" INTEGER PRIMARY KEY,
"ParentProductID" INTEGER,
"levelType" INTEGER,
"SKU" TEXT,
"Name" TEXT,
"BrandID" INTEGER,
"Option1" INTEGER,
"Option2" INTEGER,
"Option3" INTEGER,
"Option4" INTEGER,
"Option5" INTEGER,
"Price" NUMERIC,
"RRP" NUMERIC,
"averageRating" INTEGER,
"publishedDate" DateTime,
"salesLastWeek" INTEGER
)
CREATE TABLE "ProductDeals" (
"ID" INTEGER,
"ProductID" INTEGER,
"DiscountTypeID" INTEGER,
"ProductDiscountID" INTEGER,
"Quantity" INTEGER,
"Value" INTEGER
)
Do you have indexes on the foreign key columns (productimages.productid and product.parentproductid), and on the columns you use to find the right product deal (productdeals.productid and productdeals.discounttypeid)? If not, that could be the cause of poor performance.
You can create them like this:
CREATE INDEX idx_images_productid ON productimages(productid);
CREATE INDEX idx_products_parentid ON product(parentproductid);
CREATE INDEX idx_deals ON productdeals(productid, discounttypeid);
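Once the indexes exist, SQLite's EXPLAIN QUERY PLAN can be used to confirm they are actually picked up; prefix the original query with it and the output should mention the idx_* names instead of a full table scan:
EXPLAIN QUERY PLAN
select pd.discounttypeid, pd.productdiscountid, pd.quantity, pd.value, p.name, p.price, pi.path
from productdeals as pd, product as p, productimages as pi
where pd.productid = 53252
  and pd.discounttypeid = 8769
  and pd.productdiscountid = p.parentproductid
  and pd.productdiscountid = pi.productid
  and pi.type = 362
order by pd.id
limit 1;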
The query below could help reduce the execution time; additionally, make sure the indexes on the relevant fields are created correctly to speed up your query.
select pd.discounttypeid, pd.productdiscountid, pd.quantity, pd.value, p.name, p.price, pi.path
from productdeals pd
join product p on pd.productdiscountid = p.parentproductid
join productimages pi on pd.productdiscountid = pi.productid
where pd.productid = 53252
  and pd.discounttypeid = 8769
  and pi.type = 362
order by pd.id
limit 1
Thanks