Postgres 9.6: fetch key-value pairs from a text field

I have a Postgres table where a column named 'arguments' (type text) contains the text below:
testdb=> select arguments from table_name;
arguments
----------------------------------------------------
 array (                          +
   'description' => array (      +
     'country' => 'USA',         +
     'state' => 'california',    +
     'zip' => '12345',           +
     'x' => 'y',                 +
   ),                            +
 )
(1 row)
How do I select an individual key-value pair in a SELECT query? Say I want to extract only state and zip.
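Since the value is plain text (a PHP-style array dump) rather than json or hstore, one option is to pull individual values out with regular expressions. A minimal sketch, assuming the quoting shown above:
SELECT substring(arguments FROM '''state''\s*=>\s*''([^'']*)''') AS state,
       substring(arguments FROM '''zip''\s*=>\s*''([^'']*)''')   AS zip
FROM table_name;
substring(text FROM pattern) returns the part matched by the first parenthesized subexpression, so this yields 'california' and '12345' for the row above.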


Create an index that guarantees uniqueness between union of two columns

Consider the following table:
CREATE TABLE items (
    ident     text NOT NULL,
    label_one text NOT NULL,
    label_two text
);
Is there a way I could create a uniqueness constraint on ident combined with either label_one or label_two?
So for example:
This insert would work:
INSERT INTO items (ident, label_one, label_two) VALUES ('foo', 'a', 'b')
But these inserts would fail:
INSERT INTO items (ident, label_one, label_two) VALUES ('foo', 'a', 'x')
It can't insert 'a' into label_one, because label_one already has the value 'a'
INSERT INTO items (ident, label_one, label_two) VALUES ('foo', 'b', 'x')
It can't insert 'b' into label_one, because label_two already has the value 'b'
INSERT INTO items (ident, label_one, label_two) VALUES ('foo', 'x', 'a')
It can't insert 'a' into label_two, because label_one already has the value 'a'
INSERT INTO items (ident, label_one, label_two) VALUES ('foo', 'x', 'b')
It can't insert 'b' into label_two, because label_two already has the value 'b'
This is basically what The Impaler said in a comment:
Create a secondary table:
CREATE TABLE items_constraint (
    ident text NOT NULL,
    label text NOT NULL,
    UNIQUE (ident, label)
);
Then create a BEFORE INSERT trigger on the original table (pseudocode):
insert into items_constraint (ident, label) values (new.ident, new.label_one);
if ( new.label_two is not null ) {
    insert into items_constraint (ident, label) values (new.ident, new.label_two);
}
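Fleshed out in PL/pgSQL, the trigger might look like this (a sketch; the function and trigger names are made up):
CREATE FUNCTION items_check_labels() RETURNS trigger AS $$
BEGIN
    -- register label_one; the UNIQUE (ident, label) constraint rejects duplicates
    INSERT INTO items_constraint (ident, label) VALUES (NEW.ident, NEW.label_one);
    IF NEW.label_two IS NOT NULL THEN
        INSERT INTO items_constraint (ident, label) VALUES (NEW.ident, NEW.label_two);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_labels_unique
BEFORE INSERT ON items
FOR EACH ROW EXECUTE PROCEDURE items_check_labels();
Note that updates and deletes on items would need corresponding triggers to keep items_constraint in sync.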
Of course, all this smells of denormalized data, and the problem would probably go away once the data is properly normalized.

DB2: cache a variable within a WITH clause

Following the example in "Generate aggregated Json data from Json string in java", I want to cache the result length in a variable within a function:
DECLARE result CLOB;
DECLARE leng INT;

WITH BASE AS (
    SELECT id, item,
           JSON_OBJECT('item' VALUE item,
                       'itemScore' VALUE itemScore,
                       'stage' VALUE stage,
                       'reco' VALUE JSON_OBJECT('product' VALUE product,
                                                'url' VALUE url,
                                                'score' VALUE score
                                                FORMAT JSON)
                       FORMAT JSON ABSENT ON NULL
                       RETURNING VARCHAR(200) FORMAT JSON) ITEM_JSON
    FROM PROD_T),
SIZE AS (SELECT COUNT(*) AS SIZ INTO leng FROM BASE),  -- not working
PROD_OBJS AS (
    SELECT JSON_OBJECT(KEY 'id' VALUE ID,
                       KEY 'itens' VALUE
                           JSON_ARRAY(LISTAGG(ITEM_JSON, ', ') WITHIN GROUP (ORDER BY ITEM) FORMAT JSON)
                       FORMAT JSON) json_objects
    FROM BASE GROUP BY ID)
SELECT JSON_ARRAY(SELECT json_objects FROM PROD_OBJS FORMAT JSON) INTO result FROM SYSIBM.SYSDUMMY1;
I tried this (see the marked line in the query above), but it does not work:
SIZE AS (SELECT COUNT(*) AS SIZ INTO leng FROM BASE)
An INTO clause is allowed only once, in the outermost SELECT of a SELECT INTO statement; it cannot appear inside a common table expression.
Run the statement below as-is to check it, then uncomment the commented-out INTO line in your function (removing the VALUES block, obviously, to work with your real PROD_T table).
WITH BASE AS (
    SELECT id, item,
           JSON_OBJECT('item' VALUE item,
                       'itemScore' VALUE itemScore,
                       'stage' VALUE stage,
                       'reco' VALUE JSON_OBJECT('product' VALUE product,
                                                'url' VALUE url,
                                                'score' VALUE score)
                       ABSENT ON NULL
                       RETURNING VARCHAR(200) FORMAT JSON) ITEM_JSON
    FROM (
        VALUES
          ('id1', 'item1', 'itemScore1', 'stage1', 'product1', 'url1', 'score1')
        , ('id2', 'item2', 'itemScore2', 'stage2', 'product2', 'url2', 'score2')
    ) PROD_T (id, item, itemScore, stage, product, url, score)
),
PROD_OBJS AS (
    SELECT JSON_OBJECT(KEY 'id' VALUE ID,
                       KEY 'itens' VALUE
                           JSON_ARRAY(LISTAGG(ITEM_JSON, ', ') WITHIN GROUP (ORDER BY ITEM) FORMAT JSON)
                       FORMAT JSON) json_objects
    FROM BASE GROUP BY ID)
SELECT
    JSON_ARRAY(SELECT json_objects FROM PROD_OBJS FORMAT JSON)
  , (SELECT COUNT(*) FROM BASE)
--INTO result, leng
FROM SYSIBM.SYSDUMMY1
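For reference, a minimal skeleton of the surrounding compiled SQL function might look like this (a sketch; the function name is made up, and the WITH statement above goes where the comment is):
CREATE OR REPLACE FUNCTION GET_PROD_JSON()
RETURNS CLOB
LANGUAGE SQL
BEGIN
    DECLARE result CLOB;
    DECLARE leng INT;
    -- the WITH ... SELECT ... INTO result, leng FROM SYSIBM.SYSDUMMY1 statement goes here
    RETURN result;
END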

Insert statement, returning columns from source table

For a bulk insert, I have a foreign reference value. As part of the insert, I'd like to return both the reference and the corresponding ids of the newly created records in order to create a mapping record in another system.
Using "RETURNING" works fine for the target table. Other than creating a dummy column in the target table, is there anyway to achieve what I'm trying to do?
I definitely do not want to do row-by-row processing.
NOTE: Currently using version 10.7
In my sample code, I tried "returning id, source.ref", but obviously this isn't supported.
create table test (id serial primary key, name varchar);

insert into test (name)
select source.name
from ( values ('refa', 'name a'), ('refb', 'name b'), ('refc', 'name c') ) source (ref, name)
returning id --, source.ref
Use CTEs:
WITH a AS (
INSERT ...
RETURNING ...
), b AS (
INSERT ...
RETURNING ...
)
SELECT ...
FROM a JOIN b ON ...
You can reference back to the source, if it is unique. Try something like this:
WITH q AS (
    INSERT INTO test (name)
    SELECT source.name
    FROM ( VALUES ('refa', 'name a'), ('refb', 'name b'), ('refc', 'name c')
         ) AS source (ref, name)
    RETURNING * )
SELECT q.id, source.ref
FROM q
JOIN ( VALUES ('refa', 'name a'), ('refb', 'name b'), ('refc', 'name c')
     ) AS source (ref, name) ON q.name = source.name
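With a freshly created test table (the serial assigns ids 1, 2, 3 in insert order), the result looks something like this:
 id | ref
----+------
  1 | refa
  2 | refb
  3 | refc
(3 rows)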
But if you want to add this mapping to another table, you might consider restructuring your query so the insert happens in a data-modifying CTE, something like this (assuming a mapping table with ref and id columns):
WITH ins AS (
    INSERT INTO test (name)
    SELECT source.name
    FROM ( VALUES ('refa', 'name a'), ('refb', 'name b'), ('refc', 'name c') ) AS source (ref, name)
    RETURNING id, name )
INSERT INTO mapping (ref, id)
SELECT source.ref, ins.id
FROM ins JOIN ( VALUES ('refa', 'name a'), ('refb', 'name b'), ('refc', 'name c') ) AS source (ref, name) ON ins.name = source.name;

Set empty array as default value in array_agg in postgresql

I have a view:
CREATE OR REPLACE VIEW microservice_view AS
SELECT
    m.id :: BIGINT,
    m.name,
    m.sending_message_rate :: BIGINT,
    m.max_message_size :: BIGINT,
    m.prefetch_count :: BIGINT,
    (SELECT COALESCE(json_agg(DISTINCT node_id), '[]')
     FROM public.microservice_node
     WHERE microservice_id = m.id) AS nodes,
    (SELECT array_agg(DISTINCT json_build_object('id', transport_id :: INT,
                                                 'is_available', (credentials ->> 'is_available') :: BOOLEAN,
                                                 'username', credentials ->> 'username',
                                                 'password', credentials ->> 'password',
                                                 'default', (default_transport) :: BOOLEAN) :: JSONB)
     FROM transport_microservice
     WHERE microservice_id = m.id) AS transports
FROM public.microservice m
GROUP BY m.id
ORDER BY m.id ASC;
Sometimes transports is null. How can I set an empty array as the default value for array_agg? This field should be either an empty array or an array with data. In some cases I use the array_length function to filter the data.
First of all, I wouldn't mix array_agg with JSON. Notice the double-quote escaping below; also, I use the select array( .. subquery ..) trick here to get an array, which is to some extent equivalent to your array_agg(..):
test=# select array(select '{"zz": 1}'::jsonb);
array
-----------------
{"{\"zz\": 1}"}
-- here you'll get an ARRAY of JSONB, while what you really need is a single JSONB value with an embedded array inside:
test=# select pg_typeof(array(select '{"zz": 1}'::jsonb));
pg_typeof
-----------
jsonb[]
(1 row)
test=# select pg_typeof('[{"zz": 1}]'::jsonb);
pg_typeof
-----------
jsonb
(1 row)
To get a single jsonb value (with a JSON array inside), use the jsonb_agg(..) function.
To substitute a NULL value with some default, you can, as usual, use the standard function coalesce(..):
test=# select coalesce(null::jsonb, '[]'::jsonb);
coalesce
----------
[]
(1 row)
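Putting the two together, the transports subquery in the view could be rewritten roughly like this (a sketch reusing the original column names):
(SELECT coalesce(jsonb_agg(DISTINCT jsonb_build_object(
            'id', transport_id :: INT,
            'is_available', (credentials ->> 'is_available') :: BOOLEAN,
            'username', credentials ->> 'username',
            'password', credentials ->> 'password',
            'default', (default_transport) :: BOOLEAN)), '[]' :: jsonb)
 FROM transport_microservice
 WHERE microservice_id = m.id) AS transports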
Finally, as I see from the additional comments, you need to get the array length of your jsonb -- there are json_array_length(..) and jsonb_array_length(..) functions designed for this purpose; see https://www.postgresql.org/docs/current/static/functions-json.html.
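For example:
test=# select jsonb_array_length('[{"id": 1}, {"id": 2}]'::jsonb);
 jsonb_array_length
--------------------
                  2
(1 row)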

PostgreSQL 9.3: ERROR: argument of WHERE must be type boolean, not type text

I have created the following table with five columns.
CREATE TEMP TABLE t_Test (
    colname text,
    coldt   timestamp,
    coltm   timestamp,
    colaz   text,
    colx    integer
);
Insertion:
INSERT INTO t_Test VALUES
('X','2010-01-01','1900-01-01 01:01:25', 'Green', 1), ('B','2010-01-02','1900-01-01 11:32:15', 'Red', 2)
, ('Z','2010-01-03','1900-01-01 02:01:25', 'Green', 4), ('E','2010-01-04','1900-01-01 11:01:55', 'Red', 5)
, ('G','2010-01-05','1900-01-01 03:05:25', 'Red', 7);
Note: Now I want to show the above data in pivot format, for which I am using the crosstab query shown below.
I want to show only those records which match the date and time condition.
CrossTab Query:
SELECT * FROM crosstab(
'SELECT colname, colaz, colx
FROM t_test
WHERE to_char(coldt,''YYYY-MM-DD'') || '' '' || to_char(coltm,''HH12:MI:SS'')
>= to_char(to_date(''01-01-2000'',''dd-mm-yyyy''), ''yyyy-mm-dd'') || ''11:50:01'''
)
AS ct ("ColName" text, "Green" int, "Red" int);
While executing it, I get the following error:
Error Details:
ERROR: argument of WHERE must be type boolean, not type text
LINE 3: WHERE to_char(coldt,'YYYY-MM-DD') || ' ' || to_char(c...
Place both sides of the comparison in parentheses, as below. The reason: in Postgres 9.4 and earlier, >= had the same precedence as any other operator, including ||, so the WHERE expression was parsed left to right as ((.. || ..) >= ..) || '..', and concatenating the boolean comparison result with the trailing text yields text, hence the error. (Since Postgres 9.5, comparison operators bind more loosely than ||, so the unparenthesized form parses as intended.) Note also the space added before 11:50:01, so both sides use the same 'YYYY-MM-DD HH:MI:SS' layout, and the ORDER BY 1 that crosstab() expects in its input query.
SELECT * FROM crosstab(
    'SELECT colname, colaz, colx
     FROM t_test
     WHERE (to_char(coldt, ''YYYY-MM-DD'') || '' '' || to_char(coltm, ''HH12:MI:SS''))
        >= (to_char(to_date(''01-01-2000'', ''dd-mm-yyyy''), ''yyyy-mm-dd'') || '' 11:50:01'')
     ORDER BY 1'
)
AS ct ("ColName" text, "Green" int, "Red" int);