Field name: groups
Value:
[{"GroupId": "abcd-41234", "GroupName": "testingrule"}]
How do I extract the GroupId and GroupName values as separate fields using a SELECT statement?
These are my failed attempts:
select groups->>'GroupId' as id,
       groups->>'GroupName' as name
from table_name

select (groups::json->>'GroupId')::json->>'id' as id
from table_name

select groups::json->>'GroupId' as id
from table_name
I assume that you are using the jsonb data type. If not, change your table definition.
If you want the values for all array elements for all rows in the table, you would use a lateral join like this:
SELECT exp.j ->> 'GroupId'   AS groupid,
       exp.j ->> 'GroupName' AS groupname
FROM table_name AS t
CROSS JOIN LATERAL jsonb_array_elements(t.groups) AS exp(j);
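If the groups column is stored as text or json rather than jsonb, a cast inside the lateral join is enough; a minimal sketch, assuming every row holds a valid JSON array:
SELECT exp.j ->> 'GroupId'   AS groupid,
       exp.j ->> 'GroupName' AS groupname
FROM table_name AS t
-- the cast assumes t.groups contains valid JSON text in every row
CROSS JOIN LATERAL jsonb_array_elements(t.groups::jsonb) AS exp(j);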
I want to build a table where each row contains a string and the number of rows where that string appears as a prefix
Basically I want
select count(*) from "myTable" where tsfield @@ (p||':*')::tsquery
for each value of p in an array.
How can I write a query to do this?
Unnest the array and join:
SELECT arr.p, count(*)
FROM "myTable"
JOIN unnest('{...}'::text[]) AS arr(p)
  ON tsfield @@ (arr.p || ':*')::tsquery
GROUP BY arr.p;
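For example, with a concrete (purely illustrative) array of prefixes:
SELECT arr.p, count(*)
FROM "myTable"
-- 'foo', 'bar', 'baz' are placeholder prefixes
JOIN unnest(ARRAY['foo', 'bar', 'baz']) AS arr(p)
  ON tsfield @@ (arr.p || ':*')::tsquery
GROUP BY arr.p;
Note that prefixes matching no rows disappear from the result entirely; swapping the join order to FROM unnest(...) LEFT JOIN "myTable" and counting a non-null column of the table would keep them with a count of 0.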
Given a list of repeating ids, I need to fetch some additional data to populate an xls spreadsheet. An IN clause returns only one match per id, but I need a row for each occurrence of an id. I looked at PIVOT, thinking I could create a select list and then do an inner join.
SELECT m.Id, m.LegalName, m.OtherId
FROM MyTable m
WHERE m.OtherId IN (1,2,1,1,3,1,4,4,2,1)
You can use a VALUES clause:
SELECT t.id AS OtherId, m.id, m.LegalName
FROM (VALUES (1),(2),(1),(1),(3),(1),(4),(4),(2),(1)) AS t(id)
INNER JOIN MyTable m
        ON m.OtherId = t.id;
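The join returns rows in no guaranteed order. If the spreadsheet rows must follow the order of the input list, one option is unnest ... WITH ORDINALITY (PostgreSQL 9.4+); a sketch:
SELECT t.id AS OtherId, m.id, m.LegalName
-- ord is the 1-based position of each element in the input array
FROM unnest(ARRAY[1,2,1,1,3,1,4,4,2,1]) WITH ORDINALITY AS t(id, ord)
JOIN MyTable m ON m.OtherId = t.id
ORDER BY t.ord;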
My table is something like:
CREATE TABLE table1
(
_id text,
name text,
data_type int,
data_value int,
data_date timestamp -- insertion time
);
Now, due to a system bug, many duplicate entries were created, and I need to remove the duplicates and keep only the entries that are unique across all columns except data_date, since it is a system-generated timestamp.
My query to do that is something like:
DELETE FROM table1 A
USING ( SELECT _id, name, data_type, data_value, MIN(data_date) min_date
FROM table1
GROUP BY _id, name, data_type, data_value
HAVING count(data_date) > 1) B
WHERE A._id = B._id
AND A.name = B.name
AND A.data_type = B.data_type
AND A.data_value = B.data_value
AND A.data_date != B.min_date;
Although this query works, the table has millions of records and I want a faster way. My idea is to create a new column whose value is derived from a window partitioned by [_id, name, data_type, data_value], i.e. the columns in the GROUP BY, but I could not find a way to create such a column.
I would appreciate it if anyone could suggest a way to do this.
Edit 1:
One more thing: I don't want to use a CTE or a subquery to populate this new column, because that would be the same as my existing query.
The best way is simply to create a new table without the duplicated records (the new table name here is illustrative):
CREATE TABLE table1_dedup AS
SELECT _id, name, data_type, data_value,
       MIN(data_date) AS data_date  -- keep the earliest timestamp per group
FROM table1
GROUP BY _id, name, data_type, data_value;
Alternatively, you can compute a rank and then filter, but a subquery is needed:
RANK() OVER (PARTITION BY _id, name, data_type, data_value ORDER BY data_date ASC) AS r
Then keep only the rows where r = 1.
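Put together, and since the table has no primary key, the system column ctid can identify the rows to delete; a sketch (RANK keeps ties, so ROW_NUMBER() would be the safer choice if two duplicates can share the same data_date):
DELETE FROM table1
WHERE ctid IN (
    SELECT ctid
    FROM (SELECT ctid,
                 RANK() OVER (PARTITION BY _id, name, data_type, data_value
                              ORDER BY data_date ASC) AS r
          FROM table1) ranked
    -- r = 1 marks the earliest row per group; everything else is a duplicate
    WHERE r > 1
);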
I have a query like this:
SELECT table1.*,
       sum(table2.amount) AS totalamount
FROM table1
JOIN table2 ON table1.key = table2.key
GROUP BY table1.*;
I got the error: column "table1.key" must appear in the GROUP BY clause or be used in an aggregate function.
Is there any way to group by "all" fields?
There is no shortcut syntax for grouping by all columns, but it's probably not necessary in the described case. If the key column is the primary key, it's enough to group by it alone, because PostgreSQL treats the remaining columns of table1 as functionally dependent on it:
GROUP BY table1.key;
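Applied to the query above (assuming key really is the primary key of table1; functional dependency on the primary key is recognized since PostgreSQL 9.1):
SELECT table1.*,
       sum(table2.amount) AS totalamount
FROM table1
JOIN table2 ON table1.key = table2.key
-- legal because every other table1 column is determined by the primary key
GROUP BY table1.key;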
You have to list in GROUP BY every selected column that is not part of an aggregate function (SUM, COUNT, etc.):
select c1, c2, c4, sum(c3)
from t
group by c1, c2, c4;
A shortcut that avoids writing the column names again in GROUP BY is to refer to them by their position in the select list:
select c1, c2, c4, sum(c3)
from t
group by 1, 2, 3;
I found another way to solve this; it's not perfect, but maybe it's useful:
SELECT string_agg(column_name::character varying, ',') AS columns
FROM information_schema.columns
WHERE table_schema = 'your_schema'
  AND table_name = 'your_table';
Then apply the result of this query to the main query like this:
$columns = $result[0]["columns"];
SELECT table1.*,
       sum(table2.amount) AS totalamount
FROM table1
JOIN table2 ON table1.key = table2.key
GROUP BY $columns;
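One caveat with this approach: column names that need quoting (mixed case, spaces) would break the interpolated query. Wrapping them in quote_ident guards against that; a variant of the catalog query above:
SELECT string_agg(quote_ident(column_name::text), ', ') AS columns
FROM information_schema.columns
WHERE table_schema = 'your_schema'
  AND table_name = 'your_table';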
I have two tables like this:
create table product (
id serial primary key,
name text
);
create table selectedattribute (
id serial primary key,
product integer references product,
attribute text,
val text
);
and I'm creating a materialized view with this SELECT query:
select product.name,
       jsonb_build_object(
           'color',    COALESCE(jsonb_agg(val) FILTER (WHERE attribute='color'), '[]'),
           'diameter', COALESCE(jsonb_agg(val) FILTER (WHERE attribute='diameter'), '[]')
       )
from product
left join selectedattribute on product.id = selectedattribute.product
group by product.id;
The problem with this query is that whenever I add a new attribute, I have to add it to the select list to keep the materialized view up to date.
Is there a way to write an aggregate expression that picks up attributes dynamically, without all these hard-coded attribute names?
You can try my code in SQL Fiddle: http://sqlfiddle.com/#!17/c4150/4
You need to nest the aggregation: first collect all the values for each attribute, then aggregate those into a single JSON object:
select id, name,
       -- products with no attributes produce a NULL key via the left join;
       -- filter it out and fall back to an empty object
       coalesce(jsonb_object_agg(attribute, vals)
                  filter (where attribute is not null), '{}') as attributes
from (
    select p.id, p.name, a.attribute, jsonb_agg(a.val) as vals
    from product p
    left join selectedattribute a on p.id = a.product
    group by p.id, a.attribute
) t
group by id, name;
Updated SQLFiddle: http://sqlfiddle.com/#!17/c4150/5
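To turn this into the materialized view from the question, wrap the query in CREATE MATERIALIZED VIEW (the view name below is illustrative) and refresh it when the underlying data changes:
CREATE MATERIALIZED VIEW product_attributes AS
SELECT id, name,
       coalesce(jsonb_object_agg(attribute, vals)
                  FILTER (WHERE attribute IS NOT NULL), '{}') AS attributes
FROM (
    SELECT p.id, p.name, a.attribute, jsonb_agg(a.val) AS vals
    FROM product p
    LEFT JOIN selectedattribute a ON p.id = a.product
    GROUP BY p.id, a.attribute
) t
GROUP BY id, name;

-- re-run after inserts/updates to product or selectedattribute:
REFRESH MATERIALIZED VIEW product_attributes;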