How to make a Group By in PostgreSQL with only one field? - postgresql

SELECT table1.field1, table2.field2
FROM table1
LEFT JOIN table2 ON table1.field1 = table2.field1
GROUP BY table1.field1
MySQL: ✅ All right! 😁
PostgreSQL: ❌ You must put all Select fields in the Group By! 😭
How to make a Group By in PostgreSQL with only one field?

If table2.field2 is alphanumeric, use MIN or MAX; if it is numeric, use whichever aggregate function fits your need. Either way, this avoids having to list table2.field2 in the GROUP BY clause.
SELECT table1.field1
, MAX(table2.field2) field2
FROM table1
LEFT JOIN table2
ON table1.field1 = table2.field1
GROUP BY table1.field1
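If you only need one table2.field2 value per table1.field1 rather than a true aggregate, PostgreSQL's DISTINCT ON is another way to avoid the GROUP BY entirely. A minimal sketch, assuming the same tables and columns as the question and keeping the largest field2 per field1:
-- Sketch only: DISTINCT ON keeps one row per table1.field1;
-- the ORDER BY decides which table2.field2 survives (here: the largest).
SELECT DISTINCT ON (table1.field1)
       table1.field1,
       table2.field2
FROM table1
LEFT JOIN table2 ON table1.field1 = table2.field1
ORDER BY table1.field1, table2.field2 DESC;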

Related

Merge Row values from different columns to one column on top of each other: MySQL

I have Table1 with columns:
Fname,Sname
Table 2 with columns:
Fname,Lname
Now, in a query I want to take all the values from these two tables (first names and last names [Sname in table 1 is Lname]) and return them in one column.
Basically I want to create a column giving a list of participants which includes everyone from these two tables.
Is it possible?
Both tables are joined indirectly via a third table.
You can use UNION ALL, which will give all the rows from both tables.
SELECT
Fname,
Sname,
CONCAT(Fname,Sname) AS FSname
FROM table1
UNION ALL
SELECT
Fname,
Lname,
CONCAT(Fname,Lname)
FROM table2;
The column names are taken from the first SELECT.
If you use UNION instead of UNION ALL, rows in table2 that are duplicates of rows in table1 will be omitted, but it can run slower because the values have to be compared.
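If the goal is a single value listing every participant, you can also wrap the UNION ALL in a derived table and aggregate it. A sketch in MySQL (the question's tag); the aliases participant and all_names are made up for the example:
-- Sketch only: collapse all names from both tables into one comma-separated list.
SELECT GROUP_CONCAT(participant SEPARATOR ', ') AS participants
FROM (
    SELECT CONCAT(Fname, ' ', Sname) AS participant FROM table1
    UNION ALL
    SELECT CONCAT(Fname, ' ', Lname) FROM table2
) AS all_names;
In PostgreSQL the equivalent aggregate would be string_agg(participant, ', ').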
Alternatively, you can start from the third table, LEFT JOIN onto both tables, and use COALESCE, which returns the first argument that is not null.
SELECT
COALESCE(t1.Fname,t2.Fname),
COALESCE(t1.Sname,t2.Lname),
CONCAT(
COALESCE(t1.Fname,t2.Fname),
COALESCE(t1.Sname,t2.Lname)
) AS FSname
FROM third_table t3
LEFT JOIN table1 t1
ON t1.id = t3.id
LEFT JOIN table2 t2
ON t2.id = t3.id;

Select multiple non aggregated columns with group by in postgres

I'm writing a query with multiple non-aggregated columns and a GROUP BY clause, but Postgres throws an error saying that I have to add the non-aggregated columns to the GROUP BY or use an aggregate function on them. This is the query that I'm trying to run:
select
tb1.pipeline as pipeline_id,
tb3.pipeline_name as pipeline_name,
tb2."name" as integration_name,
cast(tb1.integration_id as VARCHAR) as integration_id,
tb1.created_at as created_at,
cast(tb1.id as VARCHAR) as batch_id,
sum(tb1.row_select) as row_select,
sum(tb1.row_insert) as row_insert,
from
table1 tb1
join
table2 tb2 on tb1.integration_id = tb2.id
join
table3 tb3 on tb1.pipeline = tb3.id
where
tb1.pipeline is not null
and tb1.is_super_parent = false
group by
tb1.pipeline
I found one solution/hack for this error: I added the MAX function to all the other non-aggregated columns, and this solves my problem:
select
tb1.pipeline as pipeline_id,
max(tb3.pipeline_name) as pipeline_name,
max(tb2."name") as integration_name,
max(cast(tb1.integration_id as VARCHAR)) as integration_id,
max(tb1.created_at) as created_at,
max(cast(tb1.id as VARCHAR)) as batch_id,
sum(tb1.row_select) as row_select,
sum(tb1.row_insert) as row_insert,
from
table1 tb1
join
table2 tb2 on tb1.integration_id = tb2.id
join
table3 tb3 on tb1.pipeline = tb3.id
where
tb1.pipeline is not null
and tb1.is_super_parent = false
group by
tb1.pipeline
But I don't want to add MAX functions when there is no need for them, and applying MAX to all the other columns makes the query expensive. Is there a better approach to solve this issue? Thanks in advance.
Well, the first thing you need is to learn to format your queries so you can see their flow at a glance. Note that the extra comma after row_insert means your query as posted will give a syntax error. With that said, how do you solve your issue?
You cannot avoid the additional aggregates or the expanded GROUP BY as long as they exist in the same query scope. You need to separate the aggregation from the selection of the additional columns. You basically have two choices:
Perform the aggregation in a CTE.
with sums (pipeline_id, row_select, row_insert) as
( select tb1.pipeline
, sum(tb1.row_select) as row_select
, sum(tb1.row_insert) as row_insert
from table1 tb1
where tb1.pipeline is not null
and tb1.is_super_parent = false
group by tb1.pipeline
)
select s.pipeline_id
, tb3.pipeline_name
, tb2."name" integration_name
, s.row_select
, s.row_insert
from sums s
join table2 tb2 on (s.pipeline_id = tb2.id)
join table3 tb3 on (s.pipeline_id = tb3.id);
Perform the aggregation in a sub-query.
select s.pipeline_id
, tb3.pipeline_name
, tb2."name" integration_name
, s.row_select
, s.row_insert
from ( select tb1.pipeline as pipeline_id
, sum(tb1.row_select) as row_select
, sum(tb1.row_insert) as row_insert
from table1 tb1
where tb1.pipeline is not null
and tb1.is_super_parent = false
group by tb1.pipeline
) s
join table2 tb2 on (s.pipeline_id = tb2.id)
join table3 tb3 on (s.pipeline_id = tb3.id);
NOTE: Not tested as no sample data supplied.
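A third option, also untested and only a sketch under the same assumptions, is to stay in one query level and use window aggregates plus DISTINCT instead of GROUP BY. This only works cleanly for columns that have a single value per pipeline:
-- Sketch only: window sums avoid GROUP BY; DISTINCT collapses the duplicate rows.
select distinct
tb1.pipeline as pipeline_id
, tb3.pipeline_name as pipeline_name
, sum(tb1.row_select) over (partition by tb1.pipeline) as row_select
, sum(tb1.row_insert) over (partition by tb1.pipeline) as row_insert
from table1 tb1
join table3 tb3 on tb1.pipeline = tb3.id
where tb1.pipeline is not null
and tb1.is_super_parent = false;
Columns that vary within a pipeline (such as integration_name) would stop DISTINCT from collapsing the rows, which is why the CTE and sub-query forms above are the more general answer.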

Postgres Lateral Unnest - At Least 1 Value?

I have a table with an optional fields column of type jsonb[]. I am using a lateral unnest to break those fields out into rows, then an aggregate to combine them again in the order I want.
SELECT id, name, ARRAY_AGG(v ORDER BY v->'priority' DESC) as fields
FROM results, LATERAL UNNEST(fields) AS f(v)
GROUP BY 1, 2
But because fields is optional, not all rows have values to unnest to begin with. Is there a way to lateral unnest at least one row even if it is empty? Or is there a better way to apply an order to a jsonb[] column on the way out, so I can avoid this lateral unnest altogether?
Use a LEFT JOIN LATERAL:
SELECT
id
, name
, ARRAY_AGG(v ORDER BY v->'priority' DESC) as fields
FROM results
LEFT JOIN LATERAL UNNEST(fields) AS f(v) ON TRUE
GROUP BY 1, 2
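One caveat: with the LEFT JOIN LATERAL, rows whose fields is NULL or empty now contribute a single NULL value of v, so ARRAY_AGG returns {NULL} for them rather than NULL. If that matters, a FILTER clause (PostgreSQL 9.4+) is one way to handle it; this is only a sketch of how that could look:
SELECT
id
, name
, ARRAY_AGG(v ORDER BY v->'priority' DESC)
    FILTER (WHERE v IS NOT NULL) as fields
FROM results
LEFT JOIN LATERAL UNNEST(fields) AS f(v) ON TRUE
GROUP BY 1, 2
With the FILTER, those rows get a NULL fields value instead of an array containing NULL.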

PostgreSQL group by all fields

I have a query like this:
SELECT
table1.*,
sum(table2.amount) as totalamount
FROM table1
join table2 on table1.key = table2.key
GROUP BY table1.*;
I got the error: column "table1.key" must appear in the GROUP BY clause or be used in an aggregate function.
Is there any way to group by "all" fields?
There is no shortcut syntax for grouping by all columns, but it's probably not necessary in the described case. If the key column is a primary key, it's enough to group by it alone:
GROUP BY table1.key;
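There is also a PostgreSQL-specific relaxation worth knowing: since PostgreSQL 9.1, once you group by a table's primary key you may select that table's other columns without aggregating them, because they are functionally dependent on the key. So, assuming key is table1's primary key, the original query only needs its GROUP BY changed:
-- Works in PostgreSQL 9.1+ when table1.key is the primary key:
-- the remaining table1 columns are functionally dependent on it.
SELECT
table1.*,
sum(table2.amount) as totalamount
FROM table1
join table2 on table1.key = table2.key
GROUP BY table1.key;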
You have to specify in GROUP BY all the column names that are selected and are not part of an aggregate function (SUM/COUNT etc.):
select c1,c2,c4,sum(c3) FROM totalamount
group by c1,c2,c4;
A shortcut to avoid writing the columns again in group by would be to specify them as numbers.
select c1,c2,c4,sum(c3) FROM t
group by 1,2,3;
I found another way to solve it; not perfect, but maybe it's useful:
SELECT string_agg(column_name::character varying, ',') as columns
FROM information_schema.columns
WHERE table_schema = 'your_schema'
AND table_name = 'your_table';
Then apply the result of this select to the main query, like this:
$columns = $result[0]["columns"];
SELECT
table1.*,
sum(table2.amount) as totalamount
FROM table1
join table2 on table1.key = table2.key
GROUP BY $columns;

How to join a vertical and a horizontal table together

I have two tables, one of which is vertical, i.e. it stores only key-value pairs with a reference id from table 1. I want to join both tables and display the key-value pairs as columns in the select, and also perform sorting on a few of the keys.
T1 having (id,empid,dpt)
T2 having (empid,key,value)
select
t1.*,
t21.value,
t22.value,
t23.value
from Table1 t1
join Table2 t21 on t1.empid = t21.empid
join Table2 t22 on t1.empid = t22.empid
join Table2 t23 on t1.empid = t23.empid
where
t21.key = 'FNAME'
and t22.key = 'LNAME'
and t23.key = 'AGE'
The query you demonstrate is very inefficient (another join for each additional column) and also has a potential problem: if there isn't a row in T2 for every key in the WHERE clause, the whole row is excluded.
The second problem can be avoided with LEFT [OUTER] JOIN instead of [INNER] JOIN. But don't bother, the solution to the first problem is a completely different query. "Pivot" T2 using crosstab() from the additional module tablefunc:
SELECT * FROM crosstab(
'SELECT empid, key, value FROM t2 ORDER BY 1'
, $$VALUES ('FNAME'), ('LNAME'), ('AGE')$$ -- more?
) AS ct (empid int -- use *actual* data types
, fname text
, lname text
, age text);  -- more?
Then just join to T1:
select *
from t1
JOIN (<insert query from above>) AS t2 USING (empid);
This time you may want to use [INNER] JOIN.
The USING clause conveniently removes the second instance of the empid column.
Detailed instructions:
PostgreSQL Crosstab Query
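If installing the additional module is not an option, a plain conditional aggregation can produce the same pivot. This is only a sketch, assuming the table and key names from the question and PostgreSQL 9.4+ for the FILTER clause:
-- Sketch only: pivot T2's key/value pairs with FILTER instead of crosstab().
SELECT t1.id
     , t1.empid
     , t1.dpt
     , max(t2.value) FILTER (WHERE t2.key = 'FNAME') AS fname
     , max(t2.value) FILTER (WHERE t2.key = 'LNAME') AS lname
     , max(t2.value) FILTER (WHERE t2.key = 'AGE')   AS age
FROM t1
LEFT JOIN t2 USING (empid)
GROUP BY t1.id, t1.empid, t1.dpt
ORDER BY fname;  -- sorting on a pivoted key, as asked
The crosstab() version above stays more compact once there are many keys to pivot.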