Inner join within update statement in PostgreSQL

I have a table called tem_table which consists of the following rows:
cola colb result
----------------
p4 s1 0
p8 s1 0
p9 s1 0
p5 f1 0
p8 f1 0
Now I need to update the result column with the count(*) of colb, for which I am trying the following query:
update tem_table
set result = x.result
from tem_table tt
inner join (
    select colb, count(*) as result
    from tem_table
    group by colb
) x on x.colb = tt.colb;
Then, selecting distinct colb and result from tem_table:
select distinct colb,result from tem_table;
I get this output:
colb result
-----------
s1 3
f1 3
But the expected output is:
colb result
-----------
s1 3
f1 2
I am not able to see where I am going wrong in my query. Please help me. Thanks.

You should not repeat the table to be updated in the FROM clause. This creates a Cartesian self-join.
Quote from the manual:
Note that the target table must not appear in the from_list, unless you intend a self-join (in which case it must appear with an alias in the from_list)
(Emphasis mine)
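To see what goes wrong, the implicit join in the original statement can be spelled out as a plain SELECT (a sketch; in the actual UPDATE, one arbitrary matching row is picked per target row):

-- Nothing correlates the update target's colb with tt.colb, so every
-- target row pairs with every (tt, x) row and result may come from
-- either group. That is why both groups ended up with 3.
select upd.cola, upd.colb, x.result
from tem_table upd           -- the update target
cross join tem_table tt     -- the repeated table from the FROM list
join (
    select colb, count(*) as result
    from tem_table
    group by colb
) x on x.colb = tt.colb;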
Unfortunately you cannot attach the target table of an UPDATE to an explicit JOIN; the condition linking it to the derived table belongs in the WHERE clause instead. Something like this should work:
update tem_table
set result = x.result
from (
    select colb, count(*) as result
    from tem_table
    group by colb
) x
where x.colb = tem_table.colb;
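Re-running the distinct select after this update should now give the expected counts:

select distinct colb, result from tem_table;

colb | result
-----+-------
s1   |      3
f1   |      2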

Related

Get the ID of a table and its modulo with respect to the total rows in the same table in Postgres

While trying to map some data to a table, I wanted to obtain the ID of a table and its modulo with respect to the total number of rows in the same table. For example, given this table:
id
--
1
3
10
12
I would like this result:
id | mod
---+----
 1 |  1    <- 1 mod 4
 3 |  3    <- 3 mod 4
10 |  2    <- 10 mod 4
12 |  0    <- 12 mod 4
Is there an easy way to achieve this dynamically (as in, not counting the rows beforehand or doing it in an atomic way)?
So far I've tried something like this:
SELECT t1.id, t1.id % COUNT(t1.id) mod FROM tbl t1, tbl t2 GROUP BY t1.id;
This works, but you must have the GROUP BY and tbl t2; otherwise it returns 0 for the mod column. That makes sense, because I think it works by multiplying the table by itself so that each ID gets a full set of the table. I guess for small enough tables this is OK, but I can see how this becomes problematic for larger tables.
Edit: Found another hack-ish way:
WITH total AS (
    SELECT COUNT(*) cnt FROM tbl
)
SELECT t1.id, t1.id % t2.cnt mod FROM tbl t1, total t2
It is similar to the previous query, but it "collapses" the multiplication into a single row holding the precomputed count.
You can use COUNT() as a window function:
SELECT id,
       id % COUNT(*) OVER () mod
FROM tbl;
I'm sure that the optimizer is smart enough to calculate the result of the window function only once.
See the demo.
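As a quick self-contained check (a sketch using the question's sample data; tbl is the table name assumed throughout):

create table tbl (id integer);
insert into tbl values (1), (3), (10), (12);

-- COUNT(*) OVER () computes the total row count once per result set,
-- so no self-join or separate count is needed.
select id,
       id % count(*) over () as mod
from tbl;

id | mod
---+----
 1 |  1
 3 |  3
10 |  2
12 |  0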

Express Nearest Neighbor Join in PostgreSQL?

I have two tables Q and T, both containing a column of float numbers.
What I want to do is, for each number in Q, I want to find a number in T that has the smallest distance to it.
For example, for T={1,7,9} and Q={2,6,10}, I want to return Q,T pairs as {(2,1),(6,7),(10,9)}.
How should I express this query with SQL?
In addition, is it possible to accelerate this join with an index, e.g. by adding an operator class that binds "FOR ORDER BY <->" to a fabs calculation?
create table t (val_t integer);
create table q (val_q integer);
insert into t values (1),(7),(9);
insert into q values (2),(6),(10);
Start with a query that cross joins the two tables and adds a rank based on the difference:
SELECT val_q, val_t,
       rank() OVER (PARTITION BY val_q ORDER BY abs(val_t - val_q))
FROM t
JOIN q ON true;
Use this query in a cte or subquery and filter by rank:
WITH src AS (
    SELECT val_q, val_t,
           rank() OVER (PARTITION BY val_q ORDER BY abs(val_t - val_q))
    FROM t
    JOIN q ON true
)
SELECT val_q, val_t FROM src
WHERE rank = 1;
 val_q | val_t
-------+-------
     2 |     1
     6 |     7
    10 |     9
See https://www.postgresql.org/docs/12/tutorial-window.html
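One caveat, not part of the original answer: rank() produces ties, so a val_q exactly halfway between two val_t values would come back twice with rank = 1. If exactly one pair per val_q is required, row_number() can be substituted (a small variation on the query above):

WITH src AS (
    SELECT val_q, val_t,
           -- row_number() breaks ties arbitrarily, guaranteeing one row per val_q
           row_number() OVER (PARTITION BY val_q ORDER BY abs(val_t - val_q)) AS rn
    FROM t
    JOIN q ON true
)
SELECT val_q, val_t FROM src
WHERE rn = 1;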
Given this schema:
create table t (tn float);
insert into t values (1), (7), (9);
create table q (qn float);
insert into q values (2), (6), (10);
DISTINCT ON is the most straightforward way:
select distinct on (qn) qn, tn
from q
cross join t
order by qn, abs(qn - tn);
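With the sample data this returns the desired pairs:

 qn | tn
----+----
  2 |  1
  6 |  7
 10 |  9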
Exploiting a numeric range may perform better depending on your data sizes. If performance is an issue, then you can create an actual temp table for the range_tn CTE and put a GiST index on it:
with all_tn as (
    -- all values of t, plus a null to open the first (unbounded) range
    select tn
    from t
    union select null
), range_tn as (
    -- build closed ranges between consecutive values of t
    select numrange(tn::numeric, (lead(tn) over w)::numeric, '[]') as tr
    from all_tn
    window w as (order by tn nulls first)
)
select qn,
       -- pick whichever endpoint of the enclosing range is closer to qn
       case
           when lower_inf(tr) then upper(tr)
           when upper_inf(tr) then lower(tr)
           when 2 * qn - lower(tr) - upper(tr) > 0 then upper(tr)
           else lower(tr)
       end as tn
from q
join range_tn
  on qn::numeric <@ tr;
Fiddle here
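The temp-table variant mentioned above might look like this (a sketch; the table and index names are mine):

-- Materialize the ranges once, then index them so the <@ containment
-- join can use the index instead of scanning every range.
create temporary table range_tn_tbl as
with all_tn as (
    select tn from t
    union select null
)
select numrange(tn::numeric, (lead(tn) over w)::numeric, '[]') as tr
from all_tn
window w as (order by tn nulls first);

-- Range types support GiST indexing out of the box.
create index range_tn_gist on range_tn_tbl using gist (tr);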

How to join two tables without repetition of the cells from the second table in PostgreSQL using PL/SQL

When I try to join the two tables below, I am not able to get the output I want from the join.
I tried using a join but it didn't work; let me know if it's possible with PL/SQL.
Table 1:
col1 col2
1 a
1 b
1 c
2 a
2 b
3 a
Table 2:
col1 col2
1 x
1 y
2 x
2 y
3 x
3 y
The output must be:
col1 col2 col3
1 a x
1 b y
1 c
2 a x
2 b y
3 a x
3 y
If I use a plain join I am not able to get the same output as above.
The output I am getting is:
1 a x
1 a y
1 b x
1 b y
1 c x
1 c y
2 a x
.....
.....
3 a x
3 a y
What you are searching for is called a FULL OUTER JOIN. The result of this join contains rows from both input tables; matching records get combined.
You can find more information here: https://stackoverflow.com/questions/4796872/full-outer-join-in-mysql
Using window functions, specifically ROW_NUMBER() partitioned by col1 in both tables, we can get a per-group row number that can be used as part of the join.
In other words, it seems to me that the order the records are in is crucial for the join and result set you are after. Furthermore, using @Benvorth's suggestion of a FULL OUTER JOIN to achieve the NULLs in both directions, I believe this might work:
SELECT
    COALESCE(t1.col1, t2.col1) AS col1,
    t1.col2,
    t2.col2
FROM
    (SELECT col1, col2, ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col2 ASC) AS col1_row_number
     FROM table1) t1
FULL OUTER JOIN
    (SELECT col1, col2, ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col2 ASC) AS col1_row_number
     FROM table2) t2
    ON t1.col1 = t2.col1
   AND t1.col1_row_number = t2.col1_row_number;
That ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col2 ASC) bit creates a row number for each record. The row number restarts at 1 for each new col1 value encountered. You can think of it as a rank for each distinct col1 value based on col2's value. Table1's output from the subquery SELECT col1, col2, ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col2 ASC) as col1_row_number FROM table1 will look like:
Table 1:
col1 col2 col1_row_number
1 a 1
1 b 2
1 c 3
2 a 1
2 b 2
3 a 1
So we do that with both tables, then we use that row number as part of the join along with col1.
A sqlfiddle showing this matching your desired result from the question
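For anyone who wants to reproduce this locally, the sample data can be loaded like so (the table1/table2 names are the ones assumed in the query above):

create table table1 (col1 int, col2 text);
insert into table1 values (1,'a'),(1,'b'),(1,'c'),(2,'a'),(2,'b'),(3,'a');

create table table2 (col1 int, col2 text);
insert into table2 values (1,'x'),(1,'y'),(2,'x'),(2,'y'),(3,'x'),(3,'y');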

Retrieving rows using a join query

I have two tables like this
A                  B
----------         ----------
col1  col2         col1  col2
Table A contains 300k rows and table B contains 400k rows.
I need to count col1 for table A where it matches col1 in table B.
I have written a query like this:
select count(distinct ab.col1) from A ab join B bc on (ab.col1 = bc.col1)
but this takes too much time
You could try a group by...
Also ensure that col1 is indexed in both tables:
SELECT COUNT(col1)
FROM
(
    SELECT aa.col1
    FROM A aa JOIN B bb ON aa.col1 = bb.col1
    GROUP BY aa.col1
) sub;  -- PostgreSQL requires an alias for a subquery in FROM
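If the indexes suggested above are missing, creating them might look like this (the index names are mine):

-- Plain b-tree indexes on the join keys give the planner the option of
-- index scans or a merge join instead of two full sequential scans.
create index a_col1_idx on A (col1);
create index b_col1_idx on B (col1);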
It's difficult to answer without you posting more details: did you analyze the tables? Do you have an index on col1 in each table? How many rows are you counting?
That being said, there aren't many potential query plans for your query. You likely have two seq scans that are hash joined together, which is about the best you can do... If you have a material number of rows, you'll be counting a gazillion rows, and that takes time.
Perhaps you could rewrite the query differently? If every B.col1 is in A.col1, you could get the same result without the join:
select count(distinct col1) from B
If A has low cardinality, it might be faster to rely on exists():
with vals as (
    select distinct A.col1 as val from A
)
select count(*) from vals
where exists (select 1 from B where B.col1 = vals.val)
Or, if you know every possible value from A.col1 and it's reasonably small, you could unnest an array without querying A at all:
select count(*) from unnest(Array[val1, val2, ...]) as vals (val)
where exists(select 1 from B where B.col1 = vals.val)
Or vice versa for each of the above, if B holds the reference values.

PostgreSQL running sum of previous groups?

Given the following data:
sequence | amount
---------+--------
       1 | 100000
       1 |  20000
       2 |  10000
       2 |  10000
I'd like to write a sql query that gives me the sum of the current sequence, plus the sum of the previous sequence. Like so:
sequence | current | previous
---------+---------+---------
       1 |  120000 |        0
       2 |   20000 |   120000
I know the solution likely involves windowing functions but I'm not too sure how to implement it without subqueries.
SQL Fiddle
select
    seq,
    amount,
    lag(amount::int, 1, 0) over (order by seq) as previous
from (
    select seq, sum(amount) as amount
    from sa
    group by seq
) s
order by seq;
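Against the sample data, the inner query groups the amounts to 120000 and 20000, and lag() shifts them down one row:

seq | amount | previous
----+--------+---------
  1 | 120000 |        0
  2 |  20000 |   120000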
If your sequence is sequential, without holes, you can simply do:
SELECT t1.sequence,
       SUM(t1.amount),
       (SELECT SUM(t2.amount) FROM mytable t2 WHERE t2.sequence = t1.sequence - 1)
FROM mytable t1
GROUP BY t1.sequence
ORDER BY t1.sequence;
Otherwise, instead of t2.sequence = t1.sequence - 1 you could do:
SELECT t1.sequence,
       SUM(t1.amount),
       (SELECT SUM(t2.amount)
        FROM mytable t2
        WHERE t2.sequence = (SELECT MAX(t3.sequence)
                             FROM mytable t3
                             WHERE t3.sequence < t1.sequence))
FROM mytable t1
GROUP BY t1.sequence
ORDER BY t1.sequence;
You can see both approaches in this fiddle
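Since the question asks how to do this without subqueries: a window function can be layered directly on top of the grouped result, because window functions are evaluated after GROUP BY. A sketch reusing the mytable names from the answer above (coalesce supplies the 0 for the first group):

select sequence,
       sum(amount) as current,
       -- lag() over the grouped sums fetches the previous group's total
       coalesce(lag(sum(amount)) over (order by sequence), 0) as previous
from mytable
group by sequence
order by sequence;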