I need advice on doing a JOIN with PostgreSQL. I want to take the count of rows for a single id in table a (e.g. the number of times id 1 appears) and place it into a new column in table b.
Table a
id username comment
1 Bob Hi
2 Sally Hello
1 Bob Bye
Table b
id something total_comments
1 null 2
Create a trigger for INSERT, UPDATE, and DELETE on table a that recomputes the count and updates it in table b.
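That answer names the approach but gives no code. A minimal sketch of such a trigger, assuming the table and column names from the question (the function and trigger names are invented for the example, and an UPDATE that changes id would need both the old and new id refreshed):

```sql
CREATE OR REPLACE FUNCTION refresh_total_comments() RETURNS trigger AS $$
DECLARE
    affected_id integer;
BEGIN
    -- OLD is only defined for UPDATE/DELETE, NEW only for INSERT/UPDATE
    IF TG_OP = 'DELETE' THEN
        affected_id := OLD.id;
    ELSE
        affected_id := NEW.id;
    END IF;
    UPDATE table_b
       SET total_comments = (SELECT count(*) FROM table_a WHERE id = affected_id)
     WHERE id = affected_id;
    RETURN NULL;  -- the return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

-- EXECUTE FUNCTION requires PostgreSQL 11+; use EXECUTE PROCEDURE on older versions
CREATE TRIGGER refresh_total_comments
AFTER INSERT OR UPDATE OR DELETE ON table_a
FOR EACH ROW EXECUTE FUNCTION refresh_total_comments();
```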
You could use SELECT INTO if table_b doesn't already exist.
SELECT
id
, NULL AS something
, COUNT(comment) AS total_comments
INTO table_B
FROM table_a
GROUP BY id
Or INSERT INTO if table_b does exist.
INSERT INTO table_b (id, something, total_comments)
SELECT
id
, NULL AS something
, COUNT(comment) AS total_comments
FROM table_a
GROUP BY id
Related
Say I have two tables:
tb1:
id name date
1 John 01/01/2012
1 John 01/02/2012
2 James 02/02/2020
tb2:
id name date
1 John 01/01/2013
1 John 01/01/2012
The uniqueness of both tb1 and tb2 comes from the combination of (id, name,date) columns. Therefore I would like to insert only values from tb2 that are new to tb1. In this case only (1,John,01/01/2013) would be inserted since the other row is already present in tb1.
My try is:
INSERT INTO tb1 (date) SELECT * FROM tb2 ON CONFLICT (id,name,date) DO NOTHING;
You did not tell us what error you get. But just from a syntax check, it will result in the error:
ERROR: INSERT has more expressions than target columns
because you specify one target column but provide three columns from the SELECT.
Assuming you did specify a unique constraint or primary key on the three columns, adding the additional columns in the INSERT statement should work:
INSERT INTO tb1 (id,name,date)
SELECT id,name,date
FROM tb2 ON CONFLICT (id,name,date) DO NOTHING;
SQL is a declarative language, not a procedural one; you must do this with queries, like:
INSERT INTO tb1 (id, name, date)
SELECT id, name, date
FROM tb2
WHERE NOT EXISTS (SELECT 1
                  FROM tb1
                  WHERE ROW (tb1.id, tb1.name, tb1.date) =
                        ROW (tb2.id, tb2.name, tb2.date));
I'm trying to split data from Username column in a table like this:
Table1
ID Username
1 UserA,UserB,UserC
and I want to insert it to another table. the result will be like this:
Table2
ID Username
1 UserA
1 UserB
1 UserC
Is it possible to do this in PostgreSQL?
Thanks in advance
You can split the value and then unnest it:
insert into table2 (id, username)
select t1.id, ut.username
from table1 t1
cross join unnest(string_to_array(t1.username, ',')) as ut(username)
I have two tables:
table_a with fields item_id,rank, and 50 other fields.
table_b with fields item_id, and the same 50 fields as table_a
I need to write a SELECT query that adds the rows of table_b to table_a but with rank set to a specific value, let's say 4.
Currently I have:
SELECT * FROM table_a
UNION
SELECT item_id, 4 rank, field_1, field_2, ...
How can I join the two tables together without writing out all of the fields and without using an INSERT query?
EDIT:
My idea is to join table_b to table_a somehow with the rank field remaining empty, then simply replace the null rank fields. The rank field is never null, but item_id can be duplicated and table_a may have item_id values that are not in table_b, and vice-versa.
I am not sure I understand why you need this, but you can use jsonb functions:
select (jsonb_populate_record(null::table_a, row)).*
from (
select to_jsonb(a) as row
from table_a a
union
select to_jsonb(b) || '{"rank": 4}'
from table_b b
) s
order by item_id;
Working example in rextester.
I'm pretty sure I've got it. The predefined rank column can be slotted into table_b by joining table_b to a subset of itself that contains only the columns to the left of the insertion point:
WITH
_leftcols AS ( SELECT DISTINCT item_id, 4 AS rank FROM table_b ),
_combined AS ( SELECT * FROM _leftcols JOIN table_b USING (item_id) )
SELECT * FROM _combined
UNION
SELECT * FROM table_a
(_leftcols goes first in the join so that SELECT * yields item_id, rank, then the remaining columns, matching table_a's column order; DISTINCT keeps duplicated item_id values from multiplying rows.)
I am using Postgresql and need to query two tables like this:
Table1
ID Bill
A 1
B 2
B 3
C 4
Table2
ID
A
B
I want a table with all the columns in Table1 but keeping only the records with IDs that are available in Table2 (A and B in this case). Also, Table2's ID is unique.
ID Bill
A 1
B 2
B 3
Which join should I use, or can I use a WHERE clause?
Thanks!
SELECT Table1.*
FROM Table1
INNER JOIN Table2 USING (ID);
or
SELECT *
FROM Table1
WHERE ID IN (SELECT ID FROM Table2);
but the first one is generally preferred for performance reasons.
SELECT *
FROM Table1
WHERE EXISTS (
    SELECT 1 FROM Table2 WHERE Table2.ID = Table1.ID
)
I am using postgres 9.X.
I have two tables
Table A
(
id integer
);
Table B
(
id integer,
Value integer
);
Both tables are indexed on id.
Table A can have duplicate IDs.
Example:
Table A
ID
1
1
1
2
1
I intend to insert the number of occurrences of each ID into Table B (this table already contains all the IDs from Table A, with value initially 0).
Table B
ID Value
1 4
2 1
3 0
4 0
Here is my SQL statement
update tableB set value = value + sq.total
from
( select id, count(*) as total from TableA group by id ) as sq
where sq.id = tableB.id;
With 3-10 million rows in TableA, it is taking an awfully long time. Is there a way I can optimize this query?
Do you need tableB to be initially populated? An INSERT...SELECT from tableA into an empty tableB (with no indexes on tableB) should be faster:
insert into tableb (id, value)
select id, count(*)
from tablea
group by id
and then add your indexes to tableB once the data is there.
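Once the load finishes, a sketch of that follow-up step (the index name here is invented for the example):

```sql
-- add the index after the bulk load, then refresh planner statistics
CREATE UNIQUE INDEX tableb_id_idx ON tableb (id);
ANALYZE tableb;
```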