Postgres - fill in missing data in new table - postgresql

Given two tables, A and B:
A        B
-----    -----
id       id
high     high
low      low
bId
I want to find rows in table A where bId is null, create an entry in B based on the data in A, and update the row in A to reference the newly created row. I can create the rows, but I'm having trouble updating table A with the reference to the new row:
begin transaction;

with rows as (
    insert into B (high, low)
    select high, low
    from A a
    where a.bId is null
    returning id as bId, a.id as aId
)
update A
set bId=(select bId from rows where id=rows.aId)
where id=rows.aId;

--commit;
rollback;
However, this fails with a cryptic error: ERROR: missing FROM-clause entry for table a.
Using a Postgres query, how can I achieve this?

Either keep the subquery and drop the where clause:
update "A"
set "bId" = (select "bId" from rows where "A".id = rows."aId");
or join the CTE in with FROM:
update "A"
set "bId" = rows."bId"
from rows
where "A".id = rows."aId";
I don't know if your tables really have those names. As mentioned in the comments, try to avoid upper-case table and field names, and try to avoid reserved keywords.

I found a way to get it to work but I feel like it's not the most efficient.
begin transaction;
do $body$
declare
    newId int4;
    tempB record;
begin
    create temp table TempAB (
        High float8,
        Low float8,
        AID int4
    );

    insert into TempAB (High, Low, AId)
    select high, low, id
    from A
    where bId is null;

    for tempB in (select * from TempAB)
    loop
        insert into B (high, low)
        values (tempB.high, tempB.low)
        returning id into newId;

        update A
        set bId=newId
        where id=tempB.AId;
    end loop;
end $body$;
rollback;
--commit;
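For reference, the whole thing can often be collapsed into a single statement, with no loop and no temp table. This is only a sketch, and it assumes that (high, low) uniquely identifies each A row that still has a null bId, so the freshly inserted B rows can be joined back to their source rows:

with new_b as (
    insert into B (high, low)
    select high, low
    from A
    where bId is null
    returning id, high, low
)
update A
set bId = new_b.id
from new_b
where A.bId is null
  and A.high = new_b.high
  and A.low  = new_b.low;

Being a single statement, it is atomic on its own. If several A rows can share the same (high, low) pair, the join back becomes ambiguous, and the loop-based version above (or an extra source-id column on B) is the safer route.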

Related

SQL if else in proc isn't working properly - beginner friendly

I'm pretty new to SQL and struggling with the last piece of my final project. Logically, I know how to do it (I think); I just can't get the syntax correct.
I have a teaches table and an instructor table. The teaches table tells you which instructor has taught which class, when, and for how many credits. The instructor table is, obviously, a table of IDs and instructor names. Finally, I have a table instructor_taught, which really serves no purpose other than for us to learn; this table holds instructors' IDs and how many total credits they have taught.
My challenge is to create a proc that takes an instructor's ID and adds or updates their total credits taught in the instructor_taught table.
First: check if the ID already exists in the instructor_taught table. If it does not exist, I need to add it to the table. If it does exist, I need to update the entry and add the additional 3 credits. So if instructor 101 is already in the table with 3 total credits, I need to update the 3 to 6 instead of creating a new row.
I was able to create a proc that adds the instructor to the instructor_taught table, but the update part is failing, so if I enter ID 101 ten times, it will add that instructor ten times instead of updating.
Here is my code:
CREATE OR REPLACE PROCEDURE Day_21_monthlyPayment (IN id VARCHAR(30), INOUT d_count INTEGER DEFAULT 0)
LANGUAGE plpgsql
AS
$$
BEGIN
    BEGIN
        SELECT teaches.id, instructor.name, COUNT(*) AS d_count
        FROM teaches NATURAL JOIN instructor
        WHERE teaches.id = Day_21_monthlyPayment.id
        GROUP BY teaches.id, instructor.name;
    END
    IF EXISTS(SELECT teaches.id FROM teaches)
        THEN UPDATE instructor_course_nums SET tot_courses = d_count
             WHERE teaches.id = Day_21_monthlyPayment.id
        ELSE INSERT INTO instructor_course_nums (id, name, tot_courses)
    END IF;
END;
$$
I"m pretty sure this is a simple if else statement but i can't get the syntax right. Thanks in advance!
DDL:
CREATE TABLE instructor(
    ID      VARCHAR(12),
    name    VARCHAR(30),
    section VARCHAR(30),
    salary  NUMERIC(20)
);
CREATE TABLE teaches(
    ID     VARCHAR(12),
    course VARCHAR(30),
    count  VARCHAR(30),
    term   VARCHAR(20),
    year   NUMERIC(20)
);
CREATE TABLE instructor_course_nums(
    ID          VARCHAR(12),
    name        VARCHAR(30),
    tot_courses NUMERIC(2)
);
Even if you are forced to write a procedure, I think you can still do this in (nearly) one fell swoop with a single DML statement. Assuming you have the proper constraints (a primary key or unique constraint on the id column of instructor_course_nums), I would think something like this would work:
CREATE OR REPLACE PROCEDURE Day_21_monthlyPayment (idx VARCHAR(30), INOUT d_count INTEGER DEFAULT 0)
LANGUAGE plpgsql
AS
$BODY$
BEGIN
    SELECT COUNT(*)
    INTO d_count
    FROM teaches t JOIN instructor i ON t.id = i.id
    WHERE t.id = idx;

    INSERT INTO instructor_course_nums
    SELECT t.id, t.name, d_count
    FROM instructor t
    WHERE t.id = idx
    ON CONFLICT (id) DO UPDATE
        SET tot_courses = d_count;
END;
$BODY$;
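Note that ON CONFLICT (id) only resolves against a unique constraint or index on that column, and the DDL above does not declare one, so it has to be added first. A minimal sketch, with a hypothetical example call:

ALTER TABLE instructor_course_nums ADD PRIMARY KEY (ID);

-- '101' is just an example instructor ID; the INOUT d_count comes back as a result row
CALL Day_21_monthlyPayment('101', 0);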

How to reflect updates in a loop?

I have a migration run. At each iteration I can update several rows. But how can I skip rows that have already been updated? Changes made inside the loop are not visible to the outer query (yes, I know: stable result), but how can I change that? I want that, if a person already has a graph_id, the row is SKIPPED.
DROP TABLE IF EXISTS persons_2_persons, persons, graph;

CREATE TABLE graph (id SERIAL PRIMARY KEY);
CREATE TABLE persons (id SERIAL PRIMARY KEY, graph_id INTEGER REFERENCES graph(id));
CREATE TABLE persons_2_persons("from" INTEGER NOT NULL REFERENCES persons(id), "to" INTEGER NOT NULL REFERENCES persons(id));

INSERT INTO persons (graph_id) VALUES (NULL), (NULL), (NULL);
INSERT INTO persons_2_persons VALUES
    (1,2),(2,1),
    (2,3),(3,2);

-- The table has about 18 million records. 30-40 persons are connected --> goal: give each graph/cluster a graph_id
-- I also tried with CURSORs
DO $x$
DECLARE
    var_record RECORD;
    var_graph_id INTEGER;
BEGIN
    FOR var_record IN SELECT * FROM persons -- reevaluate after each iteration
    LOOP
        IF var_record.graph_id IS NULL THEN -- this should be false on the 2nd and 3rd iterations because we updated the data in the 1st iteration
            INSERT INTO graph DEFAULT VALUES RETURNING id INTO var_graph_id;
            RAISE NOTICE '%', var_graph_id;
            UPDATE persons SET graph_id = var_graph_id WHERE id IN (1,2,3); -- (1,2,3) is found by a RECURSIVE CTE. Normally this is 30-40 persons
        END IF;
    END LOOP;
END;
$x$;
SELECT * FROM persons;
Make the first command in the loop refresh that individual record:
select * into var_record from persons where id = var_record.id;
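Applied to the DO block above, a sketch of how that might look (the WHERE id IN (1,2,3) placeholder still stands in for the recursive CTE):

    FOR var_record IN SELECT * FROM persons
    LOOP
        -- re-read the row so updates made by earlier iterations become visible
        SELECT * INTO var_record FROM persons WHERE id = var_record.id;
        IF var_record.graph_id IS NULL THEN
            INSERT INTO graph DEFAULT VALUES RETURNING id INTO var_graph_id;
            UPDATE persons SET graph_id = var_graph_id WHERE id IN (1,2,3);
        END IF;
    END LOOP;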

Run a stored procedure using select columns as input parameters?

I have a select query that returns a dataset with "n" records in one column. I would like to use this column as the input parameter of a stored procedure. Below is a reduced example of my case.
The query:
SELECT code FROM rawproducts
The dataset:
CODE
1
2
3
The stored procedure:
ALTER PROCEDURE [dbo].[MyInsertSP]
    (@code INT)
AS
BEGIN
    INSERT INTO PRODUCTS (description, price, stock)
    SELECT description, price, stock
    FROM INVENTORY I
    WHERE I.icode = @code
END
I already have the actual query and stored procedure done; I just am not sure how to put them both together.
I would appreciate any assistance here! Thank you!
PS: of course the stored procedure is not as simple as above. I just choose to use a very silly example to keep things small here. :)
Here are two methods for you, one using a loop without a cursor:
DECLARE @code_list TABLE (code INT, row_id INT);
INSERT INTO @code_list (code, row_id)
SELECT code, ROW_NUMBER() OVER (ORDER BY code) AS row_id FROM rawproducts;

DECLARE @count INT;
SELECT @count = COUNT(*) FROM @code_list;

WHILE @count > 0
BEGIN
    DECLARE @code INT;
    SELECT @code = code FROM @code_list WHERE row_id = @count;

    EXEC MyInsertSP @code;

    DELETE FROM @code_list WHERE row_id = @count;
    SELECT @count = COUNT(*) FROM @code_list;
END;
This works by putting the codes into a table variable, and assigning a number from 1..n to each row. Then we loop through them, one at a time, deleting them as they are processed, until there is nothing left in the table variable.
But here's what I would consider a better method:
CREATE TYPE dbo.code_list AS TABLE (code INT);
GO
CREATE PROCEDURE MyInsertSP (
    @code_list dbo.code_list READONLY) -- table-valued parameters must be READONLY
AS
BEGIN
    INSERT INTO PRODUCTS (
        [description],
        price,
        stock)
    SELECT
        i.[description],
        i.price,
        i.stock
    FROM
        INVENTORY i
        INNER JOIN @code_list cl ON cl.code = i.icode;
END;
GO
DECLARE @code_list dbo.code_list;
INSERT INTO @code_list SELECT code FROM rawproducts;
EXEC MyInsertSP @code_list = @code_list;
To get this to work I create a user-defined table type, then use it to pass a list of codes into the stored procedure (note that a table-valued parameter has to be declared READONLY). It means slightly rewriting your stored procedure, but the actual code that does the work is much smaller.
(how to) Run a stored procedure using select columns as input parameters?
What you are looking for is APPLY; APPLY is how you use columns as input parameters. The only thing unclear is how/where the input column is populated. Let's start with sample data:
IF OBJECT_ID('dbo.Products', 'U')  IS NOT NULL DROP TABLE dbo.Products;
IF OBJECT_ID('dbo.Inventory','U')  IS NOT NULL DROP TABLE dbo.Inventory;
IF OBJECT_ID('dbo.Code','U')       IS NOT NULL DROP TABLE dbo.Code;

CREATE TABLE dbo.Products
(
    [description] VARCHAR(1000) NULL,
    price         DECIMAL(10,2) NOT NULL,
    stock         INT           NOT NULL
);

CREATE TABLE dbo.Inventory
(
    icode         INT           NOT NULL,
    [description] VARCHAR(1000) NULL,
    price         DECIMAL(10,2) NOT NULL,
    stock         INT           NOT NULL
);

CREATE TABLE dbo.Code(icode INT NOT NULL);

INSERT dbo.Inventory
VALUES (10,'',20.10,3),(11,'',40.10,3),(11,'',25.23,3),(11,'',55.23,3),(12,'',50.23,3),
       (15,'',33.10,3),(15,'',19.16,5),(18,'',75.00,3),(21,'',88.00,3),(21,'',100.99,3);

CREATE CLUSTERED INDEX uq_inventory ON dbo.Inventory(icode);
The function:
CREATE FUNCTION dbo.fnInventory(@code INT)
RETURNS TABLE AS RETURN
    SELECT i.[description], i.price, i.stock
    FROM dbo.Inventory i
    WHERE i.icode = @code;
USE:
DECLARE @code TABLE (icode INT);
INSERT @code VALUES (10),(11);

SELECT f.[description], f.price, f.stock
FROM @code AS c
CROSS APPLY dbo.fnInventory(c.icode) AS f;
Results:
description    price    stock
-------------- -------- -----------
               20.10    3
               40.10    3
Updated Proc (note my comments):
ALTER PROC dbo.MyInsertSP -- (1) Lose the input param
AS
    -- (2) Code that populates the "code" table
    INSERT dbo.Code VALUES (10),(11);

    -- (3) Use CROSS APPLY to pass the values from dbo.Code to your function
    INSERT dbo.Products ([description], price, stock)
    SELECT f.[description], f.price, f.stock
    FROM dbo.Code AS c
    CROSS APPLY dbo.fnInventory(c.icode) AS f;
This ^^^ is how it's done.
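For completeness, a hypothetical invocation and sanity check against the sample tables above:

EXEC dbo.MyInsertSP;

-- the rows copied from dbo.Inventory for codes 10 and 11 should now be in dbo.Products
SELECT [description], price, stock FROM dbo.Products;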

Function taking forever to run for large number of records

I have created the following function in Postgres 9.3.5:
CREATE OR REPLACE FUNCTION get_result(val1 text, val2 text)
    RETURNS text AS
$BODY$
DECLARE
    result text;
BEGIN
    select min(id) into result from table
    where id_used is null and id_type = val2;

    update table set
        id_used = 'Y',
        col1 = val1,
        id_used_date = now()
    where id_type = val2
    and id = result;

    RETURN result;
END;
$BODY$
LANGUAGE plpgsql VOLATILE COST 100;
When I run this function in a loop over 1000 or more records, it just freezes and says "query is running". When I check my table, nothing is being updated. When I run it for one or two records it runs fine.
Example of the function when being run:
select get_result('123','idtype');
table columns:
id character varying(200),
col1 character varying(200),
id_used character varying(1),
id_used_date timestamp without time zone,
id_type character(200)
id is the table index.
Can someone help?
Most probably you are running into race conditions. When you run your function 1000 times in quick succession in separate transactions, something like this happens:
T1                          T2                          T3  ...
SELECT max(id)  -- id 1
                            SELECT max(id)  -- id 1
                                                        SELECT max(id)  -- id 1
...
                            Row id 1 locked, wait ...
                                                        Row id 1 locked, wait ...
UPDATE id 1
...
COMMIT
                            Wake up, UPDATE id 1 again!
                            COMMIT
                                                        Wake up, UPDATE id 1 again!
                                                        COMMIT
...
Largely rewritten and simplified as SQL function:
CREATE OR REPLACE FUNCTION get_result(val1 text, val2 text)
  RETURNS text AS
$func$
UPDATE table t
SET    id_used = 'Y'
     , col1 = val1
     , id_used_date = now()
FROM  (
   SELECT id
   FROM   table
   WHERE  id_used IS NULL
   AND    id_type = val2
   ORDER  BY id
   LIMIT  1
   FOR    UPDATE   -- lock to avoid race condition! see below ...
   ) t1
WHERE  t.id_type = val2
-- AND t.id_used IS NULL  -- repeat condition (not if row is locked)
AND    t.id = t1.id
RETURNING id;
$func$ LANGUAGE sql;
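The call from the question works unchanged:

SELECT get_result('123', 'idtype');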
Related question with a lot more explanation:
Atomic UPDATE .. SELECT in Postgres
Explain
Don't run two separate SQL statements. That is more expensive and widens the time frame for race conditions. One UPDATE with a subquery is much better.
You don't need PL/pgSQL for this simple task, but you can still use it; the UPDATE stays the same.
You need to lock the selected row to defend against race conditions. But you cannot do this with the aggregate function you had because, per the documentation:
The locking clauses cannot be used in contexts where returned rows cannot be clearly identified with individual table rows; for example they cannot be used with aggregation.
Emphasis mine. Luckily, you can easily replace min(id) with the equivalent ORDER BY / LIMIT 1 shown above, which can use an index just as well.
If the table is big, you need an index on id at least. Assuming that id is indexed already as PRIMARY KEY, that would help. But this additional partial multicolumn index would probably help a lot more:
CREATE INDEX foo_idx ON table (id_type, id)
WHERE id_used IS NULL;
Alternative solutions
Advisory locks may be the superior approach here (a rough sketch follows below):
Postgres UPDATE ... LIMIT 1
Or you may want to lock many rows at once:
How to mark certain nr of rows in table on concurrent access
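For illustration only, a rough sketch of what the advisory-lock variant could look like, keeping the same function signature. It assumes hashtext() is an acceptable way to map the varchar id to an integer lock key (hash collisions are theoretically possible); the linked answers discuss further caveats:

CREATE OR REPLACE FUNCTION get_result(val1 text, val2 text)
  RETURNS text AS
$func$
UPDATE table t
SET    id_used = 'Y'
     , col1 = val1
     , id_used_date = now()
FROM  (
   SELECT id
   FROM   table
   WHERE  id_used IS NULL
   AND    id_type = val2
   AND    pg_try_advisory_xact_lock(hashtext(id))  -- skip rows locked by concurrent transactions
   ORDER  BY id
   LIMIT  1
   ) t1
WHERE  t.id = t1.id
RETURNING t.id;
$func$ LANGUAGE sql;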

Get columns that differ between 2 rows

I have a table company with 60 columns. The goal is to create a tool to find, compare and eliminate duplicates in this table.
Example: I find 2 companies that are potentially the same, but I need to know which values (columns) differ between these 2 rows in order to continue.
I think it is possible to compare column by column, 60 times, but I'm looking for a simpler and more generic solution.
Something like:
SELECT * FROM company where co_id=22
SHOW DIFFERENCE
SELECT * FROM company where co_id=33
The result should be the column names that differ.
For this you may use an intermediate key/value representation of the rows, with JSON functions or alternatively with the hstore extension (now only of historical interest). JSON comes built-in with every reasonably recent version of PostgreSQL, whereas hstore must be installed in the database with CREATE EXTENSION.
Demo:
CREATE TABLE table1 (id int primary key, t1 text, t2 text, t3 text);
Let's insert two rows that differ by the primary key and one other column (t3).
INSERT INTO table1 VALUES
(1,'foo','bar','baz'),
(2,'foo','bar','biz');
Solution with json
First we get a key/value representation of the rows along with their original row number, then we pair the rows based on their row numbers and filter out the keys whose values are equal:
WITH rowcols AS (
    select rn, key, value
    from (select row_number() over () as rn,
                 row_to_json(table1.*) as r
          from table1) AS s
    cross join lateral json_each_text(s.r)
)
select r1.key
from rowcols r1
join rowcols r2 on (r1.rn = r2.rn - 1 and r1.key = r2.key)
where r1.value <> r2.value;
Sample result:
key
-----
id
t3
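When the two rows of interest are identified by known ids (as with co_id 22 and 33 in the question), they can also be compared directly, without the row_number() pairing. A sketch against the demo table, assuming PostgreSQL 9.4+ for jsonb:

SELECT j1.key
FROM jsonb_each_text(to_jsonb((SELECT t FROM table1 t WHERE id = 1))) AS j1
JOIN jsonb_each_text(to_jsonb((SELECT t FROM table1 t WHERE id = 2))) AS j2 ON j1.key = j2.key
WHERE j1.value IS DISTINCT FROM j2.value;

IS DISTINCT FROM also reports columns where only one of the two values is NULL, and id shows up in the output for the usual reason: the primary key always differs.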
Solution with hstore
SELECT skeys(h1 - h2)
FROM (select hstore(t.*) as h1 from table1 t where id = 1) h1
CROSS JOIN
     (select hstore(t.*) as h2 from table1 t where id = 2) h2;
h1-h2 computes the difference key by key and skeys() outputs the result as a set.
Result:
skeys
-------
id
t3
The select-list might be refined with skeys((h1-h2)-'id'::text) to always remove id which, as the primary key, will obviously always differ between rows.
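Applying that refinement, the query from above becomes:

SELECT skeys((h1 - h2) - 'id'::text)
FROM (select hstore(t.*) as h1 from table1 t where id = 1) h1
CROSS JOIN
     (select hstore(t.*) as h2 from table1 t where id = 2) h2;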
Here's a stored function that should get you most of the way...
While this should work "as is", it has no error checking, which you should add.
It gets all the columns in the table and loops over them. A column counts as a difference when the number of distinct values it holds across the two rows is more than one.
The output is the count of the number of differences, plus a NOTICE for each column where there is a difference.
It might be more useful to return a rowset of the columns with the differences (see the sketch after the function below). Anyway, good luck!
Usage:
SELECT showdifference('public','company','co_id',22,33)
CREATE OR REPLACE FUNCTION showdifference(p_schema text, p_tablename text, p_idcolumn text, p_firstid integer, p_secondid integer)
RETURNS INTEGER AS
$BODY$
DECLARE
    l_diffcount INTEGER;
    l_column text;
    l_dupcount integer;
    column_cursor CURSOR FOR
        select column_name from information_schema.columns
        where table_name = p_tablename and table_schema = p_schema and column_name <> p_idcolumn;
BEGIN
    -- need error checking here, to ensure the table and schema exist and the columns exist
    -- Should also check that the record ids exist.
    -- Should also check that the column type of the id field is integer

    -- Set the number of differences to zero.
    l_diffcount := 0;

    -- use a cursor to iterate over the columns found in information_schema.columns
    -- open the cursor
    OPEN column_cursor;
    LOOP
        FETCH column_cursor INTO l_column;
        EXIT WHEN NOT FOUND;
        -- build a query to see if there is a difference between the columns. If there is, raise a notice
        EXECUTE 'select count(distinct ' || quote_ident(l_column) || ') from '
                || quote_ident(p_schema) || '.' || quote_ident(p_tablename)
                || ' where ' || quote_ident(p_idcolumn) || ' in (' || p_firstid || ',' || p_secondid || ')'
        INTO l_dupcount;
        IF l_dupcount > 1 THEN
            -- increment the counter
            l_diffcount := l_diffcount + 1;
            RAISE NOTICE '% has % differences', l_column, l_dupcount; -- for "real" you might want to return a rowset and could do something here
        END IF;
    END LOOP;
    -- close the cursor
    CLOSE column_cursor;

    RETURN l_diffcount;
END;
$BODY$
LANGUAGE plpgsql VOLATILE STRICT
COST 100;
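Following up on the suggestion to return a rowset instead of notices, a minimal variation might look like this. The function name showdifference_cols and its body are only a sketch (no error checking), using a FOR ... IN query loop instead of an explicit cursor:

CREATE OR REPLACE FUNCTION showdifference_cols(p_schema text, p_tablename text, p_idcolumn text, p_firstid integer, p_secondid integer)
RETURNS SETOF text AS
$BODY$
DECLARE
    l_column text;
    l_dupcount integer;
BEGIN
    FOR l_column IN
        SELECT column_name
        FROM information_schema.columns
        WHERE table_name = p_tablename
          AND table_schema = p_schema
          AND column_name <> p_idcolumn
    LOOP
        EXECUTE 'select count(distinct ' || quote_ident(l_column) || ') from '
                || quote_ident(p_schema) || '.' || quote_ident(p_tablename)
                || ' where ' || quote_ident(p_idcolumn) || ' in (' || p_firstid || ',' || p_secondid || ')'
        INTO l_dupcount;
        IF l_dupcount > 1 THEN
            RETURN NEXT l_column;  -- emit the name of the differing column
        END IF;
    END LOOP;
END;
$BODY$
LANGUAGE plpgsql VOLATILE STRICT;

Usage would then be SELECT * FROM showdifference_cols('public', 'company', 'co_id', 22, 33);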