Accidentally deleted all data from a table, insert dummy data into table using loop - postgresql

I accidentally deleted most of the rows in my Postgres table (the data is not important since it's my test environment, but I need dummy data to be inserted into these tables).
Let us take three tables:
MAIN_TABLE(main_table_id, main_fields)
ADDRESS_TABLE(address_table_id, main_table_id, address_type, other_fielsds)
CHAID_TABLE(chaid_table_id,main_table_id, shipping_address_id, chaild_fields)
I accidentally deleted most of the data from ADDRESS_TABLE.
ADDRESS_TABLE has a foreign key referencing MAIN_TABLE, i.e. main_table_id. For each row in MAIN_TABLE there are two entries in ADDRESS_TABLE: one whose address_type is "billing/default" and another whose address_type is "shipping".
CHAID_TABLE has two foreign keys: one referencing MAIN_TABLE, i.e. main_table_id, and the other referencing ADDRESS_TABLE, i.e. shipping_address_id. This shipping_address_id is the address_table_id of the ADDRESS_TABLE row whose address_type is "shipping" and where ADDRESS_TABLE.main_table_id = CHAID_TABLE.main_table_id.
These are the things I need:
I need to create two dummy address entries for each row in MAIN_TABLE, one of address_type "billing/default" and the other of type "shipping".
I need to insert the address_table_id into CHAID_TABLE where ADDRESS_TABLE.main_table_id = CHAID_TABLE.main_table_id and address_type = 'shipping'.
If the first step is done, I know how to do the second, because it is a simple update query, I guess.
It can be done like this:
UPDATE CHAID_TABLE
SET shipping_address_id = ADDRESS_TABLE.address_table_id
FROM ADDRESS_TABLE
WHERE ADDRESS_TABLE.main_table_id = CHAID_TABLE.main_table_id
AND ADDRESS_TABLE.address_type = 'shipping';
For the first one I can use a loop, i.e. loop through all the entries in MAIN_TABLE and insert two dummy rows for each row. But I don't know how to do this; please help me solve it.

I hope this is the solution you want: create a function that loops through all the rows in MAIN_TABLE and, inside the loop, does the action you want (here, two INSERT statements). One issue with this solution is that you end up with the same data in every address.
CREATE OR REPLACE FUNCTION get_all_MAIN_TABLE() RETURNS SETOF MAIN_TABLE AS
$BODY$
DECLARE
r MAIN_TABLE%rowtype;
BEGIN
FOR r IN
SELECT * FROM MAIN_TABLE
LOOP
-- can do some processing here
INSERT INTO ADDRESS_TABLE (main_table_id, address_type, other_fielsds)
VALUES (r.main_table_id, 'shipping', 'other_fielsds');
INSERT INTO ADDRESS_TABLE (main_table_id, address_type, other_fielsds)
VALUES (r.main_table_id, 'billing/default', 'other_fielsds');
END LOOP;
RETURN;
END;
$BODY$
LANGUAGE plpgsql;
SELECT * FROM get_all_MAIN_TABLE ();
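As an aside, the same dummy rows can usually be generated without a loop by a single set-based INSERT. This is just a sketch, assuming other_fielsds is a single text column that can take a placeholder value:
-- one 'shipping' and one 'billing/default' row per MAIN_TABLE row
INSERT INTO ADDRESS_TABLE (main_table_id, address_type, other_fielsds)
SELECT m.main_table_id, t.address_type, 'other_fielsds'
FROM MAIN_TABLE m
CROSS JOIN (VALUES ('shipping'), ('billing/default')) AS t(address_type);
After that, the UPDATE from the question can fill in shipping_address_id in CHAID_TABLE.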

Import CSV into Postgres: Update & Insert at the same time

So I'm fairly new to PostgreSQL and started working with it by testing out some stuff with pgAdmin 4 on Postgres 9.6.
The problem:
I have a table: test(id, text)
In this table I have 10 rows of data.
Now I want to import a CSV which has 12 rows to update the test table. Some text changed for the first 10 rows AND I want to insert the 2 additional rows from the CSV.
I know that you can truncate all the data from a table and just import everything again from the CSV, but that's not a nice way to do this. I want to update my existing data and insert the new data in one query.
I already found a function which should solve this by using a temporary table. It updates the existing rows correctly, but the 2 additional rows do not get inserted:
CREATE OR REPLACE FUNCTION upsert_test(integer,varchar) RETURNS VOID AS $$
DECLARE
BEGIN
UPDATE test
SET id = tmp_table.id,
text = tmp_table.text
FROM tmp_table
WHERE test.id = tmp_table.id;
IF NOT FOUND THEN
INSERT INTO test(id,text) values
(tmp_table.id,tmp_table.text);
END IF;
END;
$$ Language 'plpgsql';
DO $$ BEGIN
PERFORM upsert_test(id,text) FROM test;
END $$;
So what do I need to change to also get the insert to work?
Assuming you have a primary key or unique constraint on the id column, you can do this with a single statement, no function required:
insert into test (id, text)
select id, text
from tmp_table
on conflict (id)
do update set text = excluded.text;
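For reference, the full import could look roughly like this. The staging table layout and the file path are assumptions, and \copy is a psql meta-command that reads the file on the client side:
-- stage the CSV in a temporary table with the same layout as test
CREATE TEMP TABLE tmp_table (id integer, text varchar);
-- load the file (path and header option are assumptions about your CSV)
\copy tmp_table FROM 'test.csv' WITH (FORMAT csv, HEADER true)
-- upsert into the real table
INSERT INTO test (id, text)
SELECT id, text
FROM tmp_table
ON CONFLICT (id)
DO UPDATE SET text = excluded.text;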

record "new" has no field "cure" postgreSQL

So here's the thing: I have two tables, apointments (with a single p) and medical_folder, and I get this error:
ERROR: record "new" has no field "cure"
CONTEXT: SQL statement "insert into medical_folder(id,"patient_AMKA",cure,drug_id)
values(new.id,new."patient_AMKA",new.cure,new.drug_id)"
PL/pgSQL function new_medical() line 3 at SQL statement
create trigger example_trigger after insert on apointments
for each row execute procedure new_medical();
create or replace function new_medical()
returns trigger as $$
begin
if apointments.diagnosis is not null then
insert into medical_folder(id,"patient_AMKA",cure,drug_id)
values(new.id,new."patient_AMKA",new.cure,new.drug_id);
return new;
end if;
end;
$$ language plpgsql;
insert into apointments(id,time,"patient_AMKA","doctor_AMKA",diagnosis)
values('30000','2017-05-24 07:42:15','4017954515276','6304745877947815701','M3504');
I have checked multiple times and all of my tables and columns exist.
Please help! Thank you!
Table structures are:
create table medical_folder (
id bigInt,
patient bigInt,
cure text,
drug_id bigInt);
create table apointments (
id bigint,
time timestamp without time zone,
"patient_AMKA" bigInt,
"doctor_AMKA" bigInt);
I was facing the same issue.
Change:
values(new.id,new."patient_AMKA",new.cure,new.drug_id);
to:
values(new.id,new."patient_AMKA",new."cure",new."drug_id");
This error means the table apointments (with 1 p) doesn't have a field named cure. The trigger fires when inserting an apointment, so "new" is an apointments row. Maybe cure is part of the diagnosis object?
The values for the second table are not available in the "new" row. You need a way to get and insert them, and using a trigger is not the easiest/cleanest way to go.
You can have your application do the two inserts, one per table, and wrap them in a transaction to ensure they are both committed or rolled back together. Another option, which lets you better enforce data integrity, is to create a stored procedure that takes the values to be inserted in both tables and does the two inserts. You can go as far as forbidding users from writing to the tables directly, effectively making the stored procedure the only way to insert the data.
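A minimal sketch of that stored-procedure approach. The function name and parameter list are made up for illustration, and the column lists mix the question's INSERT statement with the table definitions shown above (which disagree on "patient_AMKA" vs patient and on diagnosis), so adjust them to the real schema:
CREATE OR REPLACE FUNCTION add_apointment(
  p_id bigint,
  p_time timestamp without time zone,
  p_patient_amka bigint,
  p_doctor_amka bigint,
  p_diagnosis text,
  p_cure text,
  p_drug_id bigint)
RETURNS void AS $$
BEGIN
  -- both inserts run in the caller's transaction, so they succeed or fail together
  INSERT INTO apointments (id, time, "patient_AMKA", "doctor_AMKA", diagnosis)
  VALUES (p_id, p_time, p_patient_amka, p_doctor_amka, p_diagnosis);
  IF p_diagnosis IS NOT NULL THEN
    INSERT INTO medical_folder (id, patient, cure, drug_id)
    VALUES (p_id, p_patient_amka, p_cure, p_drug_id);
  END IF;
END;
$$ LANGUAGE plpgsql;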

INSERT a number in a column based on other columns' old INSERTs

In PostgreSQL I have this table... (there is a primary key on the left-most column, "timestamp02", which is not shown in the image; please don't bother about it, it's not important for the purpose of this question)
In the table above, all columns are filled via queries, except "time_index", which I would like to be filled automatically via a trigger each time a row is inserted.
This is the code to create the same table (without any values) so anyone can create it using the PostgreSQL query panel.
CREATE TABLE table_ebscb_spa_log02
(
pcnum smallint,
timestamp02 timestamp with time zone NOT NULL DEFAULT now(),
fn_name character varying,
"time" time without time zone,
time_elapse character varying,
time_type character varying,
time_index real,
CONSTRAINT table_ebscb_spa_log02_pkey PRIMARY KEY (timestamp02)
)
WITH (
OIDS=FALSE
);
ALTER TABLE table_ebscb_spa_log02
OWNER TO postgres;
What I would like the trigger to do is:
INSERT a number in the "time_index" column based on the INSERTed values of the "fn_name" and "time_type" columns in each row.
If both ("fn_name" and "time_type") do a combination (eg. Check Mails - Start) that doesn't exist in any row before (above), then INSERT 1 in the "time_index" column,
Elif both ("fn_name" and "time_type") do a combination that does exist in some row before (above), then INSERT the number following the one before(above) in the "time_index" column.
(pls look at the example table image, this trigger will produce every red highlighted square on it)
I have watched many PostgreSQL tutorial videos and read many manuals, including these:
http://www.postgresql.org/docs/9.4/static/sql-createtrigger.html
http://www.postgresql.org/docs/9.4/static/plpgsql-trigger.html
without any result.
So far I have tried this to create the function:
CREATE OR REPLACE FUNCTION on_ai_myTable() RETURNS TRIGGER AS $$
DECLARE
t_ix real;
n int;
BEGIN
IF NEW.time_type = 'Start' THEN
SELECT t.time_index FROM table_ebscb_spa_log02 t WHERE t.fn_name = NEW.fn_name AND t.time_type = 'Start' ORDER BY t.timestamp02 DESC LIMIT 1 INTO t_ix;
GET DIAGNOSTICS n = ROW_COUNT;
IF (n = 0) THEN
t_ix = 1;
ELSE
t_ix = t_ix + 1;
END IF;
END IF;
NEW.time_index = t_ix;
return NEW;
END
$$
LANGUAGE plpgsql;
And this to create the trigger:
CREATE TRIGGER on_ai_myTable
AFTER INSERT ON table_ebscb_spa_log02
FOR EACH ROW
EXECUTE PROCEDURE on_ai_myTable();
Then when I manually insert values into the table, nothing changes (no error message); the time_index column just remains empty. What am I doing wrong?
Could some fellow PostgreSQL programmer please give me a hand? I have really hit a dead end on this task and don't have any more ideas.
Thanks in advance
In an AFTER INSERT trigger, any changes you make to NEW.time_index will be ignored. The record is already inserted at this point; it's too late to modify it.
Create the trigger as BEFORE INSERT instead.
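A sketch of the corrected definition, reusing the function from the question unchanged:
-- drop the old AFTER trigger and recreate it as BEFORE INSERT
DROP TRIGGER IF EXISTS on_ai_myTable ON table_ebscb_spa_log02;
CREATE TRIGGER on_ai_myTable
BEFORE INSERT ON table_ebscb_spa_log02
FOR EACH ROW
EXECUTE PROCEDURE on_ai_myTable();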

How can I get my table to populate using a trigger function?

I have a function:
CREATE OR REPLACE FUNCTION delete_student()
RETURNS TRIGGER AS
$BODY$
BEGIN
IF (TG_OP = 'DELETE')
THEN
INSERT INTO cancel(eno, excode,sno,cdate,cuser)
VALUES ((SELECT entry.eno FROM entry
JOIN student ON (entry.sno = student.sno)
WHERE entry.sno = OLD.sno),(SELECT entry.excode FROM entry
JOIN student ON (entry.sno = student.sno)
WHERE entry.sno = OLD.sno),
OLD.sno,current_timestamp,current_user);
END IF;
RETURN OLD;
END; $BODY$ LANGUAGE plpgsql;
and I also have the trigger:
CREATE TRIGGER delete_student
BEFORE DELETE
on student
FOR EACH ROW
EXECUTE PROCEDURE delete_student();
The idea is that when I delete a student from the student relation, the corresponding entry in the entry relation is also deleted and my cancel relation is updated.
This is what I put into my student relation:
INSERT INTO
student(sno, sname, semail) VALUES (1, 'a. adedeji', 'ayooladedeji#live.com');
and this is what I put into my entry relation:
INSERT INTO
entry(excode, sno, egrade) VALUES (1, 1, 98.56);
When I execute the command
DELETE FROM student WHERE sno = 1;
it deletes the student and also the corresponding entry, and the query returns with no errors. However, when I run a SELECT on my cancel table, the table shows up empty.
You do not show how the corresponding entry is deleted. If the entry is deleted before the student record, that causes the problem, because the INSERT in the trigger will then fail: the SELECT statements will not provide any values to insert. Is the corresponding entry deleted through a cascading delete on student?
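For illustration, that kind of cascade would come from a foreign key on entry declared roughly like this (hypothetical, since the entry DDL isn't shown; it also assumes sno is a primary key or unique in student):
-- if entry references student with ON DELETE CASCADE, its rows vanish together with the student
ALTER TABLE entry
ADD CONSTRAINT entry_sno_fkey
FOREIGN KEY (sno) REFERENCES student (sno)
ON DELETE CASCADE;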
Also, your trigger can be much simpler:
CREATE OR REPLACE FUNCTION delete_student() RETURNS trigger AS $BODY$
BEGIN
INSERT INTO cancel(eno, excode, sno, cdate, cuser)
SELECT eno, excode, sno, current_timestamp, current_user
FROM entry
WHERE sno = OLD.sno;
RETURN OLD;
END; $BODY$ LANGUAGE plpgsql;
First of all, the function only fires on a DELETE trigger, so you do not have to test for TG_OP. Secondly, in the INSERT statement you never access any data from the student relation so there is no need to JOIN to that relation; the sno does come from the student relation, but through the OLD implicit parameter.
You didn't post your DB schema and it's not very clear what your problem is, but it looks like a cascade delete is interfering somewhere. Specifically:
Before deleting the student, you insert something into cancel that references it.
Postgres proceeds to delete the row in student.
Postgres proceeds to honor all applicable cascade deletes.
Postgres deletes rows in entry and ... cancel (including the one you just inserted).
A few remarks:
Firstly, and as a rule of thumb, before triggers should never have side effects on anything but the row itself. Inserting a row in a before delete trigger is a big no-no: besides introducing potential problems such as Postgres reporting an incorrect FOUND value or incorrect row counts upon completing the query, consider the case where a separate before trigger cancels the delete altogether by returning NULL. As such, your trigger function should be running on an after trigger -- only at that point can you be sure that the row is indeed deleted.
Secondly, you don't need these inefficient, redundant, and ugly-as-sin sub-select statements. Use the insert ... select ... variety of inserts instead:
INSERT INTO cancel(eno, excode,sno,cdate,cuser)
SELECT entry.eno, entry.excode, OLD.sno, current_timestamp, current_user
FROM entry
WHERE entry.sno = OLD.sno;
Thirdly, your trigger should probably be running on the entry table, like so:
INSERT INTO cancel(eno, excode,sno,cdate,cuser)
SELECT OLD.eno, OLD.excode, OLD.sno, current_timestamp, current_user;
Lastly, there might be a few problems in your schema. If there is a unique row in entry for each row in student, and you need information in entry to make your trigger work in order to fill in cancel, it probably means the two tables (student and entry) ought to be merged. Whether you merge them or not, you might also need to remove (or manually manage) some cascade deletes where applicable, in order to enforce the business logic in the order you need it to run.
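Putting the second and third remarks together, a sketch of how this could look with the trigger moved to the entry table. The function name is made up, and it assumes cancel's eno, excode and sno columns match entry's as shown above:
CREATE OR REPLACE FUNCTION log_cancelled_entry() RETURNS trigger AS $BODY$
BEGIN
-- OLD already carries everything cancel needs, so no extra lookup is required
INSERT INTO cancel(eno, excode, sno, cdate, cuser)
VALUES (OLD.eno, OLD.excode, OLD.sno, current_timestamp, current_user);
RETURN OLD;
END; $BODY$ LANGUAGE plpgsql;
CREATE TRIGGER log_cancelled_entry
AFTER DELETE ON entry
FOR EACH ROW
EXECUTE PROCEDURE log_cancelled_entry();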

PostgreSQL function not working as expected with INSERT INTO

I have a function to insert data from one table to another:
$BODY$
BEGIN
INSERT INTO backups.calls2 (uid,queue_id,connected,callerid2)
SELECT distinct (c.uid) ,c.queue_id,c.connected,c.callerid2
FROM public.calls c
WHERE c.connected is not null;
RETURN;
EXCEPTION WHEN unique_violation THEN NULL;
END;
$BODY$
And the structure of the table:
CREATE TABLE backups.nc_calls_id
(
uid character(30) NOT NULL,
queue_id integer,
callerid2 text,
connected timestamp without time zone,
id serial NOT NULL,
CONSTRAINT calls2_pkey PRIMARY KEY (uid)
)
WITH (
OIDS=FALSE
);
When I first executed this query, everything went OK: 200,000 rows were inserted into the new table with unique ids.
But now, when I execute it again, no rows are inserted.
From the rather minimalist description given (no PostgreSQL version, no CREATE FUNCTION statement showing params etc, no other table structure, no function invocation) I'm guessing that you're attempting to do a merge, where you insert a row only if it doesn't exist by skipping rows if they already exist.
What the above function will do is skip all rows if any row already exists.
You need to either use a loop to do the insert within individual BEGIN ... EXCEPTION blocks (slow) or LOCK the table and do an INSERT INTO ... SELECT ... FROM newtable WHERE NOT EXISTS (SELECT 1 FROM oldtable where oldtable.key = newtable.key).
The INSERT INTO ... SELECT ... WHERE NOT EXISTS method will perform a lot better but will fail if more than one runs concurrently or if anything else inserts into the destination table at the same time. LOCKing the destination table before running it will make sure it's safe.
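A sketch of the locked, single-pass variant described above, using the table and column names from the question (uid is assumed to be the key being protected):
BEGIN;
-- block concurrent writers so the NOT EXISTS check stays valid until commit
LOCK TABLE backups.calls2 IN EXCLUSIVE MODE;
INSERT INTO backups.calls2 (uid, queue_id, connected, callerid2)
SELECT DISTINCT c.uid, c.queue_id, c.connected, c.callerid2
FROM public.calls c
WHERE c.connected IS NOT NULL
AND NOT EXISTS (SELECT 1 FROM backups.calls2 b WHERE b.uid = c.uid);
COMMIT;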
The PL/PgSQL looping BEGIN ... EXCEPTION method sounds nice and safe at first glance. Then you think about what happens when you run two of them at once. One will insert some keys first, one will insert other keys first, so they have a split of the values between them. That's OK, together they make up the full set. But what if only one of them commits and the other fails for some reason? You'll have an interesting sparsely inserted result. For that reason it's probably best to lock the destination table if using this approach too ... in which case you might as well use the vastly more efficient single pass INSERT with subquery-based uniqueness violation check.