PostgreSQL: Select several queries into same table - postgresql

Good day,
I have a table with some records that should be deleted. I would like to keep track of the deleted records and place such records in a new table. I would like to do the following:
SELECT * INTO TEMP FROM TABLE WHERE criteria < 1;
And then delete these records with DELETE query. Later on I would like to do a new SELECT query:
SELECT * INTO TEMP FROM TABLE WHERE new_criteria > 2;
And then delete those records as well. I will be working from only one table and just place the selected records into the same, new table (just for reference).
Thanks!

INSERT INTO temp (SELECT * FROM tbl WHERE criteria < 1);

Does your temp table have the same structure as the original? If the temp table does not exist yet, you can create and fill it in one step:
create table temp as select * from tbl where criteria < 1;

If you are using PostgreSQL 9.1 or later, you can also do everything in one command with a data-modifying CTE:
WITH deleted_rows AS (
    DELETE FROM tbl
    WHERE criteria < 1
    RETURNING *
)
INSERT INTO temp
SELECT * FROM deleted_rows;
(see http://www.postgresql.org/docs/9.3/static/queries-with.html#QUERIES-WITH-MODIFYING)
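The same pattern handles the asker's later pass; only the predicate changes (the `new_criteria` column name is taken from the question):

```sql
WITH deleted_rows AS (
    DELETE FROM tbl
    WHERE new_criteria > 2
    RETURNING *
)
INSERT INTO temp
SELECT * FROM deleted_rows;
```

Because the DELETE and the INSERT run as a single statement, no rows can slip through between the "copy" and "delete" steps.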

Related

Insert records from one table to another and then delete inserted records

I am using this query to insert record from one table into another table:
insert into watched_url_queue_work select * from watched_url_queue
on conflict do nothing
The unique constraints on the target table mean not all are inserted.
What I want to now do is delete all of the records that I just inserted but I am not sure of syntax.
I want something like (query not working just my guess at it):
delete from watched_url_queue q
where q.target_domain_record_id in
(
insert into watched_url_queue_work select * from watched_url_queue
on conflict do nothing
returning watched_url_queue_work.target_domain_record_id
)
You can do this with a CTE:
with inserted as (
    insert into watched_url_queue_work
    select * from watched_url_queue
    on conflict do nothing
    returning watched_url_queue_work.target_domain_record_id
)
delete from watched_url_queue q
using inserted
where q.target_domain_record_id = inserted.target_domain_record_id;
(The q.target_domain_record_id in (select … from inserted) approach works as well.)
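For reference, a sketch of that subquery variant (same tables and columns as above):

```sql
with inserted as (
    insert into watched_url_queue_work
    select * from watched_url_queue
    on conflict do nothing
    returning target_domain_record_id
)
delete from watched_url_queue q
where q.target_domain_record_id in (select target_domain_record_id from inserted);
```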

Insert a group of rows to a table in PostgreSQL

I have a table with a lot of rows and I want to create a new table and copy just a bunch of rows (around 30) into it. The table name is account (code, code_activation, email, password) and I'm using PostgreSQL.
You can use the INSERT command:
INSERT INTO AccountCopy (SELECT code,code_activation,email,password FROM account LIMIT 30);
You can add a WHERE clause to select the rows that you want. LIMIT n selects the first n rows of a table; note that without an ORDER BY, which n rows you get is not guaranteed.
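If you want a deterministic 30 rows, add an ORDER BY before the LIMIT. A sketch, where the ordering column is an assumption, not something given in the question:

```sql
INSERT INTO AccountCopy
SELECT code, code_activation, email, password
FROM account
ORDER BY code      -- assumption: take the 30 smallest codes
LIMIT 30;
```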

DB2 - REPLACE INTO SELECT from table

Is there a way in DB2 to replace an entire table with just selected rows from the same table?
Something like: REPLACE INTO tableName SELECT * FROM tableName WHERE col1 = 'a';
(I can export the selected rows, delete the entire table, and load/import again, but I want to avoid those steps and use a single query.)
Original table
col1 col2
a 0 <-- replace all rows and replace with just col1 = 'a'
a 1 <-- col1='a'
b 2
c 3
Desired resultant table
col1 col2
a 0
a 1
Any help appreciated !
Thanks.
This is a duplicate of my answer to your duplicate question:
You can't do this in a single step. The locking required to truncate the table prevents you from querying it at the same time.
Your best option is to declare a global temporary table (DGTT), insert the rows you want into it, truncate the source table, and then insert the rows from the DGTT back into the source table. Something like:
declare global temporary table t1
    as (select * from schema.tableName where ...)
    with no data
    on commit preserve rows
    not logged;
insert into session.t1 select * from schema.tableName where ...;
truncate table schema.tableName immediate;
insert into schema.tableName select * from session.t1;
I know of no way to do what you're asking in one step; you'd have to select out to a temporary table and then copy back.
But I don't understand why you'd need to do this in the first place. Let's assume there were a REPLACE TABLE command...
REPLACE TABLE mytbl WITH (
SELECT * FROM mytbl
WHERE col1 = 'a' AND <...>
)
Why not simply delete the inverse set of rows...
DELETE FROM mytbl
WHERE NOT (col1 = 'a' AND <...>)
Note that the comparisons in the WHERE clause are exactly the same; you just wrap them in NOT ( ) to delete the rows you don't want to keep.

Efficiency help on inserting and deleting rows from a large DB table ~100M rows

I am inserting rows from a large DB table into an archive table and then deleting the inserted rows. My code is as follows:
-- insert here
insert into DEST_DB.dbo.ARCHIVE_TABLE
select SRC_DB.dbo.ORIG_TABLE.*
from SRC_DB.dbo.ORIG_TABLE
where SRC_DB.dbo.ORIG_TABLE.ORDER_ID
IN ( select #tmp_table.order_id from #tmp_table )
-- delete here
delete from SRC_DB.dbo.ORIG_TABLE
where SRC_DB.dbo.ORIG_TABLE.ORDER_ID
IN ( select #tmp_table.order_id from #tmp_table )
#tmp_table currently holds batches of 10K order_id rows; it is filled and cleared in a loop, and each iteration runs the insert and delete above against that batch.
I have a UNIQUE UNCLUSTERED index on the ORDER_ID column of SRC_DB.dbo.ORIG_TABLE.
My problem is that when I run my stored procedure, it just seems to halt while processing this table.
I understand I may not have the most efficient solutions and would like to hear criticism and suggestions on how I can improve my stored procedure.
Thanks
I would try adding a primary key on #tmp_table and dropping the batch size. Joining to the temp table instead of using IN also tends to produce a better plan:
insert into DEST_DB.dbo.ARCHIVE_TABLE
select SRC_DB.dbo.ORIG_TABLE.*
from SRC_DB.dbo.ORIG_TABLE
join #tmp_table on #tmp_table.order_id = SRC_DB.dbo.ORIG_TABLE.order_id
-- ideally ordered by the clustered-index columns of DEST_DB.dbo.ARCHIVE_TABLE

delete o
from SRC_DB.dbo.ORIG_TABLE o
join #tmp_table on #tmp_table.order_id = o.order_id
Is the stored procedure executing exactly the code you provided?
By "it just seems to halt on processing this table", do you mean you never saw the SP finish - it's that slow?
Try a smaller #tmp_table batch - 100 or 1000 rows.
Try changing WHERE clause like this:
-- insert here
insert into DEST_DB.dbo.ARCHIVE_TABLE
select SRC_DB.dbo.ORIG_TABLE.*
from SRC_DB.dbo.ORIG_TABLE
where exists
( select #tmp_table.order_id from #tmp_table where #tmp_table.order_id=SRC_DB.dbo.ORIG_TABLE.ORDER_ID)
-- delete here
delete from SRC_DB.dbo.ORIG_TABLE
where exists
( select #tmp_table.order_id from #tmp_table where SRC_DB.dbo.ORIG_TABLE.ORDER_ID=#tmp_table.order_id)

Select data from 8 tables into one temp table

I need to insert data from several tables, all with the same field names, into one temp table. I know I can use a cursor/loop to do this, but I wanted to know if there is a quicker way, something like:
select from table1, table2, table3 into #temptable.
select * into #temptable from table1
insert into #temptable select * from table2
insert into #temptable select * from table3
The first query creates the temp table on insert, the rest just keep adding data.
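If the temp table doesn't need to be built incrementally, the three statements can also be collapsed into one, assuming the tables really do have identical column lists:

```sql
select * into #temptable from table1
union all
select * from table2
union all
select * from table3;
```

In T-SQL the INTO clause goes on the first SELECT of the UNION; the combined result set is what gets materialized into #temptable.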