I am trying to run 2 statements - Oracle 10g

truncate table abc;
Insert into table abc;
truncate table xyz;
These are large tables and the truncates take a long time; the insert starts running before the truncate has finished.

Related

Postgresql - How to truncate table and insert rows in same table within a single transaction

I want to truncate a table in PostgreSQL 13 and then insert rows into the same table within a single transaction, so that when I execute the SQL the table is truncated and, if anything fails during the INSERTs, the truncated data is rolled back.
Below are the commands that I want to run in a single transaction.
TRUNCATE TABLE public.truninsdemo;
INSERT INTO public.truninsdemo (id, name)
VALUES (1, 'Scott'), (2, 'John');
UPDATE:
I don't want to execute the TRUNCATE and INSERT statements as two separate commands, separated by a semicolon. I want to achieve this with a single command, maybe a CTE, but I am not sure how.
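For reference, a minimal sketch of the plain-transaction approach (the update above rules this out as two separate commands, but it shows that TRUNCATE is transactional in PostgreSQL and is rolled back if any INSERT fails):
BEGIN;
TRUNCATE TABLE public.truninsdemo;
INSERT INTO public.truninsdemo (id, name)
VALUES (1, 'Scott'), (2, 'John');
-- if an INSERT fails here, ROLLBACK restores the pre-TRUNCATE rows
COMMIT;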

How to remove columns for real in postgresql?

I have a large system whose table schemas are updated quite often. I noticed that after repeatedly dropping and re-creating columns, the error "tables can have at most 1600 columns" shows up, even though only a few columns are listed in information_schema.columns.
I've tried VACUUM FULL ANALYZE, but it does not help. Is there any way to avoid this limitation?
-- add 1,599 columns to an existing test table
DO $$
DECLARE
    tbname varchar(1024);
BEGIN
    FOR i IN 1..1599 LOOP
        tbname := 'alter table vacuum_test add column test' || CAST(i AS varchar(8)) || ' int';
        EXECUTE tbname;
    END LOOP;
END $$;

-- drop one column, vacuum, then try to re-add it
alter table vacuum_test drop column test1;
VACUUM FULL ANALYZE vacuum_test;
alter table vacuum_test add column test1 int;
result:
alter table vacuum_test add column test1 int
> ERROR: tables can have at most 1600 columns
> Time: 0.054s
Unfortunately vacuum full does not remove dropped columns from the table (i.e. entries that have attisdropped = true in pg_attribute). I would have expected that, but apparently this does not happen.
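To see those leftover entries, you can query pg_attribute directly; dropped columns stay there with attisdropped = true (a quick check against the vacuum_test table from the example):
SELECT attname, attisdropped
FROM pg_attribute
WHERE attrelid = 'vacuum_test'::regclass
  AND attnum > 0
ORDER BY attnum;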
The only way to get rid of the hidden columns is to create a brand new table and copy the data to the new table.
Something along these lines:
create table new_table (like old_table including all);
insert into new_table
select *
from old_table;
Then drop the old table and rename the new one to the old name. Constraint and index names will be different, so you might want to rename them as well.
You will have to re-create all foreign keys (incoming and outgoing) manually, as they are not included when using CREATE TABLE (LIKE ...).
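A minimal sketch of the swap step, using the table names from the example above:
begin;
drop table old_table;
alter table new_table rename to old_table;
commit;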
Another option is to use pg_repack which does this transparently in the background without locking the table.

Is there a method to do an ALTER COLUMN in Postgres 12 on a huge table without waiting a lifetime?

I am trying to convert a field from bigint to smallint:
ALTER TABLE huge ALTER COLUMN result_code TYPE SMALLINT;
It takes 28 hours; is there a smarter method?
The table has sequences, keys, and foreign keys.
The table has to be rewritten, and you have to wait.
If you have several columns whose data type you want to change, you can use several ALTER COLUMN clauses in a single ALTER TABLE statement and save time that way.
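A sketch of that batching, assuming a second, purely hypothetical column other_code that also needs shrinking:
ALTER TABLE huge
    ALTER COLUMN result_code TYPE smallint,
    ALTER COLUMN other_code TYPE smallint;   -- other_code is hypothetical
-- both changes share a single table rewrite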
An alternative idea would be to use logical replication: set up an empty copy of the database (pg_dump -s), where your large table is defined with smallint columns. Replicate your database to that database, and switch over as soon as replication has caught up.
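A rough sketch of that route (names and connection details are placeholders; the empty copy already has the table defined with smallint, loaded from pg_dump -s):
-- on the source database
CREATE PUBLICATION shrink_pub FOR ALL TABLES;

-- on the target database
CREATE SUBSCRIPTION shrink_sub
    CONNECTION 'host=old-server dbname=olddb user=repl password=secret'
    PUBLICATION shrink_pub;

-- once the subscription has caught up, switch the application over to the target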

Efficient way to move a large number of rows from one table to another new table using Postgres

I am using a PostgreSQL database for a live project, in which I have one table with 8 columns.
This table contains millions of rows, so to make searches on it faster, I want to move the old entries from this table into another, new table.
To do so, I know one approach:
first select some rows
create a new table
store these rows in that table
then delete them from the main table.
But it takes too much time and it is not efficient.
So what is the best possible approach to do this in a PostgreSQL database?
PostgreSQL version: 9.4.2
Approximate number of rows: 8,000,000
Rows to move: 2,000,000
You can use a CTE (common table expression) to move rows in a single SQL statement (more in the documentation):
with delta as (
    delete from one_table
    where ...
    returning *
)
insert into another_table
select * from delta;
But think carefully whether you actually need it. Like a_horse_with_no_name said in the comment, tuning your queries might be enough.
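For example, with a hypothetical created_at column used to identify the old rows, it could look like this:
with delta as (
    delete from one_table
    where created_at < now() - interval '1 year'   -- hypothetical cutoff
    returning *
)
insert into another_table
select * from delta;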
This is sample code for copying data between two tables of the same structure.
Here I used two different databases: one is my production DB and the other is my testing DB.
-- requires the dblink extension: CREATE EXTENSION IF NOT EXISTS dblink;
INSERT INTO "Table2"
select * from dblink('dbname=DB1 user=postgres password=root',   -- connect to the source database
                     'select "col1","Col2" from "Table1"')
    as t1(a character varying, b character varying);

How can I ensure synchronous DDL operations on a table that is being replaced?

I have multiple processes which are continually refreshing data in Redshift. They start a transaction, create a new table, COPY all the data from S3 into the new table, then drop the old table and rename the new table to the old table.
pseudocode:
start transaction;
create table foo_temp;
copy into foo_temp from S3;
drop table foo;
rename table foo_temp to foo;
commit;
I have several dozen tables that I update in this way. This works well but I would like to have multiple processes performing these table updates for redundancy purposes and to ensure that data is fairly fresh (different processes can update the data for different tables concurrently).
It works fine unless one process attempts to refresh a table that another process is working on. In that case the second process gets blocked by the first until it commits, and when it commits the second process gets the error:
ERROR: table 12345 dropped by concurrent transaction
Is there a simple way for me to guarantee that only one of my processes is refreshing a table so that the second process doesn't get into this situation?
I considered creating a special lock table for each of my real tables. The process would LOCK the special lock table before working on the companion real table. I think that will work but I would like to avoid creating a special lock table for each of my tables.
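For reference, a sketch of that lock-table idea (names are hypothetical; Redshift's LOCK takes an exclusive lock that is held until the transaction ends):
begin;
lock foo_lock;   -- hypothetical companion lock table for foo
-- ... refresh foo as in the pseudocode above ...
commit;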
You need to protect readers from seeing the drop. Do this by:
begin transaction
rename main table to old_main_table
rename tmp table to main table
commit
drop table old_main_table
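A sketch of that sequence in SQL, reusing the table names from the question's pseudocode:
begin;
alter table foo rename to foo_old;
alter table foo_temp rename to foo;
commit;

drop table foo_old;   -- after the commit, so readers always see some table named foo
The transcript below shows the behaviour with two concurrent connections.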
Conn #1                                                Conn #2
--------------                                         ------------------------------------------
> create table bar (id int,id2 int,id3 int);
CREATE TABLE
> begin;
BEGIN
                                                       > begin;
                                                       BEGIN
> alter table bar rename to bar2;
ALTER TABLE
                                                       > select * from bar;
> create table bar (id int,id2 int,id3 int,id4 int);
CREATE TABLE
> commit; drop table bar2;
COMMIT
                                                        id | id2 | id3
                                                       ----+-----+-----
                                                       (0 rows)
                                                       > commit;
                                                       COMMIT
DROP TABLE