Postgres select into temp table with function - postgresql

In Postgres, you can select into a temporary table. Is it possible to select into a temporary table using a function, such as:
Select * into temporary table myTempTable from someFunctionThatReturnsATable(1);
Thanks!
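For reference, both forms below are accepted by Postgres; a minimal sketch, assuming someFunctionThatReturnsATable is a set-returning function:
SELECT * INTO TEMPORARY TABLE myTempTable FROM someFunctionThatReturnsATable(1);
-- or, equivalently:
CREATE TEMP TABLE myTempTable AS SELECT * FROM someFunctionThatReturnsATable(1);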

Related

Postgresql database backup within tables

Is there any way to create a backup of particular row within a table in postgresql?
COPY (SELECT * FROM mytable WHERE id = 42)
TO STDOUT (FORMAT 'csv');
You can also just backup data to another table with:
create table backup as select * from mytable where id=42;
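To restore that CSV later, psql's client-side \copy is the simplest route (the file name here is illustrative):
\copy mytable FROM 'row_backup.csv' WITH (FORMAT 'csv')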

Is the output of sqlite in ".mode insert" correct?

Consider the table I create in an SQLite database with CREATE TABLE tbl(x);, which has the following data: INSERT INTO tbl VALUES(1); INSERT INTO tbl VALUES(2);. Now I wish to create a SQL file of this schema and data that I can import into PostgreSQL, and I do the following:
.mode insert
.output "tbl.sql"
.schema tbl
select * from tbl order by x;
.output stdout
And the output is:
CREATE TABLE tbl(x);
INSERT INTO table VALUES(1);
INSERT INTO table VALUES(2);
Shouldn't the output of the insert statements be INSERT INTO tbl VALUES(1); INSERT INTO tbl VALUES(2); ?
This is not really a problem because I can easily do a find/replace to fix it, but that might potentially introduce unforeseen problems (like changing data inside the insert statements).
From the fine SQLite manual:
When specifying insert mode, you have to give an extra argument which is the name of the table to be inserted into. For example:
sqlite> .mode insert new_table
sqlite> select * from tbl1;
INSERT INTO "new_table" VALUES('hello',10);
INSERT INTO "new_table" VALUES('goodbye',20);
sqlite>
So saying just .mode insert leaves SQLite to use the default table name, which is apparently table. You should be saying:
.mode insert tbl
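With that change, the same session produces statements that target the right table (depending on your SQLite version, the table name may or may not be quoted):
.mode insert tbl
.output "tbl.sql"
.schema tbl
select * from tbl order by x;
.output stdout
which yields:
CREATE TABLE tbl(x);
INSERT INTO tbl VALUES(1);
INSERT INTO tbl VALUES(2);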

How to change table schema after created in Redshift?

Postgres supports this operation as below:
ALTER TABLE name
SET SCHEMA new_schema
The operation won't work in Redshift. Is there any way to do that?
I tried to update pg_class to set relnamespace (the schema id) for the table, which requires a superuser account with usecatupd set to true in the pg_shadow table. But I got a permission denied error. The only account that can modify pg system tables is rdsdb.
server=# select * from pg_user;
usename | usesysid | usecreatedb | usesuper | usecatupd | passwd | valuntil | useconfig
------------+----------+-------------+----------+-----------+----------+----------+----------------------------------
rdsdb | 1 | t | t | t | ******** | |
myuser | 100 | t | t | f | ******** | |
So Redshift really grants no permission for that?
Quickest way to do this now is as follows:
CREATE TABLE my_new_schema.my_table (LIKE my_old_schema.my_table);
ALTER TABLE my_new_schema.my_table APPEND FROM my_old_schema.my_table;
DROP TABLE my_old_schema.my_table;
The data for my_old_schema.my_table is simply remapped to belong to my_new_schema.my_table in this case. Much faster than doing an INSERT INTO.
Important note: "After data is successfully appended to the target table, the source table is empty" (from AWS docs on ALTER TABLE APPEND), so be careful to run the ALTER statement only once!
Note that you may have to drop and recreate any views that depend on my_old_schema.my_table. UPDATE: If you do this regularly you should create your views using WITH NO SCHEMA BINDING and they will continue to point at the correct table without having to be recreated.
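A late-binding view looks like this (names are illustrative; the table reference must be schema-qualified):
CREATE VIEW my_views.my_table_view AS
SELECT * FROM my_new_schema.my_table
WITH NO SCHEMA BINDING;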
The best way to do that is to create a new table with the desired schema, and after that do an INSERT ... SELECT with the data from the old table.
Then drop your current table and rename the new one with ALTER TABLE.
You can create a new table with
CREATE TABLE schema1.tableName( LIKE schema2.tableName INCLUDING DEFAULTS );
and then copy the contents of the table from one schema to the other using an INSERT INTO statement,
followed by DROP TABLE to delete the old table.
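Concretely, with the same illustrative names:
INSERT INTO schema1.tableName
SELECT * FROM schema2.tableName;
DROP TABLE schema2.tableName;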
This is how I do it:
-- Drop if you have already one backup
DROP TABLE IF EXISTS TABLE_NAME_BKP CASCADE;
-- Create two back up, one to work and will be deleted at the end, and one more is real backup
SELECT * INTO TABLE_NAME_BKP FROM TABLE_NAME;
SELECT * INTO TABLE_NAME_4_WORK FROM TABLE_NAME;
-- We could instead do the ALTER below, but renaming keeps the primary key constraint name, so you can't create the new table with the same constraint name:
-- ALTER TABLE TABLE_NAME RENAME TO TABLE_NAME_4_WORK;
-- Ensure you have copied
SELECT COUNT(*) FROM TABLE_NAME;
SELECT COUNT(*) FROM TABLE_NAME_4_WORK;
-- Create the new table schema
DROP TABLE IF EXISTS TABLE_NAME CASCADE;
CREATE TABLE TABLE_NAME (
ID varchar(36) NOT NULL,
OLD_COLUMN varchar(36),
NEW_COLUMN_1 varchar(36)
)
compound sortkey (ID, OLD_COLUMN, NEW_COLUMN_1);
ALTER TABLE TABLE_NAME
ADD CONSTRAINT PK__TAB_NAME__ID
PRIMARY KEY (id);
-- copy data from old to new
INSERT INTO TABLE_NAME (
id,
OLD_COLUMN)
(SELECT
id,
OLD_COLUMN FROM TABLE_NAME_4_WORK);
-- Drop the work table TABLE_NAME_4_WORK
DROP TABLE TABLE_NAME_4_WORK;
-- COMPARE BKP AND NEW TABLE ROWS, AND KEEP BKP TABLE FOR SOMETIME.
SELECT COUNT(*) FROM TABLE_NAME_BKP;
SELECT COUNT(*) FROM TABLE_NAME;

Copy Postgres table while maintaining primary key autoincrement

I am trying to copy a table with this postgres command; however, the primary key autoincrement feature does not copy over. Is there any quick and simple way to accomplish this? Thanks!
CREATE TABLE table2 AS TABLE table;
Here's what I'd do:
BEGIN;
LOCK TABLE oldtable;
CREATE TABLE newtable (LIKE oldtable INCLUDING ALL);
INSERT INTO newtable SELECT * FROM oldtable;
SELECT setval('the_seq_name', (SELECT max(id) FROM oldtable)+1);
COMMIT;
... though this is a moderately unusual thing to need to do and I'd be interested in what problem you're trying to solve.
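If you don't know the sequence name, pg_get_serial_sequence can look it up; a sketch, assuming the serial column is named id:
SELECT setval(pg_get_serial_sequence('oldtable', 'id'),
       (SELECT COALESCE(max(id), 0) + 1 FROM oldtable),
       false);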

Flip flopping data tables in Postgres

I have a table of several million records which I am running a query against and inserting the results into another table which clients will query. This process takes about 20 seconds.
How can I run this query, building this new table without impacting any of the clients that might be running queries against the target table?
For instance, I'm running:
BEGIN;
DROP TABLE target_table;
SELECT blah, blahX, blahY
INTO target_table
FROM source_table
GROUP BY blahX, blahY
COMMIT;
Which is then blocking queries to:
SELECT SUM(blah)
FROM target_table
WHERE blahX > x
In the days of working with some SQL Server DBA's I recall them creating temporary tables, and then flipping these in over the current table. Is this doable/practical in Postgres?
What you want here is to minimize the lock time, which of course is not going to work if your transaction includes a query that takes a while.
In this case, I assume you're in fact refreshing that target_table, which contains the positions of the "blah" objects, every time you run your script. Is that correct?
BEGIN;
CREATE TEMP TABLE temptable AS
SELECT blah, blahX, blahY
FROM source_table
GROUP BY blahX, blahY
COMMIT;
BEGIN;
TRUNCATE TABLE target_table;
INSERT INTO target_table(blah,blahX,blahY)
SELECT blah,blahX,blahY FROM temptable;
DROP TABLE temptable;
COMMIT;
As mentioned in the comments, it will be faster to drop the indexes before truncating and create them anew just after loading the data, to avoid the unneeded index maintenance. A sketch of that pattern, with an illustrative index name:
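BEGIN;
DROP INDEX IF EXISTS target_table_blahx_idx;
TRUNCATE TABLE target_table;
INSERT INTO target_table(blah, blahX, blahY)
SELECT blah, blahX, blahY FROM temptable;
CREATE INDEX target_table_blahx_idx ON target_table (blahX);
COMMIT;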
For the full details of what is and is not possible with PostgreSQL in that regard:
http://postgresql.1045698.n5.nabble.com/ALTER-TABLE-REPLACE-WITH-td3305036i40.html
There's ALTER TABLE ... RENAME TO ...:
ALTER TABLE name
RENAME TO new_name
Perhaps you could select into an intermediate table and then drop target_table and rename the intermediate table to target_table.
I have no idea how this would interact with any queries that may be running against target_table when you try to do the rename.
You can create a table, drop a table, and rename a table in every version of SQL I've ever used.
BEGIN;
SELECT blah, blahX, blahY
INTO new_table
FROM source_table
GROUP BY blahX, blahY;
DROP TABLE target_table;
ALTER TABLE new_table RENAME TO target_table;
COMMIT;
I'm not sure off the top of my head whether you could use a temporary table for this in PostgreSQL. PostgreSQL creates temp tables in a special schema; you don't get to pick the schema. But you might be able to create it as a temporary table, drop the existing table, and move it with SET SCHEMA.
At some point, any of these will require a table lock. (Duh.) You might be able to speed things up a lot by putting the swappable table on an SSD.
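For the flip-flop pattern the question mentions, one variant keeps the previous table around as a fallback (a sketch; the _prev suffix is illustrative):
BEGIN;
SELECT blah, blahX, blahY
INTO new_table
FROM source_table
GROUP BY blahX, blahY;
ALTER TABLE target_table RENAME TO target_table_prev;
ALTER TABLE new_table RENAME TO target_table;
COMMIT;
-- Once clients are confirmed happy with the new data:
DROP TABLE target_table_prev;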