I created a temp table in my PostgreSQL DB using the following query
SELECT * INTO TEMP TABLE tempdata FROM data WHERE id=2004;
Now I want to create a backup of this temp table tempdata.
So I use the following command-line invocation:
"C:\Program Files\PostgreSQL\9.0\bin\pg_dump.exe" -F t -a -U my_admin -t tempdata myDB >"e:\mydump.backup"
I get a message saying
pg_dump: No matching tables were found
Is it possible to create a dump of temp tables?
Am I doing it correctly?
P.S.: I would also want to restore the same. I don't want to use any extra components.
TIA.
I don't think you'll be able to use pg_dump for that temporary table. The problem is that temporary tables only exist within the session where they were created:
PostgreSQL instead requires each session to issue its own CREATE TEMPORARY TABLE command for each temporary table to be used. This allows different sessions to use the same temporary table name for different purposes, whereas the standard's approach constrains all instances of a given temporary table name to have the same table structure.
So you'd create the temporary table in one session but pg_dump would be using a different session that doesn't have your temporary table.
However, COPY should work:
COPY moves data between PostgreSQL tables and standard file-system files.
but you'll either be copying the data to the standard output or a file on the database server (which requires superuser access):
COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file must be accessible to the server and the name must be specified from the viewpoint of the server.
[...]
COPY naming a file is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.
So using COPY to dump the temporary table straight to a file might not be an option. You can COPY to standard output, though; how well that works depends on how you're accessing the database.
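For example, from within the same psql session that created the temporary table, psql's client-side \copy (which issues COPY ... TO STDOUT under the hood and writes the file on the client) avoids the superuser requirement entirely. A sketch, with the output path just an assumption:
SELECT * INTO TEMP TABLE tempdata FROM data WHERE id = 2004;
-- \copy must run in this same session, otherwise the temp table isn't visible.
\copy tempdata TO 'e:/tempdata.csv' WITH (FORMAT csv)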
You might have better luck if you didn't use temporary tables. You would, of course, have to manage unique table names to avoid conflicts with other sessions and you'd have to take care to ensure that your non-temporary temporary tables were dropped when you were done with them.
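A minimal sketch of that idea, reusing the names from the question (the _2004 suffix is just an assumption to keep the name unique per session):
CREATE TABLE tempdata_2004 AS SELECT * FROM data WHERE id = 2004;
An ordinary table survives across sessions, so pg_dump can now see it:
"C:\Program Files\PostgreSQL\9.0\bin\pg_dump.exe" -F t -a -U my_admin -t tempdata_2004 myDB >"e:\mydump.backup"
and you drop it yourself when you're done:
DROP TABLE tempdata_2004;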
Related
Is it common to get differing create statements from pg_dump vs. PgAdmin?
pg_dump creates a bare table, then uses separate commands later to create the indexes, constraints, etc., even when it would be possible to do so at creation time. Bulk loading is more efficient before those things exist, and this ordering can also avoid dependency-order problems.
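Schematically, the plain-format output of pg_dump for a single table looks roughly like this (object names are illustrative and the data rows are elided):
CREATE TABLE items (
    id   integer NOT NULL,
    name text
);

COPY items (id, name) FROM stdin;
...data rows...
\.

ALTER TABLE ONLY items
    ADD CONSTRAINT items_pkey PRIMARY KEY (id);
CREATE INDEX items_name_idx ON items (name);
Note that the primary key and index only appear after the COPY has loaded the data.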
I'm running pg_dump -F custom for database backups, with --exclude-table-data for a very large audit table. I'm then exporting that table's data in a separate dump file. It isn't referentially consistent with the main dump.
As part of my restore strategy, I'd like to be able to restore the main dump, bring my app online and continue using the database immediately, then bring the audit data back in behind it. The trouble is that new audit data starts coming in at sequence value 1, so the import of the old audit data fails as soon as it tries to insert over the top of the new rows.
Is it possible to include the setting of the sequence in the main dump without including the table data?
I have considered removing the primary key, but there are other tables I'd also like to do this with, and they definitely do need the PK.
I'm using PostgreSQL 13.
Instead of a sequence, which generates consecutive row numbers, use UUIDs and a timestamp, so you have unique values and the order of insert doesn't matter. UUIDs are a bit slower than ints.
Another possibility is to save the last audit ID in another table and then reset the sequence accordingly, see https://www.postgresql.org/docs/9.1/sql-altersequence.html
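A sketch of that second idea, with hypothetical names (audit table audit_log, its id sequence audit_log_id_seq, and a bookkeeping table audit_export_state holding the last exported id):
-- Run right after restoring the main dump, before the app comes online:
SELECT setval('audit_log_id_seq', (SELECT last_id FROM audit_export_state));
-- New audit rows now get ids above the old data, so importing the separately
-- dumped audit data later no longer collides with them.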
In a few other DB engines I can easily extract (part of) a table to a single file.
Then, if needed, I can 'mount' this file as a regular table. Querying it is obviously slow, but this is very useful.
I wonder if something similar is possible with psql?
I know the COPY FROM/TO functionality, but for bigger tables I need to wait ages for the records to be copied in from CSV.
Yes, you can use file_fdw to access (read) a CSV file on the database server as if it were a table.
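A minimal sketch, assuming a two-column CSV at a path readable by the server (all names and the path are illustrative):
CREATE EXTENSION IF NOT EXISTS file_fdw;
CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;
-- 'Mount' the file; the declared columns must match the CSV layout.
CREATE FOREIGN TABLE mounted_data (
    id   integer,
    name text
) SERVER csv_files
  OPTIONS (filename '/var/lib/postgresql/exports/data.csv', format 'csv');

-- The file can now be queried (read-only) like a table:
SELECT * FROM mounted_data WHERE id = 2004;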
I have a CSV file whose data is to be imported into a Postgres database. I did it using the import function in pgAdmin III, but the problem is that my CSV file changes frequently. How do I import the data from the CSV file so that it overwrites the data already in the database?
You can save WAL logging thanks to an optimization that applies when TRUNCATE and COPY run in the same transaction. The basic idea is to wipe the table with TRUNCATE and reimport the data with COPY. This doesn't need to be done manually with pgAdmin each time; it can be scripted with something like:
BEGIN;
-- The CSV file is 'mydata.csv' and the table is 'mydata'.
TRUNCATE mydata;
COPY mydata FROM 'mydata.csv' WITH (FORMAT csv);
COMMIT;
Note that COPY with a file name requires superuser access, because the file is read by the database server itself. The COPY command also takes various options, so you can adjust the handling of NULLs, headers and so on.
Finally, it should be noted that you ideally want both statements in the same transaction, which is why the example above wraps them in BEGIN/COMMIT: that is what the WAL-saving optimization relies on, and it also means other sessions never see the table half-loaded.
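If superuser access isn't available, a similar effect can be had with psql's client-side \copy, which reads the file from the client machine instead. A sketch, to be run from psql with the same assumed file and table names:
BEGIN;
TRUNCATE mydata;
-- \copy sends the file via COPY ... FROM STDIN, so no superuser is needed.
\copy mydata FROM 'mydata.csv' WITH (FORMAT csv)
COMMIT;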
I'm currently working on dumping one of our customer's database in a way that allows us to create new databases from this customer's basic structure, but without bringing along their private data.
So far, I've had success with pg_dump combined with the --exclude-table and --exclude-table-data options, which allowed me to bring only the data I'll effectively need for this task.
However, there are a few tables that mix rows referencing some of the data I left behind with other rows referencing data I had to bring, and this is causing me a few issues during the restore operation. Specifically, when the dump tries to enforce the FOREIGN KEY constraints on certain columns of these tables, it fails because some rows have keys with no matching data in the referenced table - because I chose not to bring that table's data!
I know I can log into the database after the restore is complete, delete any rows that reference data that no longer exists and create the constraint myself, but I'd like to automate the process as much as possible. Is there a way to tell pg_dump or pg_restore (or any other program) not to bring rows from table A if they reference table B and table B's data was excluded from the backup? Or to tell Postgres that I'd like that specific foreign key to be active before importing the table's data?
For reference, I'm working with PostgreSQL 9.2 on a RHEL 7 server.
What if you disable foreign key checking while you restore your database dump, and afterwards remove the orphaned rows from the referencing table?
By the way, I recommend fixing your database schema so that there is no chance of wrong tuples being inserted into your database.
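A sketch of that approach, with hypothetical names (table_a.b_id references table_b(id) through a constraint fk_a_b); here the "disable" is done by dropping and re-creating the constraint:
-- 1. Drop the constraint so the partial data can be loaded:
ALTER TABLE table_a DROP CONSTRAINT fk_a_b;

-- 2. Restore the dumped data with pg_restore / psql ...

-- 3. Remove rows whose referenced table_b data was excluded from the dump:
DELETE FROM table_a a
WHERE NOT EXISTS (SELECT 1 FROM table_b b WHERE b.id = a.b_id);

-- 4. Put the constraint back; it is validated against the cleaned-up data:
ALTER TABLE table_a
    ADD CONSTRAINT fk_a_b FOREIGN KEY (b_id) REFERENCES table_b (id);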