Dumping only a certain part of a table with pg_dump - postgresql

I want to dump a table of a PostgreSQL database (on Heroku), but I only want the rows matching a certain criterion, e.g.
created_at > '2016-01-01'.
Is that even possible using the pg_dump utility?

pg_dump cannot do that. You can use COPY to extract data from a single table with a condition:
COPY (SELECT * FROM tab WHERE created_at > '2016-01-01') TO '/data/dumpfile';
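Note that COPY ... TO 'file' writes the file on the database server and needs elevated privileges there, which you won't have on Heroku. A client-side alternative is psql's \copy, which runs the same query but writes the file on your machine. A minimal sketch, assuming your database connection string is exported as $DATABASE_URL:
psql "$DATABASE_URL" -c "\copy (SELECT * FROM tab WHERE created_at > '2016-01-01') TO 'dumpfile.csv' WITH CSV HEADER"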

Related

Is there a way to use pg_dump with a query filter on the table?

I would like to export a set of INSERT commands with pg_dump. I do not want to dump the full table though. I want to filter the table for specific rows with:
SELECT * FROM table where col=4
Is there a way to pass this information to pg_dump, something like:
pg_dump db_name --column-inserts --data-only --table='(SELECT * FROM table WHERE col=5)' > test.sql
The above is a pseudo example. It does not really work. But is there functionality like this, without having to create a secondary table with the data from that query?
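There is no pg_dump option that accepts an arbitrary query. A common workaround (the same trick used in the next question's solution) is to materialize the filtered rows into a throwaway table and dump only that table. A rough sketch, with my_table and my_table_subset standing in for your real names:
-- collect just the rows you want into a helper table
CREATE TABLE my_table_subset AS SELECT * FROM my_table WHERE col = 5;
pg_dump db_name --column-inserts --data-only --table=my_table_subset > test.sql
DROP TABLE my_table_subset;
Note that the generated statements will read INSERT INTO my_table_subset, so the table name in test.sql has to be adjusted before loading it into the original table.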

Is it possible to create an SQL script from PostgreSQL, but select only some columns?

I have to insert a lot of rows (over 100) from a PostgreSQL db into an Oracle db.
I know several solutions: writing to Oracle using oracle_fdw, or creating a CSV file and then using SQL*Loader, but I want a very fast solution, an SQL script.
I know it is possible to create an SQL script with this command
pg_dump --table=mytable --data-only --column-inserts mydb > data.sql
then importing into the Oracle db is easy.
I need something like this but with a difference: I want to export into data.sql only some columns, starting after a certain id. I know this is possible, but only in CSV format:
psql -c "copy(SELECT columns1,col2,col3... FROM mytable offset 3226 rows fetch first 100 rows only) to stdout" > dump.csv
Is something like this possible, but in SQL format?
Solution found.
A nice way would seem to be to create a view
CREATE view foo2 AS
SELECT col1,col2,col4 FROM mytable
offset 3226 rows fetch first 129 rows only;
you export the view and voilà... an empty file!
This is because the pg_dump man page says:
It will not dump the contents of views or materialized views, and the contents of foreign tables will only be dumped if the corresponding foreign server is specified with --include-foreign-data.
So instead we create an ordinary table as a temporary staging table
CREATE table foo2 AS
SELECT col1,col2,col4 FROM mytable
offset 3226 rows fetch first 129 rows only;
and export it to an SQL script with pg_dump (dumping the staging table foo2, not the original one)
pg_dump --table=foo2 --data-only --column-inserts mydb > mydata.sql
After importing into the other db (check the values first), we can drop the staging table
DROP TABLE foo2;

Copy table data from one database to another

I have two databases on the same server and need to copy data from a table in the first db to a table in the second. A few caveats:
Both tables already exist (i.e. I must not drop the 'copy-to' table first; I just need to add the data to the existing table)
The column names differ. So I need to specify exactly which columns to copy, and what their names are in the new table
After some digging I have only been able to find this:
pg_dump -t tablename dbname | psql otherdbname
But the above command doesn't take into account the two caveats I listed.
For a table t, with columns a and b in the source database, and x and y in the target:
psql -d sourcedb -c "copy t(a,b) to stdout" | psql -d targetdb -c "copy t(x,y) from stdin"
I'd use an ETL tool for this. There are free tools available; they can help you change column names and they are widely used and tested. Most tools allow external schedulers like the Windows Task Scheduler or cron to run transformations on whatever time schedule you need.
I personally have used Pentaho PDI for similar tasks in the past and it has always worked well for me. For your requirement I'd create a single transformation that first loads the table data from the source database, modifies the column names in a "Select Values" step and then inserts the values into the target table using the "truncate" option to remove the existing rows from the target table. If your table is too big to be re-filled each time, you'd need to figure out a delta load procedure.

dump subset of table

I want to dump a subset of a table of my postgres database. Is there a way to dump a SELECT statement without creating a view?
I need to copy a part of the table to another postgres database.
Use COPY to dump it directly to disk.
Example (from the fine manual) using a SELECT:
COPY
(SELECT * FROM country WHERE country_name LIKE 'A%')
TO '/usr1/proj/bray/sql/a_list_countries.copy';
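Since the goal is to move the subset into another PostgreSQL database, the same query can also be streamed straight into the target without an intermediate file, using the COPY-over-psql pipe shown in the earlier answer. A sketch, assuming a country table with a matching structure already exists in the target database:
psql -d sourcedb -c "COPY (SELECT * FROM country WHERE country_name LIKE 'A%') TO STDOUT" | psql -d targetdb -c "COPY country FROM STDIN"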

Postgres : pg_restore/pg_dump everything EXCEPT the table id's for a table

Currently I'm doing something like:
pg_dump -a -O -t my_table my_db > my_data_to_import.sql
What I really want is to be able to import/export just the data without causing conflicts with my autoid field or overwriting existing data.
Maybe I'm thinking about the whole process wrong?
You can use COPY with a column list to dump and restore just the data from one table. For example:
COPY my_table (column1, column2, ...) TO 'yourdumpfilepath';
COPY my_table (column1, column2, ...) FROM 'yourdumpfilepath';
OID is one of the system columns. For example, it is not included in SELECT * FROM my_table (you need to use SELECT oid, * FROM my_table). OID is not the same as an ordinary id column created along with the other columns in CREATE TABLE. Not every table has an OID column; check the default_with_oids option. If it's set to off, then you probably don't have an OID column in your table, but even so, you can still create a table with OIDs using the WITH OIDS option. It's recommended not to use OID as a table's key column (that's why default_with_oids defaults to off since PostgreSQL 8.1).
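For the "everything except the id" case, the column list simply leaves the id column out; on restore the omitted column falls back to its default, so a serial/sequence-backed id gets fresh values instead of clashing with existing rows. A minimal sketch, with made-up table and column names (my_table(id serial, name text, created_at timestamptz)):
COPY my_table (name, created_at) TO '/tmp/my_table_data.copy';
-- on the target database: id is omitted, so its default (the sequence) fills it in
COPY my_table (name, created_at) FROM '/tmp/my_table_data.copy';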
pg_dump --column-inserts -t TABLENAME DBNAME > fc.sql
cat fc.sql | sed -e 's/VALUES [(][0-9][0-9]*,/VALUES (/g' -e 's/[(]id,/(/g' > fce.sql
psql -f fce.sql DBNAME
This dumps the table as column-list INSERT statements into fc.sql, then uses sed to strip the id column and the value associated with it before loading the result with psql.