Is it possible to ignore specific columns when dumping a PostgreSQL database using pg_dump? I have a table with columns X and Y. Is it possible to dump only column X with pg_dump?
Not directly; however, you could
CREATE TABLE b AS SELECT x FROM A;
and then pg_dump that.
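A minimal end-to-end sketch, assuming the database is called mydb (the staging table b and the output file name are illustrative):

psql -d mydb -c "CREATE TABLE b AS SELECT x FROM a;"
pg_dump --table=b mydb > only_x.sql
psql -d mydb -c "DROP TABLE b;"

The resulting file recreates b with just column x; on restore you can rename it back to the original table name if needed.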
I am looking to create a UNION ALL on tables with the same names in different schemas.
Is there a way to do this in Redshift other than the brute-force method of naming the individual tables and columns in the UNION ALL statement?
example:
schema z table a, schema y table a, schema x table a
schema z table b, schema y table b
schema y table c, schema x table c
table a columns: d, e, f, g, h
table b columns: d, e, g, h, i
table c columns: d, e, f
I'd go with writing a Lambda function (or just code on your computer) to inspect the system tables for the existence of the tables in question and then find the columns of each table. This code would then compose the SQL.
You could also do this in a stored procedure, but that is likely more work to get running. If many people need to be able to run this from inside the database, it is worth the effort; otherwise I'd keep it simple.
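For example, the inspection could start from Redshift's SVV_COLUMNS system view; a minimal sketch using the schema and table names from the question:

SELECT table_schema, table_name, column_name, ordinal_position
FROM svv_columns
WHERE table_schema IN ('z', 'y', 'x')
  AND table_name IN ('a', 'b', 'c')
ORDER BY table_name, table_schema, ordinal_position;

From that result the code would emit one SELECT per existing schema/table pair and glue them together, e.g. for table a:

SELECT d, e, f, g, h FROM z.a
UNION ALL SELECT d, e, f, g, h FROM y.a
UNION ALL SELECT d, e, f, g, h FROM x.a;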
I've created a Heroku Postgres database with one of the columns defined as db.Integer, i.e. storing integers. However, I realize that I should actually store floating-point numbers instead, i.e. db.Float.
Can I change the type of a PostgreSQL database column after it has been created with data?
Yes, you can. Use the following statement to do that:
ALTER TABLE $TABLE_NAME ALTER COLUMN $COLUMN_NAME TYPE <float type, e.g. double precision>;
(Note that PostgreSQL's float(p) only accepts a precision of 1 to 53 bits, so float(64) is invalid; double precision is equivalent to float(53).)
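For example, with a hypothetical table measurements that has an integer column reading:

ALTER TABLE measurements ALTER COLUMN reading TYPE double precision;

integer converts to double precision implicitly, so no USING clause is needed here; for type changes without an implicit cast, add one, e.g. ALTER COLUMN reading TYPE numeric(10,2) USING reading::numeric(10,2).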
We have a table with columns X, Y, and Z. We ran pg_dump to get a backup file, and now we need to exclude column Y while restoring; if pg_dump itself can exclude column Y, that would help too. I'd appreciate it if someone could suggest an appropriate solution for this!
Thanks in advance.
I don't think you can use pg_dump directly to do that. But you could use it to dump and restore the table as it is, and afterwards remove the column with
ALTER TABLE ... DROP COLUMN ...
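For example, if the restored table is called mytable (the name is hypothetical; the question does not give one):

ALTER TABLE mytable DROP COLUMN y;

Columns X and Z are left untouched; PostgreSQL discards the dropped column's data.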
pg_dump --table=export_view --data-only --column-inserts mydb > export_view.sql
pg_dump (PostgreSQL) 10.7 (Ubuntu 10.7-1.pgdg18.04+1)
Export specific rows from a PostgreSQL table as INSERT SQL script and the PostgreSQL documentation (https://www.postgresql.org/docs/10/app-pgdump.html) suggest it is possible to pg_dump from a view with the --table flag. If I export from the table directly, I get the expected result (i.e., data is exported). If I select from the view in psql, I get the expected result. However, whether I create a view or a materialized view and then try to pg_dump it, I get only the normal pg_dump headers and no data. A commenter (https://stackoverflow.com/users/2036135/poshest) also appears to have faced the same issue in the above SO question, with no solution given.
If I CREATE TABLE blah AS SELECT x, y, z FROM MYTABLE, then I can export fine. If I CREATE VIEW blah AS SELECT x, y, z FROM MYTABLE, then the export is empty.
What am I doing wrong?
As #bugmenot points out, version 13 (and above?), the current version at the time this answer was written, indeed clarifies in its documentation what gets dumped:
As well as tables, this option can be used to dump the definition of matching views, materialized views, foreign tables, and sequences. It will not dump the contents of views or materialized views, and the contents of foreign tables will only be dumped if the corresponding foreign server is specified with --include-foreign-data.
(emphasis added).
So the answer (to myself) is: "You are not doing anything wrong, except that you incorrectly interpreted the documentation for Postgres <=12. What you want to do is not possible."
Views do not store data; they provide a dynamic view onto it. When you include views in your dump, you will only get the view definition.
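A workaround consistent with what the question already observed: materialize the view into a plain table, dump that, and drop it afterwards (the staging name export_table is illustrative):

psql -d mydb -c "CREATE TABLE export_table AS SELECT * FROM export_view;"
pg_dump --table=export_table --data-only --column-inserts mydb > export_view.sql
psql -d mydb -c "DROP TABLE export_table;"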
I have two databases on the same server and need to copy data from a table in the first db to a table in the second. A few caveats:
Both tables already exist (i.e., I must not drop the 'copy-to' table first; I need to just add the data to the existing table).
The column names differ, so I need to specify exactly which columns to copy and what their names are in the new table.
After some digging I have only been able to find this:
pg_dump -t tablename dbname | psql otherdbname
But the above command doesn't take into account the two caveats I listed.
For a table t, with columns a and b in the source database, and x and y in the target:
psql -d sourcedb -c "copy t(a,b) to stdout" | psql -d targetdb -c "copy t(x,y) from stdin"
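This works because COPY t(a,b) TO STDOUT streams the selected columns as text and COPY t(x,y) FROM STDIN loads them positionally: a feeds x and b feeds y. Only the order and type compatibility of the two column lists matter, not the column names.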
I'd use an ETL tool for this. There are free tools available; they can help you change column names, and they are widely used and tested. Most tools allow external schedulers like the Windows Task Scheduler or cron to run transformations on whatever schedule you need.
I personally have used Pentaho PDI for similar tasks in the past, and it has always worked well for me. For your requirement I'd create a single transformation that first loads the table data from the source database, renames the columns in a "Select Values" step, and then inserts the values into the target table using the "truncate" option to remove the existing rows from the target table. If your table is too big to be re-filled each time, you'd need to figure out a delta-load procedure.