How can I generate the DDL of a table programmatically on Postgresql? Is there a system query or command to do it? Googling the issue returned no pointers.
Use pg_dump with these options:
pg_dump -U user_name -h host database -s -t table_or_view_names -f table_or_view_names.sql
Description:
-s or --schema-only : Dump only the DDL / object definitions (schema), without data.
-t or --table : Dump only the tables (or views or sequences) matching the given name.
Examples:
-- dump the DDL of each table elon built.
$ pg_dump -U elon -h localhost -s -t spacex -t tesla -t solarcity -t boring > companies.sql
Sorry if this is off topic. Just wanted to help anyone who googles "psql dump ddl" and ends up in this thread.
You can use the pg_dump command to dump the contents of the database (both schema and data). The --schema-only switch will dump only the DDL for your table(s).
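For example, a minimal invocation might look like this (database and table names are placeholders, not from the question):
pg_dump --schema-only --table=mytable mydb > mytable_ddl.sql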
Why would shelling out to psql not count as "programmatically?" It'll dump the entire schema very nicely.
Anyhow, you can get data types (and much more) from the information_schema (8.4 docs referenced here, but this is not a new feature):
=# select column_name, data_type from information_schema.columns
-# where table_name = 'config';
column_name | data_type
--------------------+-----------
id | integer
default_printer_id | integer
master_host_enable | boolean
(3 rows)
The answer is to check the source code for pg_dump and follow the switches it uses to generate the DDL. Somewhere inside the code there's a number of queries used to retrieve the metadata used to generate the DDL.
Here is a good article on how to get the meta information from information schema,
http://www.alberton.info/postgresql_meta_info.html.
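Building on that, here is a rough sketch that assembles a minimal CREATE TABLE statement from information_schema.columns. It is not a replacement for pg_dump -s: it ignores type lengths, defaults, constraints and indexes. The table name 'config' is taken from the example above.
-- very rough DDL reconstruction from the information schema
SELECT 'CREATE TABLE ' || table_name || ' (' ||
       string_agg(column_name || ' ' || data_type ||
                  CASE WHEN is_nullable = 'NO' THEN ' NOT NULL' ELSE '' END,
                  ', ' ORDER BY ordinal_position) || ');'
FROM information_schema.columns
WHERE table_name = 'config'
GROUP BY table_name;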
I saved four functions that partially mock up the behaviour of pg_dump -s, based on the \d+ metacommand. The usage would be something like:
\pset format unaligned
select get_ddl_t(schemaname,tablename) as "--" from pg_tables where tableowner <> 'postgres';
Of course you have to create the functions first.
Working sample here at rextester
Related
I am dumping a large Postgres table like this:
pg_dump -h myserver -U mt_user --table=mytable -Fc -Z 9 --file mytable.dump mydb
The above creates a mytable.dump file. I now want to restore this dump into a new table called mytable_restored.
How can I use the pg_restore command to do this?
There is no pg_restore option to rename tables.
I would do it like this:
-- create a table that is defined like the original
CREATE TABLE mytable_restored (LIKE mytable INCLUDING ALL);
-- copy the table contents
COPY mytable TO '/tmp/mytable.dmp';
COPY mytable_restored FROM '/tmp/mytable.dmp';
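If you don't have filesystem access on the database server (COPY TO/FROM a file runs server-side and needs the appropriate privileges), psql's client-side \copy does the same job; a sketch with the same names:
psql -d mydb -c "\copy mytable TO '/tmp/mytable.dmp'"
psql -d mydb -c "\copy mytable_restored FROM '/tmp/mytable.dmp'"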
You can pipe your pg_dump output through sed so that, wherever pg_dump writes the table name, sed replaces it with the new one:
pg_dump -t mytable mydb | sed 's/mytable/mytable_restored/g' > mytable_restored.dump
Be aware that sed will also replace the string mytable anywhere it appears inside your data, so only do this if that name cannot occur in the rows.
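Since the output of this pipeline is plain SQL (not a custom-format archive), you would load it with psql rather than pg_restore:
psql -d mydb -f mytable_restored.dump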
My answer may not be an exact answer to the question, but for my case (a directory-format dump with many tables) I improved on Laurenz's answer, and it may be useful.
In my case I only needed to restore two tables to another location.
First I created the tables in the new location.
Then I found the table data entries in the toc.dat file:
pg_restore -Fd <dir_name> -l | grep <table_name>
This will probably return more than a single line; look for the "TABLE DATA" line. It points to a [file_number].dat.gz file, which you then use in a COPY command in psql like the one below.
COPY <table_name> from PROGRAM 'zcat /<path>/<file_number>.dat.gz'
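Put together with hypothetical names (mydump_dir, mytable, database mydb, and a made-up TOC entry 4242), the whole sequence looks roughly like this:
# list the archive's table-of-contents entries for the table and note the "TABLE DATA" line
pg_restore -Fd mydump_dir -l | grep mytable
# suppose that line points at 4242.dat.gz; stream it into the pre-created table
psql -d mydb -c "COPY mytable FROM PROGRAM 'zcat /path/to/mydump_dir/4242.dat.gz';"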
In my database I have master tables whose names start with m_*, plus other tables. I want to take a backup of the tables with the following scenario:
Back up schema + data for the master tables, i.e. tables whose names start with m_*.
Back up only the schema structure for the rest of the tables.
I read the following command somewhere:
pg_dump -U "postgres" -h "local" -p "5432"
-d dbName -F c -b -v -f c:\uti\backup.dmp
--exclude-table-data '*.table_name_pattern_*'
--exclude-table-data 'some_schema.another_*_pattern_*'
But I have so many tables and I find it tedious to put each table name in it. Any tidy way to get around it?
Using Linux:
File foo.sh (adjust filtering conditions):
psql <connection and other parameters> -c "copy (select format('--exclude-table-data=%s.%s', schemaname, tablename) from pg_tables where schemaname in ('public', 'foo') and tablename<>'t') to stdout;"
Command (note the backticks):
pg_dump <connection and other parameters> `./foo.sh`
Note that this is a very flexible approach.
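Adapted to the scenario in the question (keep data only for tables starting with m_, schema-only for everything else), the query inside foo.sh might become the following; using the public schema is an assumption:
psql <connection and other parameters> -c "copy (select format('--exclude-table-data=%s.%s', schemaname, tablename) from pg_tables where schemaname = 'public' and tablename not like 'm\_%') to stdout;"
The generated --exclude-table-data switches are then passed to pg_dump through the backticks exactly as above.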
I am having a problem upgrading my Postgresql 9.2 database to 9.4. The problem I have is that I have these tables that are incompatible for an upgrade:
public.pg_ts_dict.dict_init
public.pg_ts_dict.dict_lexize
public.pg_ts_parser.prs_start
public.pg_ts_parser.prs_nexttoken
public.pg_ts_parser.prs_end
public.pg_ts_parser.prs_headline
public.pg_ts_parser.prs_lextype
Postgresql says that I should delete these tables and the upgrade should work. I am currently trying to figure out how to do this.
Postgresql has pg_catalog.pg_ts_parser and pg_catalog.pg_ts_dict. These are system catalogs and can absolutely not be removed, nor do I want to. I want to remove the public.pg_ts_* tables.
More specifically, I want to dump the tables, upgrade the database, and then restore both public.pg_ts_parser and public.pg_ts_dict. However, every time I try to dump or drop the tables, it defaults to the system catalog, which I don't want. How can I specify these exact tables? Thanks in advance for any help.
-------EDIT------
Here are the commands I am running to dump the tables.
pg_dump -Fc -t public.pg_ts_dict -t public.pg_ts_parser > file.dump
pg_dump: No matching tables were found
Here is a variation
pg_dump -Fc -t pg_ts_dict -t pg_ts_parser > file.dump
The second variation dumps the system catalogs pg_ts_dict and pg_ts_parser, not the public versions. It is also very confusing because the contents of file.dump contain these lines of code mixed in with # signs and ^ signs.
DROP TABLE pg_catalog.pg_ts_dict;
^#^#^#pg_catalog^#^#^#^#^#^#^H^#^#^#postgres^#^D^#^#^#true^A^A^#^#^#^C^#^#^#^#^#^#^#^#^# ^G^#^#^#^#^#^#^#^#^A^#^#^#0^#^A^#^#^#0^#
^#^#^#pg_ts_dict^#^C^#^#^#ACL^#^A^#^#^#^#<86>^#^#^#REVOKE ALL ON TABLE pg_ts_dict FROM PUBLIC;
REVOKE ALL ON TABLE pg_ts_dict FROM postgres;
GRANT SELECT ON TABLE pg_ts_dict TO PUBLIC;
^#^#^#^#^#^A^A^#^#^#^#
^#^#^#pg_catalog^A^A^#^#^#^#^H^#^#^#postgres^#^E^#^#^#false^#^B^#^#^#54^A^A^#^#^#^C^#^#^#^#^#^#^#^#^#7^#^#^#^#^#^#^#^#^#^D^#^#^#1259^#^D^#^#^#3601^#^L^#^#^#pg_ts_parser^#^E^#^#^#TABLE^#^B^#^#^#^#ö^#^#^#CREATE TABLE pg_ts_parser (
prsname name NOT NULL,
prsnamespace oid NOT NULL,
prsstart regproc NOT NULL,
prstoken regproc NOT NULL,
prsend regproc NOT NULL,
prsheadline regproc NOT NULL,
prslextype regproc NOT NULL
Not sure what to make of this.
Your call should actually work as is:
pg_dump -Fc -t public.pg_ts_dict -t public.pg_ts_parser > file.dump
You can use a wildcard to include all tables starting with pg_ts_.
pg_dump -Fc -t 'public.pg_ts_*' > file.dump
On the Linux shell, you may need the extra quotes. (Related question on dba.SE.) Remove the quotes in Windows.
To make it abundantly clear, you could exclude the same tables from pg_catalog explicitly. Normally this is not necessary, but something seems to be abnormal in your case.
pg_dump -Fc -t 'public.pg_ts_*' -T 'pg_catalog.pg_ts_*' > file.dump
The documentation:
Also, you must write something like -t sch.tab to select a table in a
particular schema, rather than the old locution of -n sch -t tab.
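To double-check what the wildcard will match (and that the leftover tables really are in the public schema), you can query pg_tables first; this is just a verification step, not part of the dump:
select schemaname, tablename
from pg_tables
where tablename like 'pg\_ts\_%';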
I have a Postgres RDS instance for one of my apps.
I need to copy 3 tables from it to a similar clone of the database.
I see mytable_id_seq tables also, which now I know are called sequences in postgres terminology.
When I create a dump of those three tables and restore them, do I have to do anything with the _id_seq sequences?
Do I have to restore them too, for the dump data to work as it did in the original table?
When you restore the entire database from a dump, it contains CREATE SEQUENCE statements by default. These statements initialize the sequences to the proper state. But if you make a partial dump with only selected tables, you must set the sequences manually.
Assuming that your table's name is "clip", you can check the current value using this query:
SELECT last_value FROM clip_id_seq
And if you want to update the sequence after restore, you can do it with this simple query:
SELECT setval('clip_id_seq', (SELECT MAX(id) FROM clip));
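If you'd rather not hard-code the sequence name, pg_get_serial_sequence can look it up from the table and column (names "clip" and "id" as in the example above); passing false as the third argument of setval makes the next nextval return exactly the given value, which also covers an empty table:
SELECT setval(pg_get_serial_sequence('clip', 'id'),
              COALESCE(MAX(id) + 1, 1), false)
FROM clip;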
pg_dump -d database_name -t mg_cnd -F c > database_backup.sql
pg_restore -U database_user --data-only -d database_name -t mg_cnd -F c <file_location>
-d = database name
-t = table name
-F = Format
c = custom format
-U = User
--data-only = transfer only table data
pg_dump documentation
pg_restore documentation
I have taken a dump of a database named temp1 using the following command:
$ pg_dump -i -h localhost -U postgres -F c -b -v -f pub.backup temp1
Now I want to restore the dump into a different database called "db_temp", but I want all the tables to be created in a schema called "temp_schema" inside "db_temp" (not the default schema they were in in the temp1 database).
Is there any way to do this using pg_restore command?
Any other method would also be appreciated!
A quick and dirty way:
1) rename default schema:
alter schema public rename to public_save;
2) create new schema as default schema:
create schema public;
3) restore data
pg_restore -d db_temp pub.backup [and whatever other options]
4) rename schemas according to need:
alter schema public rename to temp_schema;
alter schema public_save rename to public;
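Put together as shell commands (pub.backup and db_temp come from the question; adjust connection options as needed, this is just a sketch of the steps above):
psql -d db_temp -c 'ALTER SCHEMA public RENAME TO public_save;'
psql -d db_temp -c 'CREATE SCHEMA public;'
pg_restore -d db_temp pub.backup
psql -d db_temp -c 'ALTER SCHEMA public RENAME TO temp_schema;'
psql -d db_temp -c 'ALTER SCHEMA public_save RENAME TO public;'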
There is a simple solution:
Create your backup dump in plain SQL format (format "p" using the parameter --format=p or -F p)
Edit your pub.backup.sql dump with your favorite editor and add the following two lines at the top of your file:
create schema myschema;
SET search_path TO myschema;
Now you can restore your backup dump with the command
psql -d db_temp -f pub.backup.sql
The set search_path to <schema> command will set myschema as the default, so that new tables and other objects are created in this schema, independently of the "default" schema where they lived before.
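If you prefer not to edit the dump by hand, you can prepend the same two lines on the fly and pipe straight into psql (schema name myschema as in the lines above, database db_temp from the question):
{ echo 'CREATE SCHEMA myschema;'; echo 'SET search_path TO myschema;'; cat pub.backup.sql; } | psql -d db_temp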
There's no way in pg_restore itself. What you can do is use pg_restore to generate SQL output, and then send this through for example a sed script to change it. You need to be careful about how you write that sed script though, so it doesn't match and change things inside your data.
Probably the easiest method would be to simply rename the schema after restore, ie with the following SQL:
ALTER SCHEMA my_schema RENAME TO temp_schema
I believe that because you're using the compressed archive format for the output of pg_dump, you can't alter it before restoring. An option would be to use the plain output format and do a search and replace on the schema name, but that would be risky and could corrupt data if you were not careful.
If you only have a few tables then you can restore one table at a time; pg_restore accepts -d database when you specify -t tablename. Of course, you'll have to set up the schema before restoring the tables, and then sort out the indexes and constraints when you're done restoring the tables.
Alternatively, set up another server on a different port, restore using the new PostgreSQL server, rename the schema, dump it, and restore into your original database. This is a bit of a kludge of course but it will get the job done.
If you're adventurous you might be able to change the database name in the dump file using a hex editor. I think it is only mentioned in one place in the dump and as long as the new and old database names are the same it should work. YMMV, don't do anything like this in a production environment, don't blame me if this blows up and levels your home town, and all the rest of the usual disclaimers.
Rename the schema in a temporary database.
Export the schema:
pg_dump --schema-only --schema=prod > prod.sql
Create a new database. Restore the export:
psql -f prod.sql
ALTER SCHEMA prod RENAME TO somethingelse;
pg_dump --schema-only --schema=somethingelse > somethingelse.sql
(delete the database)
For the data you can just modify the set search_path at the top.
As noted, there's no direct support in pg_dump, psql or pg_restore to change the schema name during a dump/restore process. But it's fairly straightforward to export using "plain" format then modify the .sql file. This Bash script does the basics:
rename_schema () {
# Change search path so by default everything will go into the specified schema
perl -pi -e "s/SET search_path = $2, pg_catalog/SET search_path = $3, pg_catalog, $2;/" "$1"
# Change 'ALTER FUNCTION foo.' to 'ALTER FUNCTION bar.'
perl -pi -e 's/^([A-Z]+ [A-Z]+) '$2'\./$1 '$3'./' "$1"
# Change the final GRANT ALL ON SCHEMA foo TO PUBLIC
perl -pi -e 's/SCHEMA '$2'/SCHEMA '$3'/' "$1"
}
Usage:
pg_dump --format plain --schema=foo --file dump.sql MYDB
rename_schema dump.sql foo bar
psql -d MYDB -c 'CREATE SCHEMA bar;'
psql -d MYDB -f dump.sql
The question is pretty old, but maybe this can help someone.
Stream the output of pg_restore to sed and replace the schema name in order to import the dump into a different schema.
Something like:
pg_restore ${dumpfile} | \
sed -e "s/OWNER TO ${source_owner}/OWNER TO ${target_owner}/" \
-e "s/${source_schema}/${target_schema}/" | \
psql -h ${pgserver} -d ${dbname} -U ${pguser}
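With the names from this thread filled in (pub.backup, db_temp, temp_schema; the owner names here are made up), that could look like:
pg_restore pub.backup | \
  sed -e "s/OWNER TO old_owner/OWNER TO new_owner/" \
      -e "s/public/temp_schema/g" | \
  psql -h localhost -d db_temp -U postgres
As warned in an earlier answer, the blanket s/public/temp_schema/g will also rewrite the string public wherever it happens to appear inside your data, so only use it when that pattern cannot occur in the rows.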