PostgreSQL: extract schema objects' DDL to separate SQL files

I want to export each object's DDL to a separate file, for example table_a_create.sql, view_b_create.sql, trigger_c_create.sql, table_constraints.sql, and so on.
I tried pg_dump, but it only exports the whole schema to a single file.
I have read some related questions on Stack Overflow, but none of them quite covers my requirement.
Ex: How to dump PostgreSQL database structure (each object in separate file)
Is there any way to do this? I'm using Windows.

If you are on the client machine, you can put this in a SQL script (e.g. export_plpgsql.sql):
\pset tuples_only on
\pset footer off
\set QUIET on
\pset format unaligned
\set QUIET off
SELECT '\echo ''* Export '||(CASE proKind WHEN 'f' THEN 'Function' ELSE 'Procedure' END)||' : '||proName||''''
||chr(10)||'\copy (SELECT pg_get_functiondef('||p.oid||')) TO '''||:'export_path'||'/'||upper(proName)
||(CASE proKind WHEN 'f' THEN '.fct' ELSE '.prc' END)||''' WITH CSV;' as export_routine
FROM pg_proc p
WHERE proNamespace = (SELECT oid FROM pg_namespace WHERE nspName = lower(:'schema_name'))
ORDER BY proName;
and call it with two psql variables, schema_name and export_path, for example:
psql -U my_ -d my_db -v schema_name=my_schema -v export_path=C:/temp/export_PG -f export_plpgsql.sql > C:\temp\export_plpgsql.gen.sql
This will generate a script containing all the export commands for your PL/pgSQL routines, e.g.
\copy (SELECT pg_get_functiondef(51296)) TO 'C:/temp/export_PG/my_procedure.prc' WITH CSV;
Last step: run the generated script
psql -U my_ -d my_db -f C:\temp\export_plpgsql.gen.sql
It will generate a .prc file for each procedure and a .fct file for each function.
NB: You may have to refine the script, as other kinds of routines (prokind values) can appear in the pg_proc catalog.
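If you would rather script this outside psql, roughly the same idea can be expressed with psycopg2: query pg_proc for the routines of a schema and write each definition to its own file. This is only a minimal sketch under the same assumptions as the script above (PostgreSQL 11+ for prokind); the connection settings, schema name, and output directory are placeholders.
import os
import psycopg2

# Placeholder connection settings and output directory -- adapt as needed.
export_dir = "C:/temp/export_PG"
os.makedirs(export_dir, exist_ok=True)

conn = psycopg2.connect(dbname="my_db", user="my_", host="localhost")
cur = conn.cursor()
# prokind 'f' = function, 'p' = procedure; aggregates and window functions are skipped.
cur.execute("""
    SELECT p.oid, p.proname, p.prokind
    FROM pg_proc p
    JOIN pg_namespace n ON n.oid = p.pronamespace
    WHERE n.nspname = %s AND p.prokind IN ('f', 'p')
""", ("my_schema",))
for oid, name, kind in cur.fetchall():
    cur.execute("SELECT pg_get_functiondef(%s)", (oid,))
    ddl = cur.fetchone()[0]
    ext = ".fct" if kind == "f" else ".prc"
    # One file per routine, mirroring the .fct/.prc naming of the psql script.
    with open(os.path.join(export_dir, name.upper() + ext), "w") as f:
        f.write(ddl)
conn.close()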

Related

\COPY TO Postgres statement working in bash but not with python psycopg2

I am trying to use psycopg2 in my script to export data from a Postgres database to a file.
I can successfully run the following from my terminal and it works, no problem:
psql -h myhost -p myport -U myuser -d mydbname -c "\COPY ( SELECT member_id FROM member_reward_transaction LIMIT 5) TO ~/Desktop/testexport.txt (FORMAT csv, DELIMITER '|', HEADER 0)"
I could presumably call the above using subprocess, but I would like to know why the following is not working for me:
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config/qa_config.ini')
dbname=config['postgres-rewards']['db_name']
host=config['postgres-rewards']['host']
port=config['postgres-rewards']['port']
user=config['postgres-rewards']['user']
password=config['postgres-rewards']['password']
con = psycopg2.connect(database=dbname,user=user,password=password,host=host,port=port)
cur = con.cursor()
f = open('exports/test_export.csv', 'w')
cur.copy_to(f, 'member_reward_transaction', columns=('member_id', 'sponsor_id'), sep=",")
con.commit()
con.close()
The error when I run the script:
File "tests2.py", line 17, in <module>
cur.copy_to(f, 'member_reward_transaction', columns=('member_id', 'sponsor_id'), sep=",")
psycopg2.errors.WrongObjectType: cannot copy from partitioned table "member_reward_transaction"
HINT: Try the COPY (SELECT ...) TO variant.
Using Python 3.6.5 and PostgreSQL 11.5.
Like the error message says, you have to use
COPY (SELECT ... FROM partitioned_table) TO STDOUT;
if you want to copy from a partitioned table.
Your psql command does that, but psycopg2's copy_to uses plain old
COPY partitioned_table TO STDOUT;
which doesn't work.
Use copy_expert, which allows you to submit your own COPY statement.
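A minimal sketch of what that could look like, reusing the connection and cursor from the question above; the column list and output path simply mirror the question and are only an example:
with open('exports/test_export.csv', 'w') as f:
    cur.copy_expert(
        "COPY (SELECT member_id, sponsor_id FROM member_reward_transaction LIMIT 5) "
        "TO STDOUT WITH (FORMAT csv, DELIMITER '|', HEADER)",
        f,
    )
Because the COPY (SELECT ...) TO STDOUT form is used, it works for partitioned tables as well.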

Echo psql queries from a file as they are run

I have a bash script that opens a file and executes a bunch of psql queries.
I want these queries to be echoed/printed as they run. How do I do that?
I have tried using \echo, both for inserts and inside stored procedures, but it doesn't seem to work.
Use psql --echo-all.
$ psql --echo-all -c "SELECT 1;"
SELECT 1;
 ?column?
----------
        1
(1 row)
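If you are driving psql from a script in another language, the same flag applies; here is a minimal Python sketch using subprocess (the database name my_db and the file script.sql are placeholders):
import subprocess

# Run a SQL file with every statement echoed before its results.
result = subprocess.run(
    ["psql", "-d", "my_db", "--echo-all", "-f", "script.sql"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)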
The only way I know to echo anything during the execution of a PostgreSQL function (stored procedure) is with raise. This command is normally used to throw exceptions, but you can raise a NOTICE-level message, which does not interfere with the function's execution.
Maybe it is not exactly what you want, but it is a good workaround. The way PostgreSQL executes its procedures does not allow runtime echoes (unlike Sybase or MS SQL Server). See these examples (they only work inside functions):
raise notice 'Some message';
It will output:
NOTICE: Some message
Or passing vars to the debug:
raise notice 'Inserting ''%'' in ''%''.', var_value, var_table;
When var_table = 'customers' and var_value = 'Joe Doe', it will output:
NOTICE: Inserting 'Joe Doe' in 'customers'.
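As an aside, if the function is called from Python with psycopg2, those NOTICE messages can be read back from the connection. A minimal sketch; the connection settings and my_function_with_notices are hypothetical:
import psycopg2

conn = psycopg2.connect(dbname="my_db")  # placeholder connection settings
cur = conn.cursor()
cur.execute("SELECT my_function_with_notices()")  # hypothetical function that does RAISE NOTICE
# psycopg2 appends every NOTICE message raised during the session to connection.notices.
for message in conn.notices:
    print(message.strip())
conn.close()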
--echo-queries (for a shell script)
PGPASS='passwd'
su -c "PGPASSWORD=${PGPASS} psql -d postgres --echo-queries -qc '\pset border 2' -c 'show data_directory;'" postgres
From the Postgres documentation page (note that the syntax of psql has remained largely unchanged across versions), this is clearer with a DDL example.
There are several ways to echo. Use -e to echo just the queries.
$ psql -ec "create table t1 ( c1 int ) " ;
create table t1 ( c1 int )
CREATE TABLE
If you do not want the "CREATE TABLE" message, add the -q flag as well:
$ psql -eqc "create table t1 ( c1 int ) " ;
create table t1 ( c1 int )
You can add a header and footer to your script file:
\set origin_ECHO :ECHO
\set ECHO all
--****** YOUR SCRIPT TEXT *****
--.........
--*****************************
\set ECHO :origin_ECHO

How to export the result set of a postgres query in a format that is importable with psql?

Using psql, is there a way to run a SELECT statement whose output is a list of INSERT statements, so that I can execute those INSERT statements somewhere else?
SELECT * FROM foo where some_fk=123;
Should output
INSERT INTO foo
(column1,column2,...) VALUES
('abc','xyyz',...),
('aaa','cccc',...),
.... ;
which I can redirect to a file, say export.sql, and then import with psql -f export.sql.
My goal is to export the result of a SELECT statement in a format that I can import into another database instance with exactly the same table structure.
Have a look at the --inserts option of pg_dump
pg_dump -t your_table --inserts -f somefile.txt your_db
Edit the resulting file if necessary.
For a subset, as IgorRomanchenko mentioned, you can use COPY with a SELECT statement.
Example of COPYing as CSV.
COPY (select * from table where foo='bar') TO '/path/to/file.csv' CSV HEADER
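If you need to automate the pg_dump --inserts route from a script, here is a rough Python sketch via subprocess; the table name foo, the database my_db, and the output file are placeholders:
import subprocess

# Dump a single table as INSERT statements into export.sql.
subprocess.run(
    ["pg_dump", "-t", "foo", "--inserts", "-f", "export.sql", "my_db"],
    check=True,
)
Note that pg_dump dumps the whole table; for a filtered subset, use the COPY (SELECT ...) approach shown above.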

psql - save results of command to a file

I'm using psql's \dt to list all tables in a database and I need to save the results.
What is the syntax to export the results of a psql command to a file?
From psql's help (\?):
\o [FILE] send all query results to file or |pipe
The sequence of commands will look like this:
[wist#scifres ~]$ psql db
Welcome to psql 8.3.6, the PostgreSQL interactive terminal
db=>\o out.txt
db=>\dt
Then any db operation output will be written to out.txt.
Enter '\o' to revert the output back to console.
db=>\o
The psql \o command was already described by jhwist.
An alternative approach is using the COPY TO command to write directly to a file on the server. This has the advantage that it's dumped in an easy-to-parse format of your choice -- rather than psql's tabulated format. It's also very easy to import to another table/database using COPY FROM.
NB! This requires superuser or pg_write_server_files privileges and will write to a file on the server.
Example: COPY (SELECT foo, bar FROM baz) TO '/tmp/query.csv' (format csv, delimiter ';')
Creates a CSV file with ';' as the field separator.
As always, see the documentation for details
Use the -o option of the psql command.
-o, --output=FILENAME send query results to file (or |pipe)
psql -d DatabaseName -U UserName -c "SELECT * FROM TABLE" -o /root/Desktop/file.txt
\copy, which is a psql command, works for any user. I don't know whether it works for \dt or not, but the general syntax is reproduced from the following link: Postgres SQL copy syntax
\copy (select * from tempTable limit 100) to 'filenameinquotes' with delimiter ',' csv header
The above will save the output of the select query to the given filename as a CSV file.
EDIT:
For my psql server, the following command works (this is an older version, v8.5):
copy (select * from table1) to 'full_path_filename' csv header;
Use the below query to store the result in a CSV file
\copy (your query) to 'file path' csv header;
Example
\copy (select name,date_order from purchase_order) to '/home/ankit/Desktop/result.csv' csv header;
Hope this helps you.
If you got the following error
ufgtoolspg=> COPY (SELECT foo, bar FROM baz) TO '/tmp/query.csv' (format csv, delimiter ';');
ERROR: must be superuser to COPY to or from a file
HINT: Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.
you can run it in this way:
psql somepsqllink_or_credentials -c "COPY (SELECT foo, bar FROM baz) TO STDOUT (format csv, delimiter ';')" > baz.csv
COPY tablename TO '/tmp/output.csv' DELIMITER ',' CSV HEADER;
This command stores the entire table as CSV.
I assume there is some internal psql command for this, but you could also run the script command from the util-linux-ng package:
DESCRIPTION
Script makes a typescript of everything printed on your terminal.
This approach will work with any psql command from the simplest to the most complex without requiring any changes or adjustments to the original command.
NOTE: For Linux servers.
Save the contents of your command to a file
MODEL
read -r -d '' FILE_CONTENT << 'HEREDOC'
[COMMAND_CONTENT]
HEREDOC
echo -n "$FILE_CONTENT" > sqlcmd
EXAMPLE
read -r -d '' FILE_CONTENT << 'HEREDOC'
DO $f$
declare
curid INT := 0;
vdata BYTEA;
badid VARCHAR;
loc VARCHAR;
begin
FOR badid IN SELECT some_field FROM public.some_base LOOP
begin
select 'ctid - '||ctid||', pagenumber - '||(ctid::text::point)[0]::bigint
into loc
from public.some_base where some_field = badid;
SELECT file||' '
INTO vdata
FROM public.some_base where some_field = badid;
exception
when others then
raise notice 'Block/PageNumber - % ',loc;
raise notice 'Corrupted id - % ', badid;
--return;
end;
end loop;
end;
$f$;
HEREDOC
echo -n "$FILE_CONTENT" > sqlcmd
Run the command
MODEL
sudo -u postgres psql [some_db] -c "$(cat sqlcmd)" >>sqlop 2>&1
EXAMPLE
sudo -u postgres psql some_db -c "$(cat sqlcmd)" >>sqlop 2>&1
View/track your command output
cat sqlop
Done! Thanks! =D
Approach for Docker
Via a psql command:
docker exec -i %containerid% psql -U %user% -c '\dt' > tables.txt
Or run a query from an SQL file:
docker exec -i %containerid% psql -U %user% < file.sql > data.txt

Generate DDL programmatically on Postgresql

How can I generate the DDL of a table programmatically on Postgresql? Is there a system query or command to do it? Googling the issue returned no pointers.
Use pg_dump with these options:
pg_dump -U user_name -h host database -s -t table_or_view_names -f table_or_view_names.sql
Description:
-s or --schema-only: dump only the DDL / object definitions (schema), without data.
-t or --table: dump only tables (or views or sequences) matching the given name.
Examples:
-- dump the DDL of each table Elon built.
$ pg_dump -U elon -h localhost -s -t spacex -t tesla -t solarcity -t boring > companies.sql
Sorry if this is off topic. Just trying to help anyone who googled "psql dump ddl" and landed on this thread.
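If you want one file per table, in the spirit of the first question above, a rough Python sketch around pg_dump -s could look like this; the user, host, database name, and table list are placeholders:
import subprocess

# Placeholder table list; it could also be read from information_schema.tables.
tables = ["spacex", "tesla", "solarcity", "boring"]
for table in tables:
    # Write the schema-only dump of each table to its own <table>_create.sql file.
    subprocess.run(
        ["pg_dump", "-U", "elon", "-h", "localhost", "-s",
         "-t", table, "-f", f"{table}_create.sql", "my_db"],
        check=True,
    )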
You can use the pg_dump command to dump the contents of the database (both schema and data). The --schema-only switch will dump only the DDL for your table(s).
Why would shelling out to pg_dump not count as "programmatically"? It will dump the entire schema very nicely.
Anyhow, you can get data types (and much more) from the information_schema (8.4 docs referenced here, but this is not a new feature):
=# select column_name, data_type from information_schema.columns
-# where table_name = 'config';
    column_name     | data_type
--------------------+-----------
 id                 | integer
 default_printer_id | integer
 master_host_enable | boolean
(3 rows)
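Building on that, here is a rough psycopg2 sketch that assembles a minimal CREATE TABLE statement from information_schema.columns. It ignores constraints, defaults, and indexes, and the connection settings and table name are placeholders:
import psycopg2

conn = psycopg2.connect(dbname="my_db")  # placeholder connection settings
cur = conn.cursor()
cur.execute("""
    SELECT column_name, data_type
    FROM information_schema.columns
    WHERE table_name = %s
    ORDER BY ordinal_position
""", ("config",))
columns = ", ".join(f"{name} {dtype}" for name, dtype in cur.fetchall())
# Simplified DDL only: no NOT NULL, defaults, keys, or indexes.
print(f"CREATE TABLE config ({columns});")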
The answer is to check the source code for pg_dump and follow the switches it uses to generate the DDL. Somewhere inside the code there's a number of queries used to retrieve the metadata used to generate the DDL.
Here is a good article on how to get the meta information from the information schema:
http://www.alberton.info/postgresql_meta_info.html.
I wrote 4 functions that partially mock up pg_dump -s behaviour, based on the \d+ metacommand. The usage would be something like:
\pset format unaligned
select get_ddl_t(schemaname,tablename) as "--" from pg_tables where tableowner <> 'postgres';
Of course, you have to create the functions first.
Working sample here at rextester