DDL change in multiple PostgreSQL instances at the same time?

There are 4 PostgreSQL instances running on one server. I want to make a DDL change in all of them at the same time.
How can I do this?

Write a shell script (if you're on *nix) or a .cmd batch file / .vbs script (if you're on Windows) to do it. Have the script invoke psql -f /path/to/ddl.sql once per instance, passing the host/port, database name, and so on.
Alternatively, write a script in a language like Python that has proper PostgreSQL bindings. Have the script loop through the databases and run the DDL for each. In Python, for example, the following (untested) script should do the trick:
import psycopg2

conn_definitions = [
    "dbname=db1 port=5432 host=127.0.0.1",
    "dbname=db2 port=5432 host=127.0.0.1",
    "dbname=db3 port=5432 host=127.0.0.1",
    "dbname=db4 port=5432 host=127.0.0.1",
]

ddl = """
CREATE TABLE blah (
    blah integer
);
CREATE INDEX blah_blah_idx ON blah(blah);
"""

connections = []
cursors = []

# Open a connection and a cursor for each database.
for conn_info in conn_definitions:
    conn = psycopg2.connect(conn_info)
    curs = conn.cursor()
    cursors.append(curs)
    connections.append(conn)

# psycopg2 opens a transaction implicitly on the first execute(),
# so there is no need to issue BEGIN by hand.
for curs in cursors:
    curs.execute(ddl)

# Commit everywhere only after the DDL has succeeded on every database.
# (This is not a true two-phase commit; a late commit can still fail.)
for conn in connections:
    conn.commit()

for conn in connections:
    conn.close()
Enhance it if desired by doing things like splitting the DDL into a list of statements you loop over, so you can do per-statement error handling, as sketched below.
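For instance, a rough sketch of per-statement error handling, reusing the ddl, cursors, and connections from the script above (untested like the original; the naive split on ';' will break on semicolons inside string literals or function bodies):

statements = [s.strip() for s in ddl.split(";") if s.strip()]
for conn, curs in zip(connections, cursors):
    for stmt in statements:
        try:
            curs.execute(stmt)
        except psycopg2.Error as e:
            # Roll back this database and report which statement failed.
            conn.rollback()
            print(f"DDL failed on {conn.dsn}: {e}")
            break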
You could also generate the connection list dynamically by connecting to one database per host and running SELECT datname FROM pg_database to get a listing of the other databases.
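A rough sketch of that approach, again assuming psycopg2 (the host list, port, and maintenance database name are illustrative):

import psycopg2

hosts = ["127.0.0.1"]
conn_definitions = []
for host in hosts:
    # Connect to the maintenance database and list connectable, non-template DBs.
    conn = psycopg2.connect(f"dbname=postgres host={host} port=5432")
    curs = conn.cursor()
    curs.execute(
        "SELECT datname FROM pg_database "
        "WHERE datallowconn AND NOT datistemplate"
    )
    for (datname,) in curs.fetchall():
        conn_definitions.append(f"dbname={datname} host={host} port=5432")
    conn.close()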

Related

PostgreSQL pg_dump creates an SQL script, but it is not really an SQL script: is there a way to get pg_dump to create a standard SQL script?

I'm running pg_dump to create a script to automate the creation of a system, like this:
pg_dump --dbname=postgresql://postgres:ohdsi@127.0.0.1:5432/OHDSI -t webapi.* > webapi.sql
This creates an SQL script, but it is not really a plain SQL script, as it contains code like what is shown below.
When this script is run as an SQL script, it fails with the error shown below.
Is there a way to get pg_dump to create an SQL script that is standard SQL and can be executed as such?
Code sample from the SQL generated by pg_dump:
COPY webapi.cohort_version (asset_id, comment, description, version, asset_json, archived, created_by_id, created_date) FROM stdin;
\.
--
-- Data for Name: concept_of_interest; Type: TABLE DATA; Schema: webapi; Owner: ohdsi_admin_user
--
COPY webapi.concept_of_interest (id, concept_id, concept_of_interest_id) FROM stdin;
1 4329847 4185932
2 4329847 77670
3 192671 4247120
4 192671 201340
Error seen when running the script generated by pg_dump:
--
-- Name: penelope_laertes_uni_pivot id; Type: DEFAULT; Schema: webapi; Owner: ohdsi_admin_user
--
ALTER TABLE ONLY webapi.penelope_laertes_uni_pivot ALTER COLUMN id SET DEFAULT nextval('webapi.penelope_laertes_uni_pivot_id_seq'::regclass)
--
-- Data for Name: achilles_cache; Type: TABLE DATA; Schema: webapi; Owner: ohdsi_admin_user
--
COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
Error executing: COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
. Cause: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
Exception in thread "main" java.lang.RuntimeException: org.apache.ibatis.jdbc.RuntimeSqlException: Error executing: COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
. Cause: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
at org.yaorma.database.Database.executeSqlScript(Database.java:344)
at org.yaorma.database.Database.executeSqlScript(Database.java:332)
at org.nachc.tools.fhirtoomop.tools.build.postgres.build.A04_CreateAtlasWebApiTables.exec(A04_CreateAtlasWebApiTables.java:29)
at org.nachc.tools.fhirtoomop.tools.build.postgres.build.A04_CreateAtlasWebApiTables.main(A04_CreateAtlasWebApiTables.java:19)
Caused by: org.apache.ibatis.jdbc.RuntimeSqlException: Error executing: COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
. Cause: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
at org.apache.ibatis.jdbc.ScriptRunner.executeLineByLine(ScriptRunner.java:109)
at org.apache.ibatis.jdbc.ScriptRunner.runScript(ScriptRunner.java:71)
at org.yaorma.database.Database.executeSqlScript(Database.java:342)
... 3 more
Caused by: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:329)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:315)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:291)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:286)
at org.apache.ibatis.jdbc.ScriptRunner.executeStatement(ScriptRunner.java:190)
at org.apache.ibatis.jdbc.ScriptRunner.handleLine(ScriptRunner.java:165)
at org.apache.ibatis.jdbc.ScriptRunner.executeLineByLine(ScriptRunner.java:102)
... 5 more
--- EDIT ------------------------------------
The --inserts method in the accepted answer gave me exactly what I needed.
I ended up doing this:
pg_dump --inserts --dbname=postgresql://postgres:ohdsi@127.0.0.1:5432/OHDSI -t webapi.* > webapi.sql
The client tool you are using to restore the dump cannot deal with the data from the (nonstandard) COPY command being mixed into the script. You need psql to restore such a dump.
You can use the --inserts option of pg_dump to create a dump that contains INSERT statements rather than COPY. That will be slower to restore, but will work with more client tools.
However, your wish to get a standard SQL script is hopeless. PostgreSQL extends the standard in many ways, so a database cannot be dumped with a standard SQL script. Note, for example, that indexes are not defined by the SQL standard. If you are looking to transfer a PostgreSQL dump to a different RDBMS, you will be disappointed. That is more difficult.

How to create a function in PostgreSQL like SQL Server BACKUP DATABASE TO DISK

I'm trying without success to create a function in Postgres that saves a table or database, taking one or two parameters. In this case I was trying to create it with only one parameter (the name of the table or database) and back up that table/db.
--SELECT backup_table(sports)
CREATE FUNCTION backup_table(TEXT) RETURNS BOOLEAN AS
$$
DECLARE
    table_x ALIAS FOR $1;
BEGIN
    COPY table_x FROM 'C:/path/backup_db' WITH (FORMAT CSV);
    RAISE NOTICE 'Saved correctly the table %', $1;
    RETURN BOOLEAN;
END;
$$ LANGUAGE plpgsql;
I always receive this error when I try to execute the function with SELECT backup_table(sports):
"The column sports doesnt exists."
SQL state: 42703
Character: 21
The idea is to create a function that is the equivalent of SQL Server's BACKUP DATABASE TO DISK, or of the pg_dump command:
pg_dump -U -W -F t sports > C:/path/backup_db;
I know SQL, but now I'm just stuck with this error.

why can I not use a variable passed in from outside?

I am trying to make a drop database script, which I would have to trigger using psql.
psql ... -f reset-database.sql -v dbname=$database
The problem is that I am not able to access the variable :dbname in my script.
DO
$$
BEGIN
    IF EXISTS (SELECT EXISTS( SELECT datname FROM pg_catalog.pg_database WHERE datname = :dbname)) THEN
        UPDATE pg_database SET datallowconn = 'false' WHERE datname = :dbname;
        SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE pg_stat_activity.datname = :dbname;
        DROP DATABASE IF EXISTS :dbname;
    END IF;
END
$$
This is executed as is, and :dbname is not replaced with the database name passed in as a variable. Why not? And how do I pass it in?
The variable substitution will not work in a DO statement, because the statement body is a (dollar quoted) string literal. Otherwise, it should work fine, but you have to use single quotes in your metadata query. Besides, you cannot run DROP DATABASE inside a DO statement, since you cannot run it inside a transaction.
Also, don't update catalog tables.
You can use psql's \if for conditional processing:
SELECT EXISTS (
    SELECT 1 FROM pg_catalog.pg_database
    WHERE datname = :'dbname'
) AS have_db \gset

\if :have_db
    ALTER DATABASE :dbname ALLOW_CONNECTIONS FALSE;

    SELECT pg_terminate_backend(pg_stat_activity.pid)
    FROM pg_stat_activity
    WHERE pg_stat_activity.datname = :'dbname';

    DROP DATABASE :dbname;
\endif
From PostgreSQL v13 on, this is much simpler:
DROP DATABASE IF EXISTS :dbname WITH (FORCE);
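If you are driving this from Python rather than psql, a minimal sketch of the v13+ variant, assuming psycopg2 (the database name mydb is illustrative); DROP DATABASE cannot run inside a transaction, hence autocommit:

import psycopg2
from psycopg2 import sql

conn = psycopg2.connect("dbname=postgres")
conn.autocommit = True  # DROP DATABASE must run outside a transaction
with conn.cursor() as curs:
    # Identifiers cannot be bound as query parameters; quote them safely instead.
    curs.execute(
        sql.SQL("DROP DATABASE IF EXISTS {} WITH (FORCE)")
           .format(sql.Identifier("mydb"))
    )
conn.close()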

mysql connector python: execute multiple statements

I would like to execute multiple statements via mysql connector. The code is below:
import mysql.connector

conn = mysql.connector.connect(user='root', password='******',
                               database='dimensionless_ideal')
cursor = conn.cursor()
sql = ("Select * From conditions_ld; "
       "Select * From conditions_fw; "
       "Select * From results")
cursor.execute(sql, multi=True)
conn.close()
When I run this Python script, it keeps running for a very long time unless I kill it. There is no output and no error info. What is the problem?
You should take a look here: https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html
It seems that when you use the multi=True option, cursor.execute() returns an iterator over the per-statement results. There is an example there on how to use it; a sketch follows.
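A minimal sketch of consuming that iterator, following the pattern in the linked documentation (connection parameters as in the question):

import mysql.connector

conn = mysql.connector.connect(user='root', password='******',
                               database='dimensionless_ideal')
cursor = conn.cursor()
sql = ("Select * From conditions_ld; "
       "Select * From conditions_fw; "
       "Select * From results")

# execute(..., multi=True) yields one result object per statement.
for result in cursor.execute(sql, multi=True):
    if result.with_rows:
        print("Rows from:", result.statement)
        for row in result.fetchall():
            print(row)

cursor.close()
conn.close()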

How to log an error into a table from SQL*Plus

I am very new to Oracle, so please bear with me if this is covered elsewhere.
I have an MS SQL box running jobs that call batch files, which run scripts in SQL*Plus to ETL data into an Oracle 10g database.
I have an intermittent issue with a script that causes the ETL to fail, which at the minute, without error logging, is something of an unknown. The current solution highlights the load failure based on row counts from before and after the script has finished.
I'd like to be able to insert any errors encountered while running the offending script into an error log table on the same database receiving the data loads.
There's nothing too technical about the script; at a high level it performs the following steps, all via SQL code with no procedural calls.
Updates a table with Date and current row counts
Pulls data from a remote source into a staging table
Merges the Staging table into an intermediate staging table
Performs some transformational actions
Merges the intermediate staging table into the final Fact table
Updates a table with new row counts
Please advise whether it is possible to pass error messages, codes, line numbers, etc. via SQL*Plus into a database table, and if so, the easiest method to achieve this.
The first few lines of the script are shown below to give a flavour:
/*set echo off*/
set heading off
set feedback off
set sqlblanklines on
/* ID 1 BATCH START TIME */
INSERT INTO CITSDMI.CITSD_TIMETABLE_ORDERLINE TGT
(TGT.BATCH_START_TIME)
(SELECT SYSDATE FROM DUAL);
COMMIT;
insert into CITSDMI.CITSD_TIMETABLE_ALL_LOADS
(LOAD_NAME, LOAD_CRITICALITY,LOAD_TYPE,BATCH_START_TIME)
values
('ORDERLINE','HIGH','SMART',(SELECT SYSDATE FROM DUAL));
commit;
/* Clear the Staging Tables */
TRUNCATE TABLE STAGE_SMART_ORDERLINE;
Commit;
TRUNCATE TABLE TRANSF_FACT_ORDERLINE;
Commit;
and so it goes on with the rest of the steps.
Any assistance will be greatly appreciated.
While I don't fully understand your requirement, here are a couple of pointers.
The WHENEVER command will help you control what SQL*Plus should do when an error occurs, e.g.
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
WHENEVER OSERROR EXIT FAILURE ROLLBACK
INSERT ...
INSERT ...
This will cause sqlplus to exit with error status 1 if any of the following statements fail.
You can also have WHENEVER SQLERROR CONTINUE ...
Since WHENEVER ... EXIT FAILURE/SUCCESS controls the exit status, the calling script/program will know whether it worked or failed.
Logging
Use SPOOL to spool the output to a file.
Logging to a table
The best way is to wrap your statements in PL/SQL anonymous blocks and use exception handlers to log errors.
So, putting the above together, using a UNIX shell as the invoker:
sqlplus -S /nolog <<EOF
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
CONNECT ${USRPWD}
SPOOL ${SPLFILE}
BEGIN
    INSERT INTO the_table ( c1, c2 ) VALUES ( '${V1}', '${V2}' );
EXCEPTION
    WHEN OTHERS THEN
        INSERT INTO the_error_tab ( name, errno, errm ) VALUES ( 'the_script', SQLCODE, SQLERRM );
        COMMIT;
END;
/
SPOOL OFF
QUIT
EOF
if [ ${?} -eq 0 ]
then
    echo "Success!"
else
    echo "Oh dear!! See ${SPLFILE}"
fi