How do I use savepoints in SQL Workbench/J with Redshift? - amazon-redshift

Is it possible to recreate the following in Redshift using SQL Workbench?
create table test as
select top 10 * from core_data;
savepoint sv;
delete from test
where name like 'A%';
savepoint sv2;
delete from test
where name like 'B%';
rollback to sv;

Redshift doesn't support ROLLBACK TO SAVEPOINT at all, so you can't do it from SQL Workbench either.
See the list of PostgreSQL features that Redshift does not support; it includes savepoints as well.
If you send a rollback-to-savepoint statement to Redshift, such as
rollback to savepoint sv3;
you will see the following error:
ERROR: SQL command "rollback to savepoint sv3;" not supported.
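Since savepoints are unavailable, one workaround is to approximate them yourself. A minimal sketch, assuming a scratch copy of the table is acceptable (the temp table test_sv is my own naming, not part of the question):

-- after creating test as in the question:
-- "savepoint sv": snapshot the current state of the table
create temp table test_sv as select * from test;

delete from test where name like 'A%';
delete from test where name like 'B%';

-- "rollback to sv": restore the snapshot
-- (TRUNCATE commits implicitly in Redshift, so use DELETE here)
delete from test;
insert into test select * from test_sv;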

How to run transactional SQL on Redshift using boto3

I'm trying to use the boto3 redshift-data client to execute transactional SQL for an external table (Redshift Spectrum) with the following statement:
ALTER TABLE schema.table ADD IF NOT EXISTS
PARTITION(key=value)
LOCATION 's3://bucket/prefix';
After submitting it with execute_statement, I received the error "ALTER EXTERNAL TABLE cannot run inside a transaction block".
I tried running VACUUM and COMMIT commands before the statement, but they just fail the same way: VACUUM or COMMIT cannot run inside a transaction block.
How can I successfully execute such a statement?
This has to do with the settings of your bench: you have an open transaction at the start of every statement you run. Just add "END;" before the statement that needs to run outside of a transaction and things should work. Make sure you launch both commands at the same time from your bench, as a single batch.
Like this:
END; VACUUM;
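Applied to the ALTER EXTERNAL TABLE from the question, the batch sent from the bench would look like the sketch below (schema, table, partition key, and S3 location are the question's placeholders):

END;
ALTER TABLE schema.table ADD IF NOT EXISTS
PARTITION(key=value)
LOCATION 's3://bucket/prefix';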
It doesn't seem easy to run this kind of SQL through boto3. However, I found a workaround using the redshift_connector library.
import redshift_connector

# open a connection to the cluster
connection = redshift_connector.connect(
    host=host, port=port, database=database, user=user, password=password
)

# make sure no transaction is open before switching autocommit
# (as in the AWS example linked below)
connection.rollback()

# run the statement outside a transaction block via autocommit
connection.autocommit = True
cursor = connection.cursor()  # cursor() is a method, not an attribute
cursor.execute(transactional_sql)
connection.autocommit = False
Reference - https://docs.aws.amazon.com/redshift/latest/mgmt/python-connect-examples.html#python-connect-enable-autocommit

How to create multiple databases with Postgres in pgAdmin4

I am trying to run the following query in pgAdmin:
CREATE DATABASE abc;
CREATE DATABASE xyz;
And I get the following error:
ERROR: current transaction is aborted, commands ignored until end of transaction block
SQL state: 25P02
I'm relatively new to Postgres.
With SQL Server it's possible to create multiple databases in a single query, with the "GO" statement in between if necessary.
I've tried to Google this error, and most answers are to simply run each line separately.
That would work, but I'm curious why this doesn't work.
It may also be a setting in pgAdmin.
Autocommit is currently on; I've tried it off, with the same result.
I'm using Postgres 14.5 (in AWS).
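For what it's worth, the mechanics here: CREATE DATABASE cannot run inside a transaction block, and a multi-statement script is typically sent to the server as a single implicit transaction even with autocommit on. The first CREATE DATABASE therefore fails, and every statement after it gets the 25P02 "transaction is aborted" error. Run as two separate executions, both succeed:

-- execute this on its own first:
CREATE DATABASE abc;
-- then execute this on its own:
CREATE DATABASE xyz;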

SQLCODE=-104, SQLSTATE=42601, SQLERRMC=table;reorg ;JOIN <joined_table>

When running this query on Db2 in DBeaver:
reorg table departments
I got this error (just on the external channel):
DB2 SQL Error: SQLCODE=-104, SQLSTATE=42601, SQLERRMC=table;reorg ;JOIN <joined_table>, DRIVER=4.19.49
What does this query mean?
How can I fix the error?
I appreciate any help.
Try call sysproc.admin_cmd('reorg table db2inst1.departments')
since you are using DBeaver, which is a JDBC application.
If you do not qualify the table name (for example, with db2inst1), then Db2 will assume that the qualifier (schema name) is the same as the userid you used when connecting to the database.
DBeaver runs SQL statements, but it cannot directly run Db2 commands - instead, any JDBC app can run Db2 commands indirectly via a stored procedure that you CALL. The CALL itself is an SQL statement.
reorg table is a command, not an SQL statement, so it needs to be run via the admin_cmd stored procedure, or from the operating-system command line (or the db2 CLP) after connecting.
So if you have db2cmd.exe on MS-Windows, or bash on Linux/Unix, you can connect to the database and run commands via the db2 command.
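Putting the answer's points together (DB2INST1 is just the example schema from above):

-- CALL is an SQL statement, so a JDBC client such as DBeaver can run it
CALL SYSPROC.ADMIN_CMD('REORG TABLE DB2INST1.DEPARTMENTS');

-- without a qualifier, Db2 resolves the schema from the connection userid
CALL SYSPROC.ADMIN_CMD('REORG TABLE DEPARTMENTS');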

How to log all DDLs on a gcloud SQL instance?

I would like to log all DDL queries that are run on a SQL instance. I tried looking into plugins, but they are not allowed.
Edit: It's a MySQL instance.
You may set the Cloud SQL flag log_statement to the value mod to log all data definition language (DDL) statements (mod logs DDL plus data-modifying statements; the value ddl logs DDL only).
Reference: https://cloud.google.com/sql/docs/postgres/flags#postgres-l
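If you prefer setting the flag from the command line rather than the console, the patch command would look something like this (my-instance is a placeholder; note that changing flags may restart the instance):

gcloud sql instances patch my-instance --database-flags=log_statement=mod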

Flyway doesn't mark statement in schema_version as fail in DB2

If some script fails during a migration, Flyway won't add a record for the failed statement to schema_version in a DB2 database.
Do you have any idea how to avoid this situation?
I ran a migration and the 4th script failed; I expected that script to have status ABORTED/FAILED.
One explanation for the Flyway behavior difference you observe is the way Oracle handles DDL (an implicit commit before and after each DDL statement) compared with how Db2 handles DDL (under transaction control by default). With Db2 it is therefore possible to arrange for each migration to be atomic and to roll back upon failure - meaning there is nothing to repair, and no repair action is required, as there may be with the Flyway Oracle implementation.
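To see the Db2 side of this concretely, a minimal sketch (run with autocommit off; the table name is made up for illustration):

-- Db2, autocommit off: the CREATE TABLE joins the open transaction
CREATE TABLE mig_demo (id INTEGER);
ROLLBACK;
-- mig_demo is gone again; in Oracle, the implicit commit around each
-- DDL statement would have made the CREATE TABLE permanent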