Reverse SQL that undoes a query - PostgreSQL

query:
UPDATE "table_name"
SET properties = properties || jsonb_build_object('$ip', ip)
WHERE ip IS NOT NULL;
I am running a Django migration and I need the reverse SQL that undoes the results of executing this query and restores the table to its previous state.
When I run the Django migration test for the following operation:
operations = [
    # migrations.RunPython(migrate_event_ip_to_property, rollback),
    migrations.RunSQL(
        """
        UPDATE "table_name"
        SET properties = properties || jsonb_build_object('$ip', ip)
        WHERE ip IS NOT NULL;
        """,
        None
    )
]
I get an IrreversibleError. I think it might work if I provide the reverse SQL instead of None.

I think you are out of luck looking for a raw solution from PostgreSQL, as the SQL transaction was already committed by the Django migration. See PostgreSQL ROLLBACK.
But in this case you can revert to the last working migration using:
./manage.py migrate app XXXX_last_working_migration
as stated in the migrate section of Django's django-admin docs.
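If you would rather make the migration reversible, a minimal sketch of a reverse SQL might look like the following. It assumes the '$ip' key did not exist in properties before the forward migration (otherwise its previous value cannot be restored) and that the ip column still holds the original values; the jsonb - operator simply removes the key again:

operations = [
    migrations.RunSQL(
        sql="""
        UPDATE "table_name"
        SET properties = properties || jsonb_build_object('$ip', ip)
        WHERE ip IS NOT NULL;
        """,
        reverse_sql="""
        -- assumes '$ip' was not present in properties before the forward migration
        UPDATE "table_name"
        SET properties = properties - '$ip'
        WHERE ip IS NOT NULL;
        """,
    )
]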

Related

flyway migration failing for update sql statement with join

I created the update statement with joins in Flyway 8.x. The update statement is below and the actual database is Postgres 14.x. It is showing a syntax error.
SQL query:
update ss s
set s_name = csc.name
from c_s_category csc
where csc.uid = s.fk_csc_id and s.fk_csc_id is not null;
Can you please suggest how I can proceed further with the Flyway script migration?

How to run transactional SQL on Redshift using boto3

I'm trying to use the boto3 redshift-data client to execute transactional SQL for an external table (Redshift Spectrum) with the following statement:
ALTER TABLE schema.table ADD IF NOT EXISTS
PARTITION(key=value)
LOCATION 's3://bucket/prefix';
After submitting it using execute_statement, I received the error "ALTER EXTERNAL TABLE cannot run inside a transaction block".
I tried using VACUUM and COMMIT commands before the statement, but they just report that VACUUM or COMMIT cannot run inside a transaction block.
How can I successfully execute such a statement?
This has to do with the settings of your bench. You have an open transaction at the start of every statement you run. Just add “END;” before the statement that needs to run outside of a transaction and things should work. Just make sure you launch both commands at the same time from your bench.
Like this:
END; VACUUM;
It does not seem easy to run transactional SQL through boto3. However, I found a workaround using the redshift_connector library:
import redshift_connector

connection = redshift_connector.connect(
    host=host, port=port, database=database, user=user, password=password
)
connection.autocommit = True
connection.cursor().execute(transactional_sql)
connection.autocommit = False
Reference - https://docs.aws.amazon.com/redshift/latest/mgmt/python-connect-examples.html#python-connect-enable-autocommit
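For completeness, a hypothetical end-to-end version of that workaround using the statement from the question (the endpoint and credentials below are placeholders, not values from the original post):

import redshift_connector

# Placeholders: substitute your own cluster endpoint and credentials
connection = redshift_connector.connect(
    host="my-cluster.xxxx.us-east-1.redshift.amazonaws.com",
    port=5439,
    database="dev",
    user="awsuser",
    password="...",
)
try:
    connection.autocommit = True  # run the DDL outside an explicit transaction block
    connection.cursor().execute(
        "ALTER TABLE schema.table ADD IF NOT EXISTS "
        "PARTITION(key=value) LOCATION 's3://bucket/prefix';"
    )
finally:
    connection.close()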

How to create multiple databases with Postgres in pgAdmin4

I am trying to run the following query in pgAdmin:
CREATE DATABASE abc;
CREATE DATABASE xyz;
And I get the following error:
ERROR: current transaction is aborted, commands ignored until end of transaction block
SQL state: 25P02
I'm relatively new to postgres.
With SQL Server it's possible to create multiple databases in a single query with the "GO" statement in between if necessary.
I've tried to google this error, and most answers are to simply run each line separately.
That would work, but I'm curious why this doesn't work.
It may also be a setting in pgAdmin.
The "autocommit" is currently on. I've tried it off, and same result.
I'm using postgres 14.5 (in aws)

PostgreSQL 11.16 cannot execute CREATE TABLE in a read-only transaction

I have a PostgreSQL database running on an Azure machine. When I try to create a table in a database, I get the error "cannot execute CREATE TABLE in a read-only transaction". The SQL query is being executed by a Python script using a SQLAlchemy engine, but I tried a similar query in pgAdmin installed on my machine and I get the same error. I also noticed that I do not have this issue if I connect to the database from a colleague's machine.
After further research, I found that if I execute SELECT pg_is_in_recovery(); in my pgAdmin, it returns true, and false on my colleague's machine.
Let me know if there is any way to correct this.
SELECT pg_is_in_recovery() returning true means the database only has read access.
Can you check your permissions?
You can check the postgresql.conf file and the attribute default_transaction_read_only,
or try this:
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
The issue was that our PostgreSQL machine is an HA machine, and that I was connecting to an IP address rather than the domain.
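Since the script in the question runs its SQL through a SQLAlchemy engine, a quick way to check from Python which node you are actually connected to might look like this (the connection URL is a placeholder; point it at the HA hostname rather than a node IP):

from sqlalchemy import create_engine, text

# Placeholder URL: use the HA endpoint's hostname, not an individual node's IP
engine = create_engine("postgresql+psycopg2://user:password@db.example.com:5432/exercises")

with engine.connect() as conn:
    # True means a read-only standby/replica; False means the writable primary
    print(conn.execute(text("SELECT pg_is_in_recovery();")).scalar())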

TypeORM migration entries lost from DB, `migration:run` re-runs them, then fails with "relation already exists"

I have a NestJS app with TypeORM, dockerized. I have synchronize turned off, using migrations instead. In the container entry point, I do yarn typeorm migration:run. It works well the first time around, and according to the logs it inserts records into the migrations table.
I noticed that when I start the project the next time, it often tries to re-run migrations and fails (as expected) due to "relation already exists". At this point I can verify that entries are indeed missing from the migrations table via docker-compose exec db psql -U postgres -c 'SELECT * FROM "migrations" "migrations"'. The DB schema is up to date. When I insert a new record manually, it gets an incremental ID after the missing records, so the records were there at some point.
I can't figure out what might cause entries in the migrations table to disappear (be rolled back?). This happens on the project linked above. It's a straightforward example project. I don't have an entity accidentally named "migrations". :)
As a workaround I currently insert into the migrations table manually:
docker-compose exec db psql -U postgres -c "INSERT INTO migrations (timestamp, name) VALUES ('1619623728180', 'AddTable1619623728180');"
Running specs that synchronized the DB was the issue.
I had a .env.test to use a different DB, but as it turns out that is not supported by dotenv. There are a few ways to make it work. I chose dotenv-flow/config and added it to my test script:
jest --collect-coverage --setupFiles dotenv-flow/config