Drop DB on AWS RDS instance - Postgres

Dropping a database on AWS RDS (Postgres) appears to be allowed only for a superuser, and the superuser role is reserved for AWS.
Trying to run this command on my instance:
DROP DATABASE "dbname"
results in this error:
psycopg2.InternalError: DROP DATABASE cannot run inside a transaction block
I saw this issue in the AWS forums, where it seems to affect a lot of people, but no AWS representative has given a valid solution.
How can I drop my DB without taking down the whole instance and bringing it up again?

Worked for me:
1. Connect to the postgres database, i.e. disconnect from the database you want to DROP.
2. SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'yourdb';
3. DROP DATABASE yourdb;
Execute steps 2 and 3 quickly, to prevent the rdsadmin user from reconnecting.
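If something reconnects before the DROP lands, a variant that narrows the race window is to block new connections first, using the same REVOKE CONNECT trick as in the Cloud SQL answer further down. A minimal sketch; note that REVOKE ... FROM public does not stop superusers such as rdsadmin, so this assumes the reconnecting sessions are regular roles:
-- run while connected to the postgres maintenance database
REVOKE CONNECT ON DATABASE yourdb FROM public;   -- block new non-superuser connections
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'yourdb' AND pid <> pg_backend_pid();  -- kick everyone else out
DROP DATABASE yourdb;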

Try using ISOLATION_LEVEL_AUTOCOMMIT, a psycopg2 extensions constant (psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT):
No transaction is started when commands are issued and no commit() or
rollback() is required.
The connection must be in autocommit mode. One way to set it with psycopg2 is through the autocommit attribute:
import psycopg2
con = psycopg2.connect(...)  # fill in your connection parameters
con.autocommit = True        # DROP DATABASE cannot run inside a transaction block
cur = con.cursor()
cur.execute('DROP DATABASE db;')

Related

How to create multiple databases with Postgres in pgAdmin4

I am trying to run the following query in pgAdmin:
CREATE DATABASE abc;
CREATE DATABASE xyz;
And I get the following error:
ERROR: current transaction is aborted, commands ignored until end of transaction block
SQL state: 25P02
I'm relatively new to postgres.
With SQL Server it's possible to create multiple databases in a single query, with the "GO" batch separator in between if necessary.
I've tried to google this error, and most answers are to simply run each line separately.
That would work, but I'm curious why this doesn't work.
It may also be a setting in pgAdmin.
The "autocommit" is currently on. I've tried it off, and same result.
I'm using postgres 14.5 (in aws)

PostgreSQL 11.16 cannot execute CREATE TABLE in a read-only transaction

I have a PostgreSQL database running on an Azure machine. When I try to create a table on a database, I get the error "cannot execute CREATE TABLE in a read-only transaction". The SQL query is being executed by a Python script using a SQLAlchemy engine. I tried a similar query in pgAdmin installed on my machine and got the same error. I also noticed that I do not have this issue when I connect to the database from a colleague's machine.
After further research, I found that if I execute SELECT pg_is_in_recovery(); in my pgAdmin it returns true, and false on my colleague's machine.
Let me know if there is any way to correct this.
SELECT pg_is_in_recovery() returning true means the database has only read access.
Can you check your permissions?
You can check the postgresql.conf file and the attribute default_transaction_read_only,
or try this:
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
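Before flipping the setting, it may help to confirm what the session actually sees; a quick check along these lines:
SELECT pg_is_in_recovery();           -- true means a standby/replica, which is always read-only
SHOW default_transaction_read_only;   -- on means new transactions start read-only
SHOW transaction_read_only;           -- the effective value for the current transaction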
The issue was that our PostgreSQL machine is an HA machine, and that I was connecting to an IP address rather than the domain name.

Testing replication from Citus to my RDS Aurora Postgres: no data arriving on the subscriber

I am testing replication from Citus (cloud-hosted) to my RDS Aurora Postgres, following https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Replication.Logical.html#AuroraPostgreSQL.Replication.Logical.Configure
Everything ran successfully, but no data is arriving on the subscriber. What may be wrong, and how do I troubleshoot?
SELECT count(*) FROM LogicalReplicationTest;
Note: on the Citus DB I can see the publication, but in my RDS instance I can't see the subscription in the list, even though the CREATE SUBSCRIPTION testsub CONNECTION ... statement completed successfully.
I observed that when I run SELECT count(*) FROM LogicalReplicationTest; there is no data, only the column name with a lock symbol next to it (shown in a screenshot in the original post). Any idea what this lock symbol means, apart from marking a read-only column? Why is it not listing the data inside that table, which exists in the publisher DB? Is there any permission I have to grant on the table when creating a publication?

Unknown INTERNAL_ERROR trying to remove Google Cloud SQL postgres database

I have a Google Cloud SQL instance created and some databases. I need to delete one database but, for some reason, I currently get an unknown error.
Here is the debug output:
$ gcloud sql databases delete myDatabase -i myInstance --verbosity debug
DEBUG: Running [gcloud.sql.databases.delete] with arguments: [--
instance: "myInstance", --verbosity: "debug", DATABASE: "myDatabase"]
The database will be deleted. Any data stored in the database will be
destroyed. You cannot undo this action.
Do you want to continue (Y/n)?
Deleting Cloud SQL database...failed.
DEBUG: (gcloud.sql.databases.delete) INTERNAL_ERROR
Traceback (most recent call last):
File "~/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 797, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "~/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 757, in Run
resources = command_instance.Run(args)
File "~/google-cloud-sdk/lib/surface/sql/databases/delete.py", line 84, in Run
'Deleting Cloud SQL database')
File "~/google-cloud-sdk/lib/googlecloudsdk/api_lib/sql/operations.py", line 81, in WaitForOperation
sleep_ms=_BaseOperations._INITIAL_SLEEP_MS)
File "~/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 226, in RetryOnResult
if not should_retry(result, state):
File "~/google-cloud-sdk/lib/googlecloudsdk/api_lib/sql/operations.py", line 65, in ShouldRetryFunc
raise result
OperationError: INTERNAL_ERROR
ERROR: (gcloud.sql.databases.delete) INTERNAL_ERROR
In the Google documentation there isn't any entry about it. Has anyone seen this error?
Any help will be greatly appreciated.
Thanks!
My hunch is that the database still has open connections which are preventing you from deleting it.
I think this issue is more common with Postgres databases than MySQL databases because with Postgres there are more likely to be lingering connections (this is a Postgres behavior rather than a GCP issue).
To test this, connect to the shell of the Cloud SQL instance and run the following command in order to prevent any future connections to the database:
REVOKE CONNECT ON DATABASE DATABASE_NAME FROM public;
Then, connect to the database you would like to delete, and terminate all connections to this database apart from your current one by issuing the following command:
SELECT pid, pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = current_database() AND pid <> pg_backend_pid();
Exit the postgres shell.
I would also suggest disabling automatic backups. If an automatic backup is taking place it can produce an exception if you simultaneously try to delete the database.
Now try the gcloud sql databases delete command again.
I had the same issue.
My problem was that the database didn't belong to the postgres user.
In order to be able to connect with the database owner user, you need to create a database in the Cloud SQL "Databases" tab and name it just like your username.
Then you should be able to login:
gcloud sql connect <instance name> --user=<owner of database> --quiet
Now just run:
DROP DATABASE <database name>;
If successful, the response should be:
DROP DATABASE
I had the same problem deleting a MySQL database on Cloud SQL in GCP. It turns out I needed to disable some of the DB's configuration settings before deleting it, as explained here.

ERROR: cannot execute CREATE TABLE in a read-only transaction

I'm trying to setup the pgexercises data in my local machine. When I run: psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are:
trying CREATE statements on a read-only replica (the entire instance is read-only),
<username> has default_transaction_read_only set to ON,
the database has default_transaction_read_only set to ON.
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above was directly applicable.
One possibility that would technically explain this is that default_transaction_read_only is ON in the postgresql.conf file, but set to OFF for the database postgres (the one that the invocation of psql connects to) through an ALTER DATABASE statement that supersedes the configuration file.
That would be why CREATE DATABASE works, but then as soon as it connects to a different database with \c, the default_transaction_read_only setting of the session would flip to ON.
But of course that would be a pretty weird and unusual configuration.
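One way to test that hypothesis would be to look for per-database or per-role overrides recorded by ALTER DATABASE ... SET and ALTER ROLE ... SET; a sketch against the pg_db_role_setting catalog:
SHOW default_transaction_read_only;  -- effective value in the current session
-- list ALTER DATABASE ... SET / ALTER ROLE ... SET overrides
SELECT d.datname, s.setrole, s.setconfig
FROM pg_db_role_setting s
LEFT JOIN pg_database d ON d.oid = s.setdatabase;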
Reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal with dropdb exercises and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was as if the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled once the database could no longer write anything. I am using Postgres on Azure. I don't know if the same thing would happen on a dedicated server.
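If you suspect the same cause, you can get a rough idea of how much space each database is using from SQL, assuming you can still open a (read-only) session at all; a quick check:
-- may require CONNECT privilege on each database (or membership in pg_read_all_stats)
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;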
I had the same issue with a Postgres UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it returns either true or false:
SELECT pg_is_in_recovery()
true -> the database has only read access
false -> the database has full access
If it returns true, then check with the DBA team for full access, and also try a ping from the command prompt to ensure connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had a master node and replication nodes, and the master node became a replication node, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns true then it is "read-only", and I suppose you should switch to whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case, the connection's read-only setting was turned on (the original answer showed this in a screenshot).
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
If, like me, you are trying to create a DB on Heroku and are stuck as this message shows up on the dataclip tab, I did this:
Choose Resources from (Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
Choose Settings out of (Overview, Durability, Settings, Dataclip).
Then in Administration -> Database Credentials choose View Credentials...
Then open a terminal, fill in that info here, and press enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy-paste it from there, and you can run Postgres commands.
I suddenly started facing this error on Postgres installed on my Windows machine while running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
If you are using Azure Database for PostgreSQL your server gets into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. My cause was not having granted permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
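To check whether a missing sequence grant is actually the problem before changing anything, the built-in privilege inspection functions can help; a small sketch, reusing the role and sequence names from the GRANT above:
SELECT has_sequence_privilege('ronshome_user',
  'word_mash_word_cube_template_description_reference_seq',
  'USAGE, SELECT, UPDATE');  -- true if any of the listed privileges is held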
If you are facing this issue with an RDS cluster, check your endpoint and use the writer instance endpoint; then it should work.
The issue can be due to IntelliJ config:
Go to the Database view > click on Data Source Properties (Shift+Enter) > (select your data source) >
Options tab > under Connection: uncheck Read-only.
For me it was Azure PostgreSQL failing over to standby during maintenance in Azure and never failing back to the master while PostgreSQL was in HA mode. You can check this event in Service Health, and also check which zone your current VM is running from. If it's 2 and not 1, then most likely that's the result of the events described above.