ALTER command takes too much time in AWS RDS - postgresql

I have a PostgreSQL instance running on AWS RDS.
I logged in as the admin user in the terminal and tried to add a column to a table whose owner is admin.
The table has only 23 rows, and it's a dev machine, so there are no connections to it apart from mine. Yet the command takes forever to respond, and I do not know what is going on.
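A likely culprit is locking: ALTER TABLE needs an ACCESS EXCLUSIVE lock, so even a single idle session that still has a transaction open on the table will block it indefinitely. A quick check for blockers (a sketch, assuming PostgreSQL 9.6+ for pg_blocking_pids):
SELECT pid, state, query, pg_blocking_pids(pid) AS blocked_by
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
If a blocker shows up, SELECT pg_terminate_backend(<pid>); should let the ALTER proceed.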

Related

Jobs is not shown in pgAdmin 4

I have a problem with pgAdmin 4: jobs are not displayed even though I am the superuser.
Jobs let you attach scheduled tasks to a PostgreSQL database. Unlike triggers, which fire only in response to actions on the database, jobs can run, for example, at a given point in time.
With the pgAgent add-on from Stack Builder this should be possible in pgAdmin.
Does anyone have a solution to my misery?
I tried making a new superuser, reconnecting to the server, and looking through the properties in pgAdmin.
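One thing worth checking (a sketch, not a confirmed fix): pgAdmin 4 only shows the Jobs node when the pgAgent extension is present in the maintenance database the server connection uses, so after installing pgAgent from Stack Builder you may still need to create the extension there and refresh the server:
CREATE EXTENSION pgagent;  -- run in the maintenance database, usually postgres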

"error: too many connections for database 'postgres'" when trying to connect to any Postgres 13 instance

My team and I are currently experiencing an issue where we can't connect to Cloud SQL's Postgres instance(s) from anything other than the psql CLI tool. We get a too many connections for database "postgres" error (in PGAdmin, DBeaver, and our Node typeorm/pg backend). It initially happened on our (only) Postgres database instance. After restarting, stopping and starting again, and increasing machine CPU/memory all proved to do nothing, I deleted the database instance entirely and created a new one from scratch.
However, after a few hours the problem came back. I know that we're not actually hitting too many connections, because I am able to query pg_stat_activity from the psql command line, and only one of the listed connections (postgres username) is ours.
My coworker also can't connect at all - not even from psql cli.
If it matters, we are using PostgreSQL 13, europe-west2 (London), single zone availability, db-g1-small instance with 1.7GB memory, 10GB HDD, and we have public IP enabled and the correct IP addresses whitelisted.
I'd really appreciate if anyone has any insights into what's causing this.
EDIT: I further increased the instance size (to no longer be a shared core), and I managed to successfully connect my backend to it. However my psql cli no longer works - it appears that only the first client to connect is allowed to connect after a restart (even if it disconnects, other clients can't connect...).
From the error message, it is clear that the database "postgres" has a custom connection limit (set, for example, by ALTER DATABASE postgres CONNECTION LIMIT 1), and apparently it is quite small. Why is everyone trying to connect to that database anyway? Usually the postgres database is reserved for maintenance operations, and you should create other databases for daily use.
You can see the setting with:
select datconnlimit from pg_database where datname='postgres';
I don't know if the low setting is something you did, or maybe Google does it on its own for their cloud offering.
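If the limit turns out to be something you can simply undo, it can be lifted with (-1 means no per-database limit):
ALTER DATABASE postgres CONNECTION LIMIT -1;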
@jjanes had the right idea.
I created another database within the Cloud SQL instance that wasn't named postgres and then it was fine.
It wasn't anything to do with maximum connection settings (as this was within Google Cloud SQL) or with not closing connections (TypeORM/pg already does this).
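For anyone in the same spot, the fix really is that small (appdb is a placeholder name; point your clients at it instead of postgres):
CREATE DATABASE appdb;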

ERROR: cannot execute CREATE TABLE in a read-only transaction

I'm trying to set up the pgexercises data on my local machine. When I run: psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are:
running CREATE statements on a read-only replica (the entire instance is read-only)
<username> has default_transaction_read_only set to ON
the database has default_transaction_read_only set to ON
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above was directly applicable.
One possibility that would technically explain this is that default_transaction_read_only is set to ON in postgresql.conf and set to OFF for the database postgres (the one the psql invocation connects to) through an ALTER DATABASE statement that supersedes the configuration file.
That would explain why CREATE DATABASE works, but as soon as the script connects to a different database with \c, the session's default_transaction_read_only setting flips to ON.
But of course that would be a pretty weird and unusual configuration.
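Such per-database or per-role overrides can be checked from the standard catalogs (a sketch, not from the original answer):
SELECT d.datname, r.rolname, s.setconfig
FROM pg_db_role_setting s
LEFT JOIN pg_database d ON d.oid = s.setdatabase
LEFT JOIN pg_roles r ON r.oid = s.setrole;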
Reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal (dropdb exercises) and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE in a read-only transaction, and other similar errors.
They all followed a cannot execute INSERT in a read-only transaction. It was as if the connection had switched itself to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled once the database could no longer write anything. I am using Postgres on Azure; I don't know if the same thing would happen on a dedicated server.
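A quick way to check how close you are to that limit is to look at the database size (standard functions, shown as a sketch):
SELECT pg_size_pretty(pg_database_size(current_database()));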
I had the same issue with a PostgreSQL UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it returns either true or false:
SELECT pg_is_in_recovery();
true -> the database has read access only
false -> the database has full access
If it returns true, check with the DBA team about full access, and also try a ping from the command prompt to ensure connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had a master node and a replica node, and the master became a replica, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(); if it returns true then it is read-only, and I suppose you should switch to whatever the current master node is.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case the connection's read-only option was switched on; turning it off in the connection settings fixed the error.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
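While recovery is running, progress can be watched from SQL (a sketch, assuming PostgreSQL 10+ for the WAL-named functions):
SELECT pg_is_in_recovery(),
       pg_last_wal_replay_lsn(),
       pg_last_xact_replay_timestamp();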
If, like me, you are trying to create a DB on Heroku and are stuck as this message shows up in the Dataclips tab, I did this:
In the Heroku dashboard, choose Resources (from the tabs Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
On the database add-on page, choose Settings (from Overview, Durability, Settings, Dataclips).
Then under Administration -> Database Credentials choose View Credentials...
Then open a terminal, fill that info in here, and press Enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy and paste it from the credentials page, and you can run Postgres commands.
I suddenly started facing this error with Postgres installed on my Windows machine while running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new one.
If you are using Azure Database for PostgreSQL, your server is put into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. My cause was not granting permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
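If several sequences are affected, the same grant can be applied schema-wide (a sketch reusing the role name above):
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO ronshome_user;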
If you are facing this issue with an RDS cluster, check your endpoint and use the writer instance endpoint; then it should work.
The issue can be due to IntelliJ configuration:
Go to the Database view > click Data Source Properties (Shift+Enter) > select your data source >
Options tab > under Connection: uncheck Read-only
For me it was Azure PostgreSQL failing over to standby during maintenance in Azure and never failing back to the master while PostgreSQL was in HA mode. You can check this event in Service Health and also check which zone your current VM is running from. If it's 2 and not 1, then most likely that's the result of the events described above.

Rename the Amazon RDS master username

Changing the password is easily done through the console. Is there any way to change the master username after creation on RDS for PostgreSQL? If so, how?
You can't change the username. The following links describe how to change the master password; if Amazon ever adds the ability to change the username, you will find it there.
Look at the AWS CLI for RDS:
modify-db-instance --db-instance-identifier <value> --master-user-password (string)
--master-user-password (string)
The new password for the DB instance master user. Can be any printable
ASCII character except "/", """, or "@".
Changing this parameter does not result in an outage, and the change is
applied asynchronously as soon as possible. Between the time of the
request and the completion of the request, the MasterUserPassword
element exists in the PendingModifiedValues element of the operation
response. Default: uses the existing setting.
Constraints: Must be 8 to 41 alphanumeric characters (MySQL, MariaDB,
and Amazon Aurora), 8 to 30 alphanumeric characters (Oracle), or 8 to
128 alphanumeric characters (SQL Server).
The Amazon RDS Command Line Interface (CLI) has been deprecated. Instead, use the AWS CLI for RDS.
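With the current AWS CLI, the equivalent call looks roughly like this (the instance identifier and password are placeholders; --apply-immediately is optional):
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --master-user-password 'NewMasterPassword1' \
    --apply-immediately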
Via the AWS Management Console, choose the instance you need to reset the password for, click ‘Modify’, then choose a new master password.
If you don’t want to use the AWS Console, you can use the rds-modify-db-instance command (as per Amazon’s documentation for RDS) to reset it directly, given the AWS command line tools:
rds-modify-db-instance instance-name --master-user-password examplepassword
No. As of April 2019 one cannot reset the 'master username'.
You cannot do it directly. However you can use the database migration service from AWS:
https://aws.amazon.com/dms/
Essentially you define the current database instance as your source and the new database with the correct username as your target of the migration.
This way you migrate the data from one to another database instance. As such you can change all properties including the username.
This approach has some drawbacks:
You need to configure the migration, which takes a bit of time.
The data is migrated, which may lead to unexpected behavior, since not everything is migrated (e.g. views).
Depending on how you set everything up, you may experience downtime.
Though this may not be ideal for every use-case, I did find a workaround that allows for changing the username of the master user of an AWS RDS DB.
I am using PgAdmin4 with PostgreSQL 14 at the time of writing this answer.
Login with the master user you want to change the name of
Create a new user with the following privileges and membership
Privileges and Membership
Can login - yes
Superuser - no (not possible with a managed AWS RDS DB instance, if you need complete superuser access DO NOT use a managed AWS RDS DB)
Create roles - yes
Create databases - yes
Inherit rights from the parent roles - yes
Can initiate streaming replication and backups - no (again, not possible directly without superuser permission)
Be sure to note the password used, as you will need to access this new account at least once to complete the name change
Register a server with the credentials created in step 2. Disconnect from the original server but do NOT remove it! Connect to the new server you created
Expand Login/Group Roles and click on the master user whose name you are changing
Click the edit icon, edit the name, and save.
Right click the server with the master username, select Properties
Update the name under the General tab if desired
Update the username under the Connection tab to the new master username you set above
Save and reconnect to the server with the master user
You have successfully updated the master user's name on a managed AWS RDS DB instance, proud of you!
As @tdubs's answer states, it is possible to change the master username for a Postgres DB instance in AWS RDS. Whether it is advisable – probably not.
Here are the SQL commands you need to issue:
Create a temporary user with the CREATEROLE privilege (while logged in with the old master user):
CREATE ROLE temp_master PASSWORD '<temporary password>' LOGIN CREATEROLE;
Now connect to the database with the temp_master user
ALTER ROLE "<old_master_username>" RENAME TO "<new_master_username>";
-- NOTICE: MD5 password cleared because of role rename
ALTER ROLE "<new_master_username>" PASSWORD '<new password>';
Now connect to the database with the <new_master_username> user in order to clean up the temporary role
DROP ROLE temp_master;
And you're done!
Warning
AWS RDS does not know that the master username has been changed, so it will keep displaying the old one and assumes that is still the master username.
This means that if you use the AWS CLI or website to update the master password, it will have no effect.
And when connecting to the database with psql you'll see:
WARNING: role "<old_master_username>" does not exist

Running Heroku Postgres with least privilege

Can I connect to a Heroku Postgres database via a web/application without the risk of dropping a table?
I'm building a Heroku application for a third party which uses Heroku Postgres for the backend. The third party are very security sensitive so I'm looking at applying "Layered security" throughout the application. So for example checking for SQL injection attacks at the web/application layer. Applying a "Layered security" approach I should also secure the database in case a potential SQL injection attack is missed, which might drop a database table.
In other systems I have built there would be a minimum of two users in the database. Firstly the database administrator who creates/drops tables, index, triggers, etc and the application user who would run with less privileges than the database administrator who could only insert and update records for example.
Within the Heroku Postgres setup there doesn't appear to be a way to create another user with fewer privileges (without the “drop table” option). So the application must connect with the default Heroku Postgres user, and therefore the risk of a “drop table” exists.
I'm running the Heroku Postgres Crane add-on.
Has anyone come up against this, or got any creative workarounds for this scenario?
With Heroku Postgres you only have a single account to connect with. One option that does exist for this type of functionality is to create a follower on Heroku Postgres. A follower is asynchronously kept up to date (usually only a second or so behind) and is read-only. This would allow you to grant access to the follower to those that need it, while not providing them with the details for the leader DB.
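For reference, creating a follower with the Heroku CLI looks roughly like this (the plan name and config var are placeholders for your own; heroku pg:info shows the leader's attachment name):
heroku addons:create heroku-postgresql:standard-0 --follow HEROKU_POSTGRESQL_CHARCOAL_URL --app myapp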