Does Amazon RDS have default settings on their PostgreSQL? - postgresql

Background
I have a Laravel application with migration/seeding scripts that have always worked fine when run on my localhost (OS X), and even on an AWS EC2 instance when the DB is local.
Problem
The trouble started when I decided to connect the EC2 instance to a separate RDS instance (we're talking staging here). I ran my migration scripts as always, but then I got this rude error:
SQLSTATE[42704]: Undefined object: 7 ERROR: constraint "users_type_check" of relation "users" does not exist
(SQL: ALTER TABLE users DROP CONSTRAINT users_type_check;)
Not sure why this is only happening on RDS. Ideas?

Agree with Vao's comment.
There is hardly anything RDS specific in that error message.
It seems you are under the impression that the users_type_check constraint is there, when in reality it may not be.
You mentioned you are populating the database by running migrations, so are you sure those migrations have actually run and populated the database correctly in the first place?
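A quick way to verify is to list the constraints Postgres actually has on the table, with a query run against the same database the migrations target:
select conname, contype from pg_constraint where conrelid = 'users'::regclass;
If users_type_check is absent from the output, the migration that should have created it never ran against this RDS database.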

Related

"error: too many connections for database 'postgres'" when trying to connect to any Postgres 13 instance

My team and I are currently experiencing an issue where we can't connect to Cloud SQL's Postgres instance(s) from anything other than the psql CLI tool. We get a too many connections for database "postgres" error (in pgAdmin, DBeaver, and our Node TypeORM/pg backend). It initially happened on our (only) Postgres database instance. After restarting, stopping and starting again, and increasing machine CPU/memory all proved to do nothing, I deleted the database instance entirely and created a new one from scratch.
However, after a few hours the problem came back. I know that we're not actually hitting too many connections, as I am able to query pg_stat_activity from the psql command line, and only one of the listed connections (postgres username) is ours.
My coworker also can't connect at all - not even from the psql CLI.
If it matters, we are using PostgreSQL 13, europe-west2 (London), single zone availability, db-g1-small instance with 1.7GB memory, 10GB HDD, and we have public IP enabled and the correct IP addresses whitelisted.
I'd really appreciate it if anyone has any insights into what's causing this.
EDIT: I further increased the instance size (to no longer be a shared core), and I managed to successfully connect my backend to it. However, my psql CLI no longer works - it appears that only the first client to connect after a restart is allowed to connect (even if it disconnects, other clients can't connect...).
From the error message, it is clear that the database "postgres" has a custom connection limit (set, for example, by ALTER DATABASE postgres CONNECTION LIMIT 1), and apparently it is quite small. Why is everyone trying to connect to that database anyway? Usually the postgres database is reserved for maintenance operations, and you should create other databases for daily use.
You can see the setting with:
select datconnlimit from pg_database where datname='postgres';
I don't know if the low setting is something you did, or maybe Google does it on its own for their cloud offering.
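If the limit does turn out to be artificially low, it can be lifted (assuming you have sufficient privileges on the instance):
alter database postgres connection limit -1;
A value of -1 means no per-database limit, leaving only max_connections in effect.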
@jjanes had the right idea.
I created another database within the Cloud SQL instance that wasn't named postgres and then it was fine.
It wasn't anything to do with maximum connection settings (as this was within Google Cloud SQL) or not closing connections (as TypeORM/pg does this already).
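For reference, the fix amounts to creating and switching to a dedicated application database rather than postgres; from psql, with appdb as a placeholder name:
create database appdb;
\c appdb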

Any way in GORM to preview the schema DDL that will run during migration, before running it?

We are using Golang and GORM for ORM and database schema migrations. Using the default AutoMigrate is nice for adding tables/columns/indexes that are net new. One big issue with Postgres, specifically AWS Aurora Postgres, is that even a simple ALTER TABLE ADD COLUMN with no default and no NOT NULL constraint still takes a full AccessExclusiveLock. This caused us downtime in production for what is otherwise infrastructure we can deploy anytime with zero downtime.
Postgres claims this won't cause a long-held lock, but I think Aurora has some storage engine magic that does cause one, probably for read-replica consistency?
Regardless, in our deploy shell script we are trying to see if we can ask GORM to give us the DDL it would run, like a preview, without running it. Any way to do that?
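Not a GORM-specific answer, but worth noting: since Postgres DDL is transactional, one way to try a migration's statements and observe their locking without committing anything is a BEGIN/ROLLBACK block; a psql sketch against a hypothetical users table:
begin;
alter table users add column nickname text;
select locktype, mode, granted from pg_locks where relation = 'users'::regclass;
rollback;
The pg_locks query shows the AccessExclusiveLock the ALTER TABLE takes; the rollback discards the change.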

ERROR: cannot execute CREATE TABLE in a read-only transaction

I'm trying to set up the pgexercises data on my local machine. When I run psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally, the most plausible reasons for this kind of error are (each can be ruled out, as sketched below):
running CREATE statements on a read-only replica (the entire instance is read-only).
<username> has default_transaction_read_only set to ON.
the database has default_transaction_read_only set to ON.
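A minimal psql check for all three, run as the same user:
select pg_is_in_recovery();
show default_transaction_read_only;
select rolconfig from pg_roles where rolname = current_user;
pg_is_in_recovery() returns true on a read-only replica; the SHOW gives the effective setting for this session; rolconfig lists any per-user overrides.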
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above were directly applicable.
One possibility that would technically explain this is that default_transaction_read_only is ON in the postgresql.conf file, and set to OFF for the database postgres (the one the invocation of psql connects to) through an ALTER DATABASE statement that supersedes the configuration file.
That would be why CREATE DATABASE works, but then as soon as it connects to a different database with \c, the default_transaction_read_only setting of the session would flip to ON.
But of course that would be a pretty weird and unusual configuration.
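Any such per-database override is visible in the catalogs; a query that lists them, runnable from any database on the instance:
select d.datname, s.setconfig from pg_db_role_setting s join pg_database d on d.oid = s.setdatabase;
A setconfig entry like {default_transaction_read_only=off} attached only to postgres would match the scenario described above.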
Reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal (dropdb exercises) and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE in a read-only transaction, and others.
These all followed a cannot execute INSERT in a read-only transaction. It was as if the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled once the database could no longer write anything to disk. I am using Postgres on Azure. I don't know if the same thing would happen on a dedicated server.
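If you suspect the same cause, the current footprint is easy to check against your plan's storage cap:
select pg_size_pretty(pg_database_size(current_database()));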
I had the same issue with a Postgres UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it returns either true or false:
SELECT pg_is_in_recovery()
true -> the database has read-only access
false -> the database has full access
If it returns true, check with your DBA team for full access, and also try a ping from the command prompt to confirm connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had master and replica nodes, and the master node became a replica node, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns True then it is "read-only", and I suppose you should switch to using whatever master node you have now.
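If the node is in recovery, you can often also see which primary it is replicating from (PostgreSQL 9.6 or later):
select status, conninfo from pg_stat_wal_receiver;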
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case, the connection's Read-only option was turned on; unchecking it fixed the error.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection URL), which must put the connection in a read-only state. Running the same function with the same user against the writer node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
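While that startup/recovery process runs, the instance stays read-only; on PostgreSQL 10 or later you can watch replay progress with:
select pg_is_in_recovery(), pg_last_wal_replay_lsn(), pg_last_xact_replay_timestamp();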
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the Dataclips tab, I did this:
Choose Resources from (Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
Choose Settings out of (Overview, Durability, Settings, Dataclips).
Then in Administration -> Database Credentials, choose View Credentials...
Then open a terminal, fill that info in here, and enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy-paste it from there and you can run Postgres commands.
I suddenly started facing this error with Postgres installed on my Windows machine, when I was running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
If you are using Azure Database for PostgreSQL, your server goes into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. My cause was not having granted permission on the sequence:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
If you are facing this issue with an RDS instance cluster, check your endpoint and use the writer instance endpoint; then it should work.
The issue can be due to IntelliJ config:
Go to the Database view > click on Data Source Properties (Shift + Enter) > (select your data source) >
Options tab > under Connection: uncheck Read-only
For me it was Azure PostgreSQL failing over to standby during maintenance in Azure and never failing back to master when PostgreSQL was in HA mode. You can check this event in Service Health and also check which zone your current VM is running from. If it's 2 and not 1, then most likely that's the result of the events described above.

PostgreSQL: After using CreateLang, I still get an error "42704: language "plpgsql" does not exist"

I am using pgProvider in an MVC3 Mono application and had no issues on Windows with Postgres 9.2. I am migrating to my production Linux/Mono environment, and received an error when attempting to access the provider:
"42704: language "plpgsql" does not exist"
I am using PostgreSQL 8.4 on CentOS 6.3.
I looked at some info, and ran the following:
createlang -d dbname plpgsql
When I run it again, it affirms that the language is already there.
I restarted postgresql and my app. However, I still get the above error.
Does anyone have any info on why I would still receive "plpgsql does not exist" after it is installed?
In general this sort of error is an indication that something is not as you expect and you need to challenge your assumptions.
Please note that languages do not require restarting the database server to take effect, on any version of PostgreSQL. Also note that which languages are available is a per-database catalog entry (languages, unlike users and roles, are not cluster-global).
So in troubleshooting this, you need to start with:
Is this the right db? The right server?
Is my application connecting to the same db/server I think it is?
Given that createlang affirms that the language exists after you run it, this can't be an issue with PostgreSQL raising a phony error. Instead it must be a problem with the changes you made not affecting the database your application is using.
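Both questions can be answered from the application's own connection, using standard functions and catalogs:
select current_database(), inet_server_addr(), inet_server_port();
select lanname from pg_language;
If plpgsql is absent from the second result, createlang was run against a different database than the one the application uses.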

Cloudfoundry relation "user" does not exist

I'm currently trying to deploy my application on Cloud Foundry. My web app is built on Spring + Postgres + Hibernate. The problem I'm having is that whenever I try to access the database, I get the error relation "user" does not exist.
I am sure that it is properly connecting to a Postgres database. But my problem is that it seems to be connecting not to my local database but to somewhere else, which is why it can't access the user table I defined.
I've tried executing a query like "select * from pg_tables", and it executes fine, but the result is not the same as on my local database. I've also seen that the table owner is vcap. I wonder why I'm having this error and how I can solve it.
I've been trying to fix this for a couple of days, but to no avail, which is why I've posted here.
I hope someone could help me here.
I'll appreciate it a lot!
As a follow-on to ebottard's comment suggestion, which addresses provisioning a service:
a way to prepopulate the database is to use a tunnel that connects your Postgres client to the Cloud Foundry Postgres database, as described in Tunneling to a Cloudfoundry Service.
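With the legacy vmc client that guide describes, the invocation looks something like the following (the service name is a placeholder, and the exact syntax depends on your client version):
vmc tunnel my-postgres-service psql
This opens a local port and drops you into psql against the Cloud Foundry database, so you can create and seed your tables there.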