AWS Postgres DB "does not exist" when connecting with PG - postgresql

I can't seem to connect to my DB instance on AWS. I'm using the pg package, and following the examples from its website isn't working.
A search for "aws postgres database does not exist" really isn't returning anything helpful, and going through the open/closed issues on the pg GitHub repo isn't helpful either.
Running nc <RDS endpoint> <port number> returns a success message, so the instance is definitely reachable. Every value placed in the Client config is copy/pasted from my DB instance.
I'm starting to wonder if the database has a different name than what is shown in the "Instances" section of RDS on AWS?
const { Client } = require('pg'); // import that the original snippet was missing

const client = new Client({
  host: '<<RDS ENDPOINT>>',   // the RDS endpoint only, without a port or database appended
  database: '<<RDS NAME>>',   // maybe this isn't the real name?
  user: '<<username>>',
  password: '<<password>>',
  port: <<port>>
});

client.connect()
  .then(() => {
    console.log('connected');
  })
  .catch(err => {
    console.log(err);
  });

I ran into this issue as well. It turns out the DB instance name and the actual database name are two different things: even though you give the instance a name when you create your DB, the database itself defaults to 'postgres'. When I put in the name of my DB instance I got the same error, but when I just put in 'postgres' it worked fine. Try that and see if it works for you.

The initial configuration of RDS instances is quite confusing, since the "name" parameter you fill in is only the name of the instance, not the name of an actual database. If you want AWS to create a database at the moment it creates the DB instance, you have to expand "Additional configuration" and explicitly fill in the "Initial database name" field.

Try adding postgres as dbname. It worked for me!

After connecting with postgres as the database name, you can type \l to list all databases on that PostgreSQL cluster. That will return a handful of default databases plus the one you created, and the name shown there is the one to use when connecting.
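If you prefer to check this from Node rather than psql, here is a minimal sketch built on the pg client from the question: it connects to the default postgres database and then lists every database on the instance, which is the programmatic equivalent of \l. The endpoint and credentials are placeholders, not real values.

const { Client } = require('pg');

async function listDatabases() {
  // Connect to the default maintenance database, not the RDS instance name.
  const client = new Client({
    host: '<<RDS ENDPOINT>>',
    database: 'postgres',
    user: '<<username>>',
    password: '<<password>>',
    port: 5432
  });
  await client.connect();
  // Equivalent of \l: every non-template database on the cluster.
  const res = await client.query('SELECT datname FROM pg_database WHERE datistemplate = false');
  console.log(res.rows.map(r => r.datname));
  await client.end();
}

listDatabases().catch(console.error);

Whatever name shows up in that list is the value to put in the database field of the Client config.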

I ran into the same issue after creating a DB instance on AWS RDS. I wanted to test the connection to my database using PostBird, and using my actual DB instance name did not work.
When I put "postgres" in the database name field it worked, which means my default username was "postgres" and the database name was also "postgres".
I hope it will help you too.

Try this if the above answers do not work:
Remove the :5439/lab ending so that the Host value ends with .com; the port and database name should not be appended to the endpoint.
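To make that concrete, here is a small before/after sketch using the Client config from the question. The endpoint below is made up, and the :5439/lab values are just the ones from this answer; the point is that the port and database name belong in their own fields, not on the host.

const { Client } = require('pg');

// Wrong: port and database baked into the host value.
//   host: 'mydb.abc123xyz.us-east-1.rds.amazonaws.com:5439/lab'
// Right: host ends with .com; port and database are separate fields.
const client = new Client({
  host: 'mydb.abc123xyz.us-east-1.rds.amazonaws.com',
  port: 5439,
  database: 'lab',
  user: '<<username>>',
  password: '<<password>>'
});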

Related

Prisma DB Can't connect to AWS RDS

I have a Next.js project that uses Prisma as the ORM. I can connect just fine to my local Postgres DB, but I'm getting this error when running npx prisma migrate.
Error: P1001: Can't reach database server at db-name.*.us-west-2.rds.amazonaws.com:5432.
schema.prisma:
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
  // url = "postgresql://master_username:master_password@aws_host:5432/db_name"
}
The RDS DB is currently public, and I'm positive that I've copied over the RDS credentials correctly. There doesn't seem to be anything else I should be including for the connection to work, but I'm not getting any other info as to why I can't reach the DB server.
Seems like you have to replace db-name.*.us-west-2.rds.amazonaws.com with the name of your actual database, unless you replaced it for the purpose of asking this question. Specifically the part where it says db-name.*.
Docs: https://www.prisma.io/docs/reference/api-reference/error-reference#common
P1001 indicates that it couldn't find the database given the connection string, NOT necessarily that the credentials you provided were wrong. Make sure you're specifying the correct database name/host and whatever else you need to make it work for AWS.
Somehow I was able to connect to RDS after deleting and re-creating the DB for the third time. I confirmed the connection through pgAdmin, then tried again with my app deployed to Vercel.
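When P1001 shows up, it can also help to take Prisma out of the picture and test the raw connection first. Below is a rough sketch using node-postgres against the same DATABASE_URL the schema reads; whether the ssl option is needed depends on how your RDS instance is configured, so treat that part as an assumption rather than a requirement.

const { Client } = require('pg');

async function testConnection() {
  const client = new Client({
    connectionString: process.env.DATABASE_URL,
    // ssl: { rejectUnauthorized: false } // uncomment if your RDS instance enforces SSL
  });
  await client.connect();
  const res = await client.query('SELECT version()');
  console.log(res.rows[0].version);
  await client.end();
}

testConnection().catch(err => {
  // A connection timeout here usually points at networking (security group,
  // public accessibility, VPC) rather than at Prisma itself.
  console.error(err);
});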

How do I connect DBeaver to CockroachDB Serverless?

How do I connect DBeaver with my CockroachDB Serverless database? I get errors that look like this:
FATAL: codeParamsRoutingFailed: missing cluster name in connection string
Make sure to include the cluster name in the database field.
The database should be something like: cluster-name-1234.databasename.
To find the cluster name, including the tenant id, use the Connect modal on the CockroachCloud database page.
Most importantly, append .defaultdb to the end of your cluster name in the "Database" field, for example databasename-dev-369.defaultdb.
This was the least intuitive config detail, easily killed an hour of my time :/ Hope this helps.
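The same cluster-prefixed database name works outside DBeaver as well. Here is a rough node-postgres sketch; the host, cluster name, and credentials are placeholders in the format described above, not values from a real cluster.

const { Client } = require('pg');

const client = new Client({
  host: 'free-tier.gcp-us-central1.cockroachlabs.cloud', // example serverless host
  port: 26257,
  user: '<<username>>',
  password: '<<password>>',
  database: 'cluster-name-1234.defaultdb', // cluster name + '.' + database name
  ssl: true                                // CockroachDB Serverless requires TLS
});

client.connect()
  .then(() => console.log('connected'))
  .catch(err => console.error(err));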

"Null value for 'password'" - PostgreSQL in DataGrip

I have browsed the web but didn't find any answers or solutions to my problem, so I hope somebody here can help me:
I downloaded DataGrip because I wanted to replace pgAdmin 4. When I try to create a PostgreSQL data source and host it locally, I get the following error: "Null value for 'password'". In pgAdmin 4 I do remember creating a password to access my local database when I first set it up, and it worked fine. However, I don't remember creating a password when installing DataGrip. Is there some default password that I'm not aware of? And what should be written under "User"?
DataGrip can only connect to a server that already exists (and therefore already has a password); a "data source" is simply DataGrip's term for such a connection.
In other words, you need an already-running database server with known credentials; DataGrip does not create a server or set a password for you.

cannot connect to mongo cluster - user is not allowed to do action [getLog] on [admin.]

I have created a user and added my IP to the whitelist.
When trying to connect to a cluster through the mongo shell, I am required to enter the following line: mongo "mongodb+srv://cluster0.****.mongodb.net/" --username --password
I filled in the credentials for username and password and replaced dbname with my database name (I tried a non-existing one as well in case that was the problem). It connects to the shell, but then prints the following error:
Error while trying to show server startup warnings: user is not allowed to do action [getLog] on [admin.]
MongoDB Enterprise atlas-7cwf8s-shard-0:PRIMARY>
I tried googling and YouTubing the issue, but I cannot find a match for how to fix it.
Many thanks
That message says that the shell is unable to show you server startup warnings. It's expected in an Atlas environment.
Supposing that's your own cluster, then:
Check the user in Atlas > Database Access.
Check the MongoDB Roles column in the table.
If the role is not atlas Admin, you can't issue this command:
db.adminCommand({getLog: "startupWarnings"})
or any other admin command; this one is issued automatically when the shell connects, hence the error.
Edit the user's MongoDB Roles to the highest privileges (atlas Admin) if you want the warning to go away.
But you can still work anyway.
If you're accessing someone else's cluster, then there isn't much you can do.

ERROR: cannot execute CREATE TABLE in a read-only transaction

I'm trying to setup the pgexercises data in my local machine. When I run: psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are:
trying to run CREATE statements on a read-only replica (the entire instance is read-only);
<username> has default_transaction_read_only set to ON;
the database has default_transaction_read_only set to ON.
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above was directly applicable.
One possibility that would technically explain this is that default_transaction_read_only is set to ON in the postgresql.conf file, and set to OFF for the database postgres (the one this invocation of psql connects to) through an ALTER DATABASE statement that supersedes the configuration file.
That would be why CREATE DATABASE works, but then as soon as it connects to a different database with \c, the default_transaction_read_only setting of the session would flip to ON.
But of course that would be a pretty weird and unusual configuration.
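If you want to test that hypothesis, here is a small sketch (assuming a local server and the database names from the script; the user is a placeholder) that prints the effective default_transaction_read_only for both the postgres and exercises databases:

const { Client } = require('pg');

async function showReadOnly(database) {
  const client = new Client({ host: 'localhost', port: 5432, user: '<username>', database });
  await client.connect();
  // Reports the value the session actually gets, including any ALTER DATABASE override.
  const res = await client.query('SHOW default_transaction_read_only');
  console.log(database, '->', res.rows[0].default_transaction_read_only);
  await client.end();
}

(async () => {
  await showReadOnly('postgres');
  await showReadOnly('exercises');
})().catch(console.error);

If the two databases report different values, a per-database override like the one described above is in play.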
I reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal with dropdb exercises and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE TABLE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was like the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled when the database could no longer write anything. I am using Postgres on Azure. I don't know if the same effect would happen if I was on a dedicated server.
I had the same issue with a Postgres UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it returns either true or false:
SELECT pg_is_in_recovery()
true -> database has read access only
false -> database has full access
If it returns true, check with your DBA team for full access, and also try a ping from the command prompt to confirm connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had a master node and replication nodes, and the master node became a replication node, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns True then it is "read-only", and I suppose you should switch to using whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case, the connection's read-only option was turned on; switching it off fixed the error.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
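For what it's worth, the reader/writer split is visible in the hostname itself: Aurora reader endpoints carry a cluster-ro- segment while the writer endpoint does not. The endpoints below are made up, purely to illustrate the pattern.

const { Client } = require('pg');

// Hypothetical Aurora endpoints; only the writer accepts the temp-table DDL described above.
const WRITER_HOST = 'mycluster.cluster-abc123xyz.us-west-2.rds.amazonaws.com';
const READER_HOST = 'mycluster.cluster-ro-abc123xyz.us-west-2.rds.amazonaws.com'; // note the -ro-

const client = new Client({
  host: WRITER_HOST, // point anything that writes (including temp tables) at the writer
  port: 5432,
  database: '<<db_name>>',
  user: '<<username>>',
  password: '<<password>>'
});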
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the Dataclips tab, this is what I did:
Choose Resources from the app tabs (Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
Choose Settings from the database tabs (Overview, Durability, Settings, Dataclips).
Then under Administration -> Database Credentials, choose View Credentials...
Then open a terminal, fill that info into the command below, and run it:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy-paste it from the credentials page and you can run Postgres commands.
I suddenly started facing this error with Postgres installed on my Windows machine while running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
If you are using Azure Database for PostgreSQL, your server is put into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. In my case the cause was not having granted permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
If you are facing this issue with an RDS cluster, check your endpoint and make sure you are using the writer instance endpoint; then it should work.
The issue can also be due to IntelliJ config:
Go to the Database view > click Data Source Properties (Shift+Enter) > select your data source >
Options tab > under Connection: uncheck Read-only.
For me it was Azure PostgreSQL failing over to the standby during maintenance in Azure and never failing back to the master while PostgreSQL was in HA mode. You can check this event in Service Health and also check which zone your current VM is running in. If it's 2 and not 1, most likely that's the result of the events described above.