Postgres, Prisma Working Fine One Day, 'P1001 Error: Can't Reach Database' the Next

For this project, I am using a Prisma / Postgres database. I have made no changes to my code, and I have pulled a coworker's working version of the code, to no avail. I am unable to do anything with the database: I cannot migrate, I cannot run mutations, and I cannot even open the psql console, as every command is met with
P1001: Can't reach database server at localhost:5432
Please make sure your database server is running at localhost:5432
I am not sure what I could possibly have done; I don't know enough about ports, or even the contents of app.json, to have messed anything up. Now no mutations can go through.
Interestingly enough, this all happened after I ran npx prisma migrate deploy against the deployed database, which is on an EC2 VM from AWS. Since then, the native app associated with the database refuses to work, though it is worth noting that the web app connects to the deployed database just fine. That said, nothing works locally, as the database / port / server no longer exist according to my machine, which makes no sense. I have no idea how to spin it back up, or why every single query / mutation from my native app now only returns Response not successful: Received status code 400, despite it having the exact same syntax it did when it worked, and despite the web app using the same syntax and server (ExpressJS). Does anyone have any ideas what could be causing this?

Error code 400 refers to a bad request coming from the client: a request that is too large, malformed syntax, invalid request message framing, etc.
First step: make sure that your database server is actually running. Try connecting to it with another SQL client or library; sometimes Prisma is just being difficult.
Second: are you hosting the database on your local machine? I assume you are, given the localhost address. Make sure no other program is using or holding on to that port.
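To take Prisma out of the picture, here is a minimal sketch of such a check, assuming a Node project with the pg package installed and the local connection string in DATABASE_URL (adjust names to your setup):

```typescript
// check-db.ts -- standalone connectivity check, independent of Prisma.
// Assumes the `pg` package is installed and DATABASE_URL holds your local
// connection string, e.g. postgresql://user:pass@localhost:5432/mydb
import { Client } from "pg";

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  try {
    await client.connect(); // fails fast if nothing is listening on 5432
    const res = await client.query("SELECT version()");
    console.log("Connected:", res.rows[0].version);
  } catch (err) {
    console.error("Could not reach the database:", err);
  } finally {
    await client.end();
  }
}

main();
```

If this also fails, the problem is the server or the port rather than Prisma; if it succeeds, look at the DATABASE_URL that Prisma is actually reading (for example, a .env file that now points at the EC2 host instead of localhost).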
Sorry if this doesn't help. Good luck!

Related

CloudRun Suddenly got `Improper path /cloudsql/{SQL_CONNECTION_NAME} to connect to Postgres Cloud SQL instance "{SQL_CONNECTION_NAME}"`

We have been running a service using NestJS and TypeORM on fully managed Cloud Run without issues for several months. Yesterday afternoon we started getting Improper path /cloudsql/{SQL_CONNECTION_NAME} to connect to Postgres Cloud SQL instance "{SQL_CONNECTION_NAME}" errors in our logs.
We didn't make any server or SQL changes around that time. Currently there is no impact on the service, so we are not sure whether this is a serious issue.
This error is not from our code, and our third-party modules shouldn't know whether we use Cloud SQL, so I have no idea where these errors come from.
My assumption is that the Cloud SQL proxy, or some SQL client used by Cloud Run, is producing this error. We use the --add-cloudsql-instances flag when deploying with the "gcloud run deploy" CLI command.
Link to the issue here
This log was recently added in the Cloud Run data path to provide more context for debugging CloudSQL connectivity issues. However, the original logic was overly aggressive, emitting this message even for properly working CloudSQL connections. Your application is working correctly and should not receive this warning.
Thank you for reporting this issue. The fix is ready and should roll out soon. You should not see this message anymore after the fix is out.
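For reference (and not part of the fix above), a minimal sketch of how a TypeORM service on fully managed Cloud Run typically reaches a Cloud SQL instance declared with --add-cloudsql-instances, via the Unix socket mounted under /cloudsql; the connection name, credentials, and entity list below are placeholders:

```typescript
// data-source.ts -- sketch of a TypeORM DataSource for Cloud Run + Cloud SQL.
// Cloud Run mounts a Unix socket at /cloudsql/<PROJECT:REGION:INSTANCE> when the
// service is deployed with --add-cloudsql-instances; the pg driver treats a host
// beginning with "/" as a socket directory. All names here are placeholders.
import { DataSource } from "typeorm";

export const AppDataSource = new DataSource({
  type: "postgres",
  host: `/cloudsql/${process.env.CLOUD_SQL_CONNECTION_NAME}`, // e.g. my-project:us-central1:my-instance
  username: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  entities: [],
  synchronize: false,
});
```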

Postgres db constantly flooded with connections, mysterious roles -- hacked?

I wrote a simple finance tracking application in 2017 that uses a Heroku backend with a Postgres db. The application suddenly stopped working, and I traced the problem to the database.
I was unable to connect to the database, seeing this error:
psql: FATAL: too many connections for role
I thought maybe the app had a connection leak, so I shut the frontend down (I'm the only one who uses it) and reset all the db connections. I was then able to log in to the db, and noticed all these strange roles (hundreds?) that I don't recognize.
When I logged out of psql, I tried logging back in and was again denied with the "too many connections" error. The only way I can log back in is if I kill all connections again and immediately log in. If I wait 2-3 minutes, the error comes back. I don't think my Heroku app is establishing all these connections with the db, because I'm tailing the logs and it's just sitting there.
Does anyone have any theories about what might be going on here? Have I been hacked maybe? How would you debug this further, and how might I fix the problem?
Thanks!
The database has obviously been hacked.
Shut it down and delete it right away.
Restore to a different cluster from a known good backup.
From now on, choose good passwords and use a restrictive pg_hba.conf that for example doesn't allow remote access for superusers.
Never, ever, operate your application with a superuser.
Examine your application for SQL injection vulnerabilities.
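On that last point, the tell-tale sign is SQL text built by string concatenation. A minimal sketch of the difference, assuming a Node backend using the pg package (table and column names are made up for illustration):

```typescript
import { Client } from "pg";

const client = new Client({ connectionString: process.env.DATABASE_URL });
// (client.connect() is assumed to have been called elsewhere)

// VULNERABLE: user input is spliced directly into the SQL text, so input like
//   '; DROP TABLE transactions; --
// becomes part of the statement.
async function findByPayeeUnsafe(name: string) {
  return client.query(`SELECT * FROM transactions WHERE payee = '${name}'`);
}

// SAFER: a parameterized query sends the value separately from the SQL text,
// so the driver never interprets it as SQL.
async function findByPayeeSafe(name: string) {
  return client.query("SELECT * FROM transactions WHERE payee = $1", [name]);
}
```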
This may be caused by a bot (run by attackers) that scans the internet and tries known CVE exploits (N-day exploits) to see whether a host is vulnerable, and then launches that type of attack. It may also be someone on the same virtual network as you trying something strange. One thing is certain: it is a bot, because you cannot open that many connections by hand.

Heroku App Only Working On Local Machine

Have something really odd going on with Heroku.
I have an application built in React/JS/Node with Mongo.
If I pull up the link to my app on my local machine, https://obscure-crag-61417.herokuapp.com/, I can see a version of my website, but it does not update with any of the changes I push to Heroku.
Even stranger, on a non-local machine, if I visit the same link, I get the boilerplate 'Express' page.
I've tried clearing the cache and restarting the browsers on both PCs, but it's the same old story.
I have the MongoDB config set in Heroku.
Not sure what could be going on here.
Any ideas?
PS--here's my code: https://github.com/pythoncreate/twit-stocks
Okay, I figured this one out. I'm pretty sure it was how I was setting the port on the backend. Heroku has some specific rules about this:
Heroku + node.js error (Web process failed to bind to $PORT within 60 seconds of launch)
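Concretely, the rule is that Heroku assigns the port at runtime through the PORT environment variable, and the web process must bind to it. A minimal Express sketch of that pattern (the 3000 fallback is only for local development):

```typescript
import express from "express";

const app = express();

// Heroku injects the port via process.env.PORT at runtime; a hard-coded port
// only works locally.
const port = Number(process.env.PORT) || 3000;

app.listen(port, () => {
  console.log(`Listening on port ${port}`);
});
```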

MongoDB server keeps timing out

I have recently moved to a Mongo server running on a CentOS Google Cloud machine I've set up myself (the Mongo service is started with systemctl). Previously I ran my Mongo DB either locally or via a server hosted by mlab.
Everything is working fine, except my client keeps getting StopIteration exceptions on any non-trivial query. I never encountered these previously, either running locally or with the mlab server. Is there a timeout setting on the server I should be setting? (The client timeout settings don't seem to affect the issue.)
So I have (sort of) answered my question. The reason my client app was dying was that I was running it from the Visual Studio debugger, which was catching the StopIteration exception and asserting, even though (I think) the exception was being handled by the pymongo library, which retries and continues successfully. If I disable the StopIteration exception in the "Python Exceptions" section of Visual Studio's "Exception Settings" panel, my client code continues and completes successfully.
That said, I am pretty sure this was not happening before I set up my own Mongo server (besides the assert in VS, there is a noticeable hitch when that exception occurs, both in my Python code and in the mongo command-line client). So I still believe there is something wrong with how I set up my Mongo server, and any suggestions on that would be welcome!

ReactiveMongo with Play 2 Framework saying "entire node set is unreachable"

I'm trying to get a Play (2.1) app with ReactiveMongo (0.9) working on the app's test server. When our application runs on my dev box, it is able to store image metadata just fine, even when pointing at the Mongo 2.2 install on the Mongo test server. I even ran it with "play stage" and then ran it directly with Java 1.6.0. However, run the same way, also with Java 1.6.0, on the test server, the app continuously logs this error:
r.c.a.MongoDBSystem - The entire node set is unreachable, is there a network problem?
r.c.a.MongoDBSystem - The entire node set is unreachable, is there a network problem?
r.c.a.MongoDBSystem - The entire node set is unreachable, is there a network problem?
And not just during initialization... it repeats indefinitely. I've seen this error mentioned elsewhere, but I don't think those solutions apply to this. From the app's test server, I'm able to telnet to port 27017 on the mongo test server successfully. I see both my local install and the test server install of the app log that it's using the same mongodb url.
So based on what I said, I believe I can eliminate:
Blocked port
Mongo server down
Pointing to wrong mongo server
Mongo version mismatch
Java version mismatch
I'm going through the ReactiveMongo source, and it seems the error is emitted when the MongoChannels are not in an authenticating or ready (usable) state. I'm planning to try remote debugging to see where it's going wrong, but I'm running out of time on this, so I'm hoping for a troubleshooting tip or two.
Thanks!
Alright, figured it out. We're running Casbah/Salat in the same app, for now. There's a mongodb.uri in the config file that gets read by both. However, ReactiveMongo seems to work only if the database name is included, whereas according to the MongoDB "connection string URI" spec:
http://docs.mongodb.org/manual/reference/connection-string/
... you only need to include the database if you have credentials to authenticate with. In our case, we don't have credentials, so Casbah wasn't including the database. I added it anyway; Casbah ignored it safely, and ReactiveMongo worked. I neglected to do the same in the test config file, so even though it was showing the correct host, it was never going to work correctly.
I see how the host URL plus database name in one string replaces the two fields "mongodb.servers" and "mongodb.db", but it can be confusing when the string doesn't conform to Mongo's connection-string spec.
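To illustrate the URI format in question (shown here with the Node MongoDB driver purely as an example; the host and database name are placeholders), including the database name at the end of the connection string looks like this:

```typescript
import { MongoClient } from "mongodb";

// The database name after the final "/" is the part that was missing here.
// Host, port, and database name are placeholders.
const uri = "mongodb://db-test-server:27017/imagemeta";

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  const db = client.db(); // with no argument, uses the database named in the URI
  console.log("Using database:", db.databaseName);
  await client.close();
}

main();
```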