Trouble Connecting with PostgreSQL

To be completely up front, I am a novice when it comes to a lot of database-related topics. I am a data scientist, but my background lends itself more to statistical modeling. So please be patient with me :)
I am trying to use the Query Service of Adobe Experience Platform (AEP), which relies on PostgreSQL. For the life of me, I cannot get it working. I've tried lots of things and I keep getting the following error:
Error: server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.
Can anyone help me determine the root cause of the issue?
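In case it helps narrow this down: one common cause of "server closed the connection unexpectedly" is a client connecting without SSL to a server that requires it. Below is a minimal sketch of a psql connection string with SSL enforced; the host, port, database name, and credentials are placeholders, and the real values come from the credentials page of the AEP Query Service UI:
psql "host=<host-from-credentials-page> port=<port> dbname=<dbname> user=<username> password=<password> sslmode=require"
If psql connects with sslmode=require but your usual client does not, compare the SSL settings each client is using.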

Related

Finding exact query in pg_stat_activity

Recently I keep getting "org.postgresql.util.PSQLException: FATAL: sorry, too many clients already" in pgAdmin 4 (at the same time my application hangs and becomes unresponsive). After reading another thread here about using pg_stat_activity to troubleshoot this issue, I still couldn't find the exact query that might not have closed its connection properly. Here is the screenshot after I extracted the data from pg_stat_activity. I also combined it with pg_stat_statements but was still unable to find it.
Is there any way I can get the actual query so that I can find the connection that didn't close properly? Sorry, I'm not a backend dev, just a junior helping to troubleshoot a server issue.
Things I have already tried from other threads:
Surveyed a few PostgreSQL monitoring tools.
Enabled the pg_stat_statements extension:
CREATE EXTENSION pg_stat_statements;
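One query that often helps here, as a sketch: list the sessions that have been idle the longest, together with the last statement each one executed before going idle (for idle sessions, the query column of pg_stat_activity shows the last completed statement, not a currently running one):
SELECT pid,
       usename,
       application_name,
       client_addr,
       state,
       state_change,
       query            -- for idle sessions: the LAST statement executed
FROM pg_stat_activity
WHERE state = 'idle'
ORDER BY state_change;
Connections stuck in state 'idle' with an old state_change timestamp are the usual suspects for clients that opened a connection and never closed it.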

Why is my server response time so high (First Byte Time)?

One of my websites has suddenly become extremely slow in server response time: currently a 41,859 ms First Byte Time.
I already talked to the hosting provider, but they did not notice any changes or problems with the server, so I am lost as to what the problem is. I have already tried plugins for caching and image optimization, but I know those won't help here.
Does anyone know what the problem might be? I've also run a security check in case it's a virus, but nothing was found.
Any ideas?
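One quick way to narrow this down, as a sketch (assuming curl is available; the URL is a placeholder): measure time to first byte from the command line and see which phase dominates:
curl -o /dev/null -s -w "DNS: %{time_namelookup}s connect: %{time_connect}s TTFB: %{time_starttransfer}s total: %{time_total}s\n" https://example.com/
If time_starttransfer is large while the DNS and connect phases are small, the time is being spent generating the page on the server (application code or database), not in the network.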

Google Cloud SQL Postgres - randomly slow queries from Google Compute / Kubernetes

I've been testing Google Cloud SQL with Postgresql, but I have random queries taking ~3s instead of a few ms.
Troubleshooting I did:
The queries themselves aren't the problem; rerunning the same query works fine.
Indexes are properly set. The database is also very small; it shouldn't behave like this even if there were no indexes at all.
The Kubernetes container connects to the database through the SQL Proxy (I followed https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine). That is not the problem, though, as I tried connecting directly to the database and saw the same issue.
I configured net.ipv4.tcp_keepalive_time to 60 to make sure the connections weren't being dropped.
I also have a pool of connections that are never disconnected, to make sure it wasn't coming from that.
When I run queries directly through my local Postgresql client, I never have the problem.
I don't have this issue when developing locally either and connecting to my local database.
What I'm getting at is: I feel there's some weird connection/link issue between my Google Compute instances and my Cloud SQL instance that I can't seem to figure out.
Any idea?
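One way to separate server-side execution time from network or connection overhead, as a sketch (the query here is a placeholder): run one of the slow statements with timing instrumentation on the server and compare the reported time with the latency the application observes:
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM my_table WHERE id = 42;
If the execution time reported by EXPLAIN ANALYZE stays in the millisecond range while the client intermittently sees ~3 s, the delay is happening between the client and PostgreSQL rather than inside the database.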
Edit:
I also noticed these logs in my Cloud SQL instance every 30 s:
ERROR: recovery is not in progress
HINT: Recovery control functions can only be executed during recovery.
STATEMENT: SELECT pg_is_xlog_replay_paused(), current_timestamp
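For what it's worth, that log entry looks like an external monitoring probe rather than application traffic: pg_is_xlog_replay_paused() is a recovery-control function (renamed pg_is_wal_replay_paused() in PostgreSQL 10), and it raises exactly this error when executed on a primary that is not in recovery. A quick check of the instance's role, as a sketch:
SELECT pg_is_in_recovery();
This returns false on a primary, which would explain why the 30-second probe keeps logging that ERROR; it is probably unrelated to the slow queries.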
That's an interesting problem you're facing. My knowledge of Kubernetes isn't that great, but I do have a general understanding, so let's see if I can provide some suggestions.
To start with, the API you linked to in your question does mention that it is still in beta, so I believe there may still be issues to patch in maximizing speed performance.
Secondly, from what I understand, Kubernetes is a great tool for handling stateless workloads, so handling data where state is required for queries can be a slow operation. This article (although not entirely related) explains some of the pitfalls of Kubernetes (not all of it is relevant).
Thirdly, could you explain your use case a little? Do you really need Kubernetes, or would another tool such as a powerful Compute Engine instance or a Dataflow job resolve the issue? Are you making your database queries through a programming language or an application call?
Thanks, and do let me know!

How to handle db failure at start and runtime

The premises:
I'm using NodeJS with the MongoDB native driver 2.0+. I have created one connection (pool) which is kept open and reused in all modules. When production-ready, the app and the db will be hosted on the same server, probably a VPS.
Questions about MongoDB in a production-ready environment:
Should I anticipate that mongod can crash? And if so, should I have some kind of autostart for it? Can bugs in my code be responsible for mongod crashing?
If I should anticipate crashes, this probably also means that I should anticipate disruptions in my app when the db can't be reached. What is the proper way to handle these? How long should I expect these disruptions to be?
If I start my app, manually shut down mongod, and trigger a route call, the default behaviour seems to be "waiting/loading" until mongod is up again. I guess this default behaviour is OK if I don't expect disruptions to last more than a few seconds?
If mongod is not up when the app starts, an exception is thrown and the app will not start. This seems fine, because without the db the app can't do anything. Or should this be handled in another way?
I have searched extensively for this online but have not found anything useful. Maybe I don't know which search terms to use...
There are a lot of questions crammed into one post here, but I hope someone can give me some answers or provide some links to good reading. The big question is: how do I handle db failure at startup and at runtime?
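As a sketch of one common pattern for both of these cases: retry the initial connection at startup instead of letting the app die, and let the driver's reconnect behaviour cover short runtime outages. The example below uses the modern mongodb driver API; the option names differ in the 2.0-era driver mentioned above, so treat it as an illustration rather than a drop-in fix:
const { MongoClient } = require('mongodb');

async function connectWithRetry(uri, retries = 10, delayMs = 2000) {
  for (let attempt = 1; attempt <= retries; attempt += 1) {
    try {
      // serverSelectionTimeoutMS bounds how long each attempt waits for mongod
      const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });
      await client.connect();
      return client; // one pooled client, kept open and reused by all modules
    } catch (err) {
      console.error('connect attempt ' + attempt + ' failed: ' + err.message);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('MongoDB unreachable after ' + retries + ' attempts');
}
With something like this, a brief mongod outage at startup only delays the app instead of crashing it, and retries and delayMs control how long a disruption you are willing to tolerate.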

Cloud SQL errors

Various errors started occurring in Google Cloud SQL. The system says temporarily unavailable, but it's been quite a while. It looks like 1 in 10 queries now gives 500/502 errors. Here is an example stack trace: http://pastebin.com/MNk06PT4
This is a follow-up to "Severe delays in Cloud SQL responses"; it could be the same issue. Same conditions: a Google Compute Engine instance connected to Cloud SQL, no zone preference. Hope that sheds more light on the issue.
Between 11:00 PST and 11:30 PST there was an issue that interrupted many Cloud SQL instances. The problem should now be resolved.
We apologize for the inconvenience and thank you for your patience and continued support. Please rest assured that system reliability is a top priority for the Google Cloud Platform, and we are making continuous improvements to make our systems better.
To be kept informed of other Google Cloud SQL issues and launches, please join google-cloud-sql-announce@googlegroups.com
https://groups.google.com/forum/#!forum/google-cloud-sql-announce
It seems all of Google Cloud SQL is down. Is there any expected recovery time?
https://cloud.google.com/console
Error: Server Error
The server encountered an error and could not complete your request.
Please try again in 30 seconds.
Best regards
Sergio
Also, I noticed that you are using the deprecated JDBC driver, which has much lower performance than using the MySQL wire protocol natively. See https://developers.google.com/cloud-sql/docs/external for information on connecting using the standard drivers. That will help latency as well as consistency of performance.
I also got this error using CodeIgniter v2.1 and v3 on App Engine.
It happens when using $autoload['libraries'] = array('database');
Then, after a few random page refreshes, this error pops up.
After changing the following in database.php:
'pconnect' => TRUE,
into
'pconnect' => FALSE,
this error is gone from my application. Now both versions 2.1 and 3 are working for me.
Maybe there is a similar setting in the framework or code you're using.