Understanding Heroku Postgres connections with a Spring application - postgresql

I'm currently maintaining a hobby application with a Spring Boot server (cheapest paid plan) and Postgres (hobby plan with a 20-connection limit).
When I check the "datastores page", it shows a utilization of 10/20 connections, no matter whether anyone is making requests to my server or not.
The server has only simple CRUD endpoints and no background jobs or multithreading. I did connect to the database directly from HeidiSQL.
I had this config initially:
spring.datasource.maxActive=10
spring.datasource.maxIdle=5
spring.datasource.minIdle=2
spring.datasource.initialSize=5
Then, as a test, I changed it to:
spring.datasource.maxActive=20
spring.datasource.maxIdle=2
spring.datasource.minIdle=0
spring.datasource.initialSize=1
The utilization is still "10/20 connections". Here are my questions:
Why are there always 10/20 connections in use, even when no one is using the application?
Can I estimate how many users my server will tolerate with a 20-connection limit?
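If the app is on Spring Boot 2.x, the default connection pool is HikariCP, which ignores the Tomcat-style maxActive/maxIdle/minIdle/initialSize properties above and keeps its default of 10 connections open even when idle; that would explain the constant 10/20. A sketch of the equivalent HikariCP properties, with illustrative values:
spring.datasource.hikari.maximum-pool-size=5
spring.datasource.hikari.minimum-idle=1
spring.datasource.hikari.idle-timeout=60000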

Related

MongoDB Change Stream heavy on system resources

I'm using MongoDB change streams in order to have indirect communication between servers.
One API server is in the DMZ and the other is in the intranet; the DB server is also in the DMZ, and both API servers are allowed to communicate with the DB via port 27017.
The DMZ API server does inserts into the DB and listens for "update" events in order to return a response to the user, while the intranet API server listens for insert events and does updates to those documents only. Once the intranet API updates a document, the response is returned to the user from the DMZ API. Hope this makes sense so far.
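For reference, a minimal sketch of what the intranet-side listener might look like with the MongoDB Java driver; the connection string, database, and collection names are assumptions:
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.changestream.ChangeStreamDocument;
import org.bson.Document;
import org.bson.conversions.Bson;
import java.util.List;

public class InsertListener {
    public static void main(String[] args) {
        // hypothetical connection string and namespace
        try (MongoClient client = MongoClients.create("mongodb://dmz-db-host:27017")) {
            MongoCollection<Document> coll = client.getDatabase("app").getCollection("requests");
            // only react to insert events, mirroring the intranet server's role
            List<Bson> pipeline = List.of(Aggregates.match(Filters.eq("operationType", "insert")));
            try (MongoCursor<ChangeStreamDocument<Document>> cursor = coll.watch(pipeline).iterator()) {
                while (cursor.hasNext()) {
                    ChangeStreamDocument<Document> change = cursor.next();
                    // process the new document and write the update the DMZ server is waiting for
                    System.out.println("New document: " + change.getDocumentKey());
                }
            }
        }
    }
}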
That being the setup, I have an issue with the DB server. It's constantly complaining about swap memory being full; since it's a replica set, there are 3 servers in it, each with 4 GB of RAM.
Do I need to add more RAM, and if so, how much?

How to find connection leaks on PostgreSQL cloud sql

I’m using Postgres provisioned by Google Cloud SQL,
Recently we see the number of connections to increase by a lot.
Had to raise the limit from 200 to 500, then to 1000. In Google Cloud console Postgres reports 800 currenct connections.
However I have no idea where these connections come from. We have one app engine service, with not a lot of traffic at the moment accessing it, another application hosted on kubernetes. And a dozen or so batch jobs that connect to it. Clearly there must be some connection leakage somewhere.
Is there any way I can see from where these connections originate ?
All applications connecting to it are Java based at the moment.
They use the HikariCP connection pool. I’m considering changing the “test query”upon connection to insert a record in a log table. Hence I could perhaps find out from where the connections originate.
But are there better ways available?
Thanks,
Consider monitoring connection activity with pg_stat_activity, e.g.: SELECT * FROM pg_stat_activity;
As per the documentation:
Connections that show an IP address, such as 1.2.3.4, are connecting using IP. Connections with cloudsqlproxy~1.2.3.4 are using the Cloud SQL Proxy, or else they originated from App Engine. Connections from localhost are usually to a First Generation instance from App Engine, although that path is also used by some internal Cloud SQL processes.
Also, take a look at the best practices for managing database connections, which contain information on opening and closing connections, connection count, and how to set a connection duration in the Java programming language.
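If the suspicion is a leak in one of the Java apps, HikariCP itself can help: it has a built-in leak detection threshold, and tagging each pool with the PostgreSQL ApplicationName property makes the source show up directly in pg_stat_activity.application_name, which is less invasive than inserting into a log table from the test query. A minimal sketch (the JDBC URL, credentials, and pool size are placeholders):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolFactory {
    public static HikariDataSource build(String appName) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://CLOUD_SQL_HOST:5432/mydb"); // placeholder host/db
        config.setUsername("app_user");                                  // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(10);                // cap what each app instance can hold open
        config.setLeakDetectionThreshold(60_000);     // warn if a connection is borrowed for > 60 s
        config.addDataSourceProperty("ApplicationName", appName); // visible in pg_stat_activity.application_name
        return new HikariDataSource(config);
    }
}
With pools tagged like this, SELECT application_name, count(*) FROM pg_stat_activity GROUP BY 1; shows at a glance which service owns which connections.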

Postgres's tcp_keepalives_idle Not Updating AWS ELB Idle Timeout

I have an Amazon ELB in front of Postgres. This is for Kubernetes-related reasons, see this question. I'm trying to work around the maximum AWS ELB Idle Timeout limit of 1 hour so I can have clients that can execute long-running transactions without being disconnected by the ELB. I have no control over the client configuration in my case, so any workaround needs to happen on the server side.
I've come across the tcp_keepalives_idle setting in Postgres, which in theory should get around this by sending periodic keepalive packets to the client, thus creating activity so the ELB doesn't think the client is idle.
I tried testing this by setting the idle timeout on the ELB to 2 minutes. I set tcp_keepalives_idle to 30 seconds, which should force the server to send the client a keepalive every 30 seconds. I then execute the following query through the load balancer: psql -h elb_dns_name.com -U my_user -c "select pg_sleep(140)". After 2 minutes, the ELB disconnects the client. Why are the keepalives not coming through to the client? Is there something with pg_sleep that might be blocking them? If so, is there a better way to simulate a long running query/transaction?
I fear this might be a deep dive and I may need to bring out tcpdump or similar tools. Unfortunately things do get a bit more complicated to parse with all of the k8s chatter going on as well. So before going down this route I thought it would be good to see if I was missing something obvious. If not, any tips on how best to determine whether a keepalive is actually being sent by the server, making it through the ELB, and ending up at the client would be much appreciated.
Update: I reached out to Amazon regarding this. Apparently "idle" is defined as not transferring data over the wire, and "data" is defined as any network packet that has a payload. Since TCP keepalives do not have payloads, the client and server keepalives are considered idle. So unless there's a way to get the server to send data inside its keepalive packets, or to send data in some other form, this may be impossible.
Keepalives are sent on the TCP level, well below PostgreSQL, so it doesn't make a difference if the server is running a pg_sleep or something else.
Since a hosted database is somewhat of a black box, you could try to control the behavior on the client side. The fortunate thing is that PostgreSQL also offers keepalive parameters on the client side.
Experiment with
psql 'host=elb_dns_name.com user=my_user keepalives_idle=1800' -c 'select pg_sleep(140)'
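If the client had been JDBC-based like the other applications on this page, pgjdbc exposes a similar switch (tcpKeepAlive), though the keepalive interval then comes from the OS rather than from a driver parameter, and per the update above the ELB may still treat payload-less keepalives as idle. A sketch with placeholder database name and credentials:
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class LongQueryClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "my_user");
        props.setProperty("password", "secret");   // placeholder
        props.setProperty("tcpKeepAlive", "true"); // ask the driver to enable TCP keepalives on its socket
        // keepalive timing is governed by the OS, e.g. net.ipv4.tcp_keepalive_time on Linux
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://elb_dns_name.com:5432/mydb", props)) { // placeholder database name
            conn.createStatement().execute("select pg_sleep(140)");
        }
    }
}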

500 Error: Failed to establish a backside connection on bluemix java liberty app

I deployed my Java web application in a Bluemix Dedicated environment and use it with a Cloudant Dedicated NoSQL DB. From this DB I tried to return 60k documents, and the server returned
500 Error: Failed to establish a backside connection
to me. So I'm wondering about the connection timeout in Bluemix; there are posts where people claim that Bluemix resets a network connection after 120 seconds if no response is received. Is it possible to change this setting, or does someone know how to solve this problem?
P.S. When I deploy it on my computer it works fine, though of course it takes some time. This particular case could be solved using Cloudant pagination, but I'm developing a service for scheduling REST calls, and if Bluemix resets all connections after 2 minutes I'll have big problems with it.
Not sure which Bluemix Dedicated you are using, but the timeout is typically global. Paging would work, and I think a WebSocket-based approach would work as well.
-r
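On the paging route: Cloudant's _all_docs endpoint accepts limit, skip, and startkey parameters, so the 60k documents can be pulled in chunks that each finish well inside the 120-second window. A rough sketch with a hypothetical account and database name, credentials elided:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PagedFetch {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        // hypothetical account and database; add an Authorization header for your credentials
        String page1 = "https://ACCOUNT.cloudant.com/mydb/_all_docs?include_docs=true&limit=1000";
        HttpRequest req = HttpRequest.newBuilder(URI.create(page1)).build();
        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
        // parse the rows, then request the next page with &startkey="<last id of this page>"&skip=1,
        // repeating until a page returns fewer than 1000 rows
        System.out.println("page 1: " + resp.body().length() + " bytes");
    }
}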

Using different cloud hosts for app and db/will there be latency with Mongo Atlas?

Is it OK to host a web app and a DB server on different cloud providers? Traditionally you really needed to host both on the same network, but I'm wondering if, with modern networks, this is less of a necessity.
I have a web app (Aurelia/ASP.NET Core) hosted on Linode and I need to add a MongoDB server. I really don't want to have to manage the DB servers, so I would prefer to use a cloud service like MongoDB Atlas or mLab etc., but my concern is latency. I'm hoping that I could use either of these if I choose a data center in the same country/location as my Linodes are hosted.
My app should be OK with not-quite-real-time responses, but lags of a few seconds won't work.
Can anyone comment on experiences with this?
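One practical way to answer the latency question for your own setup is to run a quick round-trip test from the Linode against a trial cluster in the nearby region; a minimal sketch using the MongoDB Java driver (the same idea applies from .NET), with a hypothetical Atlas connection string:
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class AtlasPing {
    public static void main(String[] args) {
        // hypothetical Atlas connection string
        try (MongoClient client = MongoClients.create("mongodb+srv://user:pass@cluster0.example.mongodb.net")) {
            MongoDatabase db = client.getDatabase("admin");
            db.runCommand(new Document("ping", 1));     // first call also pays connection setup cost
            for (int i = 1; i <= 5; i++) {
                long start = System.nanoTime();
                db.runCommand(new Document("ping", 1)); // lightweight round trip to the cluster
                System.out.printf("round trip %d: %.1f ms%n", i, (System.nanoTime() - start) / 1e6);
            }
        }
    }
}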