I’m using Postgres provisioned by Google Cloud SQL.
Recently the number of connections has increased dramatically.
I had to raise the limit from 200 to 500, then to 1000. In the Google Cloud console Postgres now reports 800 current connections.
However, I have no idea where these connections come from. We have one App Engine service with very little traffic at the moment, another application hosted on Kubernetes, and a dozen or so batch jobs that connect to it. Clearly there must be a connection leak somewhere.
Is there any way I can see where these connections originate?
All applications connecting to it are Java-based at the moment.
They use the HikariCP connection pool. I’m considering changing the “test query” run on each connection to insert a record into a log table, so I could perhaps find out where the connections originate.
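Roughly what I have in mind is the untested sketch below; the JDBC URL, credentials, and the connection_log table are placeholders, and Hikari's connectionInitSql (which runs once per new physical connection) might be a better hook than the test query:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://<host>:5432/<db>"); // placeholder URL
config.setUsername("app_user");                          // placeholder credentials
config.setPassword("secret");
// Runs once per new physical connection, so every row in the (placeholder)
// connection_log table records one real connection and which app opened it.
config.setConnectionInitSql(
        "INSERT INTO connection_log(app, opened_at) VALUES ('appengine-service', now())");
HikariDataSource ds = new HikariDataSource(config);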
But are there better ways available?
Thanks,
Consider monitoring connection activity with pg_stat_activity, e.g. SELECT * FROM pg_stat_activity;
As per the documentation:
Connections that show an IP address, such as 1.2.3.4, are connecting using IP. Connections with cloudsqlproxy~1.2.3.4 are using the Cloud SQL Proxy, or else they originated from App Engine. Connections from localhost are usually to a First Generation instance from App Engine, although that path is also used by some internal Cloud SQL processes.
Also, take a look at the best practices for managing database connections, which contain information on opening and closing connections, connection count, and how to set a connection duration in the Java programming language.
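For a quick breakdown of who is holding connections, a grouped query over pg_stat_activity can be run from any of the Java apps; a minimal sketch (the JDBC URL and credentials are placeholders, the columns are standard pg_stat_activity columns):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectionBreakdown {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://<host>:5432/<db>"; // placeholder URL and credentials
        try (Connection conn = DriverManager.getConnection(url, "postgres", "secret");
             Statement st = conn.createStatement();
             // Count connections per client address, application name and state.
             ResultSet rs = st.executeQuery(
                     "SELECT client_addr, application_name, state, count(*) AS n "
                   + "FROM pg_stat_activity "
                   + "GROUP BY client_addr, application_name, state "
                   + "ORDER BY n DESC")) {
            while (rs.next()) {
                System.out.printf("%s | %s | %s | %d%n",
                        rs.getString("client_addr"), rs.getString("application_name"),
                        rs.getString("state"), rs.getLong("n"));
            }
        }
    }
}

If each service also sets a distinct application_name (the pgjdbc driver exposes this as the ApplicationName connection property), the breakdown identifies the source of every connection directly.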
Related
I'm currently maintaining a hobby application with a Spring Boot server (cheapest paid plan) and Postgres (hobby plan with a 20-connection limit).
When I check the "datastores" page it shows a utilization of 10/20 connections, no matter whether anyone is making requests to my server or not.
The server has only simple CRUDs and no background jobs or multithreading. I did connect to the database directly from HeidiSQL.
I had this config initially:
spring.datasource.maxActive=10
spring.datasource.maxIdle=5
spring.datasource.minIdle=2
spring.datasource.initialSize=5
Then, as a test, I changed it to:
spring.datasource.maxActive=20
spring.datasource.maxIdle=2
spring.datasource.minIdle=0
spring.datasource.initialSize=1
The utilization is still "10/20 connections". Here are my questions:
Why are there always 10/20 connections, even when no one is using the application?
Can I estimate how many users my server will tolerate with a 20-connection limit?
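My current suspicion is that Spring Boot 2.x uses HikariCP as its default pool (whose default maximum pool size is 10) and simply ignores the Tomcat-style maxActive/maxIdle names above. This is an untested sketch of the explicit Hikari configuration I would try next (the bean, env variable, and pool sizes are just placeholders):

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PoolConfig {
    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(System.getenv("JDBC_DATABASE_URL")); // placeholder env variable
        config.setMaximumPoolSize(5);  // hard cap, well below the 20-connection plan limit
        config.setMinimumIdle(0);      // let the pool shrink when nobody is using the app
        config.setIdleTimeout(60_000); // retire idle connections after 60 seconds
        return new HikariDataSource(config);
    }
}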
I deployed my Java web application to the Bluemix Dedicated environment and use it with a Cloudant Dedicated NoSQL DB. When I tried to return 60k documents from this DB, the server returned
500 Error: Failed to establish a backside connection
to me. So I'm wondering about the connection timeout in Bluemix; there are posts where people claim that Bluemix resets a network connection after 120 seconds if no response has been received. Is it possible to change this setting, or does someone know how to solve this kind of problem?
P.S. When I deploy it on my computer it works fine, although of course it takes some time. This particular case could be solved using Cloudant pagination, but I'm developing a service for scheduling REST calls, and if Bluemix resets all connections after 2 minutes I'll have big problems with it.
Not sure which Bluemix Dedicated you are using, but the timeout is typically global. Paging would work, and I think a WebSocket-based approach would work as well.
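A rough sketch of what paging could look like against the _all_docs endpoint over plain HTTP, fetching one page per request so no single response has to stream all 60k documents (the account, database, credentials, and page size are placeholders; if you use the Cloudant Java client, check its documentation for built-in paging support):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PagedFetch {
    public static void main(String[] args) throws Exception {
        String base = "https://<account>.cloudant.com/<db>/_all_docs"; // placeholders
        String auth = Base64.getEncoder()
                .encodeToString("<user>:<password>".getBytes(StandardCharsets.UTF_8));
        int pageSize = 1000;
        for (int skip = 0; skip < 60000; skip += pageSize) {
            // Each request returns one page and finishes well inside the ~120 s gateway timeout.
            URL url = new URL(base + "?include_docs=true&limit=" + pageSize + "&skip=" + skip);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Authorization", "Basic " + auth);
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String page = in.lines().reduce("", String::concat);
                // TODO: parse this page of documents with your JSON library of choice
                System.out.println("fetched page at offset " + skip + ": " + page.length() + " chars");
            } finally {
                conn.disconnect();
            }
        }
    }
}

For large result sets, startkey-based paging is more efficient than skip, but skip keeps the sketch short.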
-r
As far as I know, when one establishes multiple Connection objects via JDBC to one database, each connection occupies a separate client-side port on the machine where the Connection is established (they all connect to one port on the server where the DBMS is running).
I tried to extract the port that corresponds to each Connection object, but unfortunately I did not find any way to do so.
Background: I'm doing a performance analysis in which I set up multiple clients that issue queries against the DB, and I log the execution time of the queries on the database server. In the resulting log I have, among other things, information about the connection that initiated each query, e.g. localhost.localdomain:44760. I hope it is possible to use this information to map each query to the client, or more precisely to the Connection object, that initiated it (which is my ultimate goal and serves analysis purposes).
Just run this select through the JDBC connection:
select inet_client_port()
More functions like that are in the manual:
http://www.postgresql.org/docs/current/static/functions-info.html
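A minimal Java sketch of how that can be used to label each Connection object with the client port that appears in the server log (the JDBC URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PortOfConnection {
    // Returns the client-side TCP port Postgres sees for this connection,
    // i.e. the number after the colon in log entries like localhost.localdomain:44760.
    static int clientPort(Connection conn) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("select inet_client_port()")) {
            rs.next();
            return rs.getInt(1);
        }
    }

    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://<host>:5432/<db>"; // placeholder URL and credentials
        try (Connection conn = DriverManager.getConnection(url, "bench", "secret")) {
            System.out.println("this Connection uses client port " + clientPort(conn));
        }
    }
}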
We are using a Google Cloud SQL instance with our project, and we use Squirrel and Toad as admin tools for our instance.
In recent days our instance's performance has been very slow. Even connecting to our Cloud SQL instance from the admin tools is very slow.
Sometimes the connection with the admin tool is not established for around 2 minutes; only after that does the connection get established.
We also checked with another internet connection, and the performance was just as poor.
Can you please check the performance of Google Cloud SQL?
Why do the MongoDB logs show too many open connections? They show more than the maximum connection limit and more than the number of current operations in the DB.
Also, my primary refused to create more connections after reaching the 819 limit, even though the number of current operations in the DB was less than 819 at the time. Raising the ulimit has solved my problem temporarily, but why were idle connections not reused to serve the requests?
I was having this exact same problem. My connection count just kept growing and growing until it hit 819, and then no more connections were allowed.
I'm using mongo-java-driver version 2.11.3. What seems to have fixed the problem is explicitly setting the connectionsPerHost and threadsAllowedToBlockForConnectionMultiplier properties of the MongoClient. Before, I was not setting these values myself and was accepting the defaults.
MongoClientOptions mco = new MongoClientOptions.Builder()
        .connectionsPerHost(100)
        .threadsAllowedToBlockForConnectionMultiplier(10)
        .build();
MongoClient client = new MongoClient(addresses, mco); // addresses is a pre-populated List of ServerAddress objects
In my application the MongoClient is defined as a static singleton.
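Roughly like this (the class and host names are just an illustration of the pattern, not my actual code):

import java.util.Arrays;
import java.util.List;
import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;

public final class Mongo {
    private static final MongoClient CLIENT = create();

    private Mongo() { }

    private static MongoClient create() {
        MongoClientOptions mco = new MongoClientOptions.Builder()
                .connectionsPerHost(100)
                .threadsAllowedToBlockForConnectionMultiplier(10)
                .build();
        try {
            // Placeholder replica-set members; the whole application shares this one client.
            List<ServerAddress> addresses = Arrays.asList(
                    new ServerAddress("mongo1.example.com", 27017),
                    new ServerAddress("mongo2.example.com", 27017));
            return new MongoClient(addresses, mco);
        } catch (Exception e) {
            throw new RuntimeException(e); // older 2.x constructors declare UnknownHostException
        }
    }

    public static MongoClient client() {
        return CLIENT;
    }
}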
I was watching the MongoDB logs, and once the application hit 100 open connections I didn't see any more connections being established from the client application. I am running a replica set, so you still see the internal connections being made, but those close properly.
From the MongoDB documentation:
"If you see a very large number connection and re-connection messages in your MongoDB log,then clients are frequently connecting and disconnecting to the MongoDB server. This is normal behavior for applications that do not use request pooling, such as CGI. Consider using FastCGI, an Apache Module, or some other kind of persistent application server to decrease the connection overhead.
If these connections do not impact your performance you can use the run-time quiet option or the command-line option --quiet to suppress these messages from the log."
http://docs.mongodb.org/manual/faq/developers/#why-does-mongodb-log-so-many-connection-accepted-events
Make sure you are using the latest MongoDB driver.