Number of sockets opened increases while profiling

I am currently profiling a Tomcat 8.5 application which uses DBCP as the standard pool for Neo4j JDBC driver connections.
What I do not understand is why the number of sockets opened (and never closed) is so high when I start profiling the application (with YourKit or JProfiler), while this number stays low and stable without profiling. The same behavior is confirmed with the lsof and ls /proc/my_tomcat_pid/fd | wc -l commands.
So I am wondering whether my application is really not releasing Neo4j JDBC connections as it should, or whether this is overhead introduced by the profiler.
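For reference, this is the pattern I would expect for handing a connection back to the DBCP pool, written as a minimal try-with-resources sketch (the DataSource and the Cypher query below are placeholders, not my actual code):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

// Sketch only: closing the Connection hands it back to the DBCP pool
// (it does not close the underlying socket). The query is a placeholder.
class NodeCounter {
    long countNodes(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("MATCH (n) RETURN count(n)")) {
            return rs.next() ? rs.getLong(1) : 0L;
        }
    }
}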
Any clue?

If you are using the YourKit profiler, you can run the dedicated "Not closed socket connections" inspection: https://www.yourkit.com/docs/java/help/event_inspections.jsp The profiler will show you where the unclosed sockets were created and opened.

It turned out to be a YourKit bug related to the Neo4j JDBC driver and the Databases probe, which was quickly fixed by the YourKit team. The fix is available in build 75: https://www.yourkit.com/forum/viewtopic.php?f=3&t=39801

Related

PgPool-II: Sorry, too many clients already

Background
I have a PgPool-II cluster (version 4.1.4) running on 3 CentOS 8 virtual machines: SQL1, SQL2 and SQL3 (each on different hardware). PostgreSQL 12 runs on SQL1 and SQL2 (currently SQL1 is the master and SQL2 is a streaming-replication standby).
The database cluster currently holds 4 databases for 2 different software environments: a customer (cust) environment with quite a lot of traffic and an educational (edu) environment with basically no user activity at all.
Each environment has one database of its own, and the two environments also share two databases (read-only).
The application is currently written in .NET Core 3.1 and uses Npgsql and Entity Framework Core to connect to the pgpool cluster.
Besides the "normal" application SQL requests to the databases through Entity Framework, there are also periodic calls with psql in order to run "show pool_nodes;" against pgpool. This cannot be done with Entity Framework, hence psql.
Each environment also has one "main api", which handles internet traffic, and one "service api", which runs background tasks. Both use Entity Framework to call the database, and psql is also invoked from the "service api" as described above.
On top of this, all applications also exist as an A and a B system.
So to sum up:
2 environments (cust, edu), each with an A and a B system, each of which has its own "main api" and "service api" => 8 applications (12 if you count that every "service api" also invokes psql every 5 minutes).
The applications run on 2 different machines (A and B); the "main api" and "service api" for a given environment and system run on the same machine.
Each Entity Framework application can also make parallel/multiple simultaneous requests, depending on the user activity against the API.
My question
Every now and then there is an error from the pgpool cluster: Sorry, too many clients already.
Usually this happens when connecting with psql, but sometimes it also happens from Entity Framework.
My initial thought was that the databases had too many clients connected, but querying pg_stat_activity just seconds after the error shows far fewer connections (around 50) than the 150 max_connections in the PostgreSQL config. I could not find any errors at all in the PostgreSQL logs in the "log" folder of the data directory.
But in the pgpool.log file there is an error entry:
Oct 30 16:34:19 sql1 pgpool[4062984]: [109497-1] 2020-10-30 16:34:19: pid 4062984: ERROR: Sorry, too many clients already
pgpool has num_init_children = 32 and max_pool = 4, so I do not really see where the problem might be coming from.
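If I understand these parameters correctly, num_init_children = 32 means pgpool accepts at most 32 concurrent client sessions, and each child caches at most max_pool = 4 backend connections, i.e. at most 32 × 4 = 128 backend connections, which is still below max_connections = 150. So the limit being hit appears to be pgpool's own 32-client limit rather than anything in PostgreSQL, yet I cannot see how the applications described above would hold 32 sessions open at the same time.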
Some files that might be needed for more info:
pg_stat_activity (Taken 11s after the error)
pgpool.log
pgpool.conf
postgresql.conf
This problem might come from a bug that, according to one of the pgpool developers, will have its fix included in the 4.1.5 update on November 19, 2020.
The bug causes the counter for existing connections not to count down when a query is cancelled.
Simply upgrade pgpool from 4.1.2 to 4.1.5 (newer versions of pgpool are available as well).
I had the same problem with pgpool 4.1.2: "Sorry, too many clients already" was occurring almost every hour.
The problem was fixed after I upgraded pgpool from 4.1.2 to 4.1.5. We never experienced the error afterwards.

Releasing JDBC connections after a Kubernetes pod is killed

I am running a couple of Spring Boot apps in Kubernetes. Each app uses Spring JPA to connect to a PostgreSQL 10.6 database. What I have noticed is that when the pods are killed unexpectedly, the connections to the database are not released.
Running SELECT sum(numbackends) FROM pg_stat_database; on the database returns, let's say, 50. After killing a couple of pods running the Spring app and rerunning the query, this number jumps to 60. This eventually causes the number of connections to PostgreSQL to exceed the maximum and prevents the restarted pods' applications from connecting to the database.
I have experimented with the PostgreSQL option idle_in_transaction_session_timeout, setting it to 15s, but this does not drop the old connections and the number keeps increasing.
I am using the com.zaxxer.hikari.HikariDataSource data source for my Spring apps and was wondering if there is a way to prevent this from happening, either on the PostgreSQL side or on Spring Boot's side.
Any advice or suggestions are welcome.
Spring Boot version: 2.0.3.RELEASE
Java version: 1.8
PostgreSQL version: 10.6
This issue can arise not only with Kubernetes pods but also with a simple application running on a server that is killed forcibly (like kill -9 pid on Linux) and not given the opportunity to clean up via Spring Boot's shutdown hook. I think in this case nothing on the application side can help. You can, however, try cleaning up inactive connections by several methods on the database side, as mentioned here.
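As an illustration of one such database-side cleanup, here is a rough sketch that terminates backends which have been idle for more than 15 minutes (the JDBC URL, credentials and threshold are placeholders, and the role used needs permission to call pg_terminate_backend):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch only: reap PostgreSQL sessions that have sat idle longer than a
// threshold, e.g. connections left behind by a pod that was killed with SIGKILL.
// The connection details and the 15-minute threshold are placeholders.
public class StaleConnectionReaper {
    public static void main(String[] args) throws Exception {
        String sql =
            "SELECT pg_terminate_backend(pid) " +
            "FROM pg_stat_activity " +
            "WHERE state = 'idle' " +
            "  AND state_change < now() - interval '15 minutes' " +
            "  AND pid <> pg_backend_pid()";
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://db-host:5432/mydb", "admin_user", "secret");
             Statement st = conn.createStatement()) {
            st.execute(sql);
        }
    }
}

Run from a cron job (or a Kubernetes CronJob), something like this keeps orphaned connections from piling up; tuning TCP keepalives on the database host is another commonly suggested way to make the server notice dead clients sooner.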

WildFly 8.2 suddenly stops accepting incoming connections

We have a WildFly 8.2 app server which is allocated 6 GB of server RAM. Sometimes, due to a heavy transaction count, WildFly stops receiving incoming connections. But when I check the server's memory (the VM, not the app server), it uses 4 GB of RAM. I then checked the WildFly app server's heap memory, and it was not even using 25% of the allocated heap size. Why is that? When I restart the WildFly app server, everything works normally, and when that kind of load comes again, the above scenario happens again.
Try increasing the connection-limit as suggested in this SO question.
You can dump HTTP requests as shown here.
Also, are you getting any errors in your console? Please post them as well.

Memory Limit Hit

Hi,
I am following the Ruby on Rails Tutorial and I am using the Cloud9 IDE; however, I keep getting a "Memory Limit Hit" error while working through the tutorial. I am using the free tier, which comes with 512 MB of RAM, and it is constantly in the red zone.
I have tried killing some processes, but as soon as I get back to the tutorial I keep getting the error.
Thanks
Michael
Short answer: Spring has a bug that causes it to use up memory by spawning too many processes.
From your c9 shell, run
pkill -9 -f spring
and restart your Rails server.
Longer answer: the tutorial's author covers this in detail in Chapter 3.
Box 3.4 on this page https://www.railstutorial.org/book/static_pages covers it nicely.
Incidentally, I found that I couldn't simply restart my c9 session... it's really good at persisting the state of your virtual server, including all of the extra Spring processes.

Are there major downsides to running MongoDB locally on a VPS vs on MongoLab?

I have a MongoLab account for MongoDB, and the constant calls to this remote server from my app slow it down quite a lot. When I run the app locally on my computer with a local mongod and MongoDB, it's far, far faster, as would be expected.
When I deploy my app (running on Node/Express), it will run on a CentOS VPS. I have plenty of storage space available on my VPS; are there any major downsides to running MongoDB locally rather than remotely on MongoLab?
Specs of the VPS:
1024MB RAM
1024MB VSwap
4 CPU Cores @ 3.3GHz+
60GB SSD space
1Gbps Port
3000GB Bandwidth
Nothing apart from the obvious:
You will have to worry about storage yourself. MongoDB tends to take a lot of disk space, and upgrading storage will probably be harder to manage than letting MongoLab take care of it.
You will have to make sure the Mongo server doesn't crash and keeps running fine.
You will have scaling issues in the future once the load on your application and your database increases.
Using a "database-as-a-service" like MongoLab frees you from worrying about a lot of hardware/OS/system-level requirements and configuration. Memory optimization? Which file system? Connection limits? Virtualization and IO ops issues? (thanks to Nikolay for pointing that one out)
If your VPS provider doesn't count local traffic against your bandwidth, then you can set up another VPS for MongoDB. That way the server will be closer, so requests will be faster, and it will also have the benefit of being an independent server. It still won't be fully managed like MongoLab, though.
[ Edit: As Chris pointed out, MongoLab also helps you with your database schema design and bundles MongoDB support with their plans, so that's also nice. ]
Also, this is a good question, but it is probably not appropriate for Stack Overflow; dba.stackexchange.com and serverfault.com are better places for it.