Lots of Hikari pool threads being created - hikaricp

In my Java application, I am noticing lots of Hikari pool threads (around 250) being created, even though I have set the limit to 25.
My application ran out of memory and got stuck. When I took a jstack dump, I saw Hikari pool threads numbered from 1 to 250:
HikariPool-1, HikariPool-2 ... HikariPool-250.
So the question is: why were so many HikariPool threads created?
I also see the following warning:
"Thread starvation or clock leap detected"
Does anyone have an idea of what might be going on here?
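A thread-name sequence like HikariPool-1 through HikariPool-250 usually indicates that 250 separate pools (i.e. 250 HikariDataSource instances) were created, because maximumPoolSize only caps connections within a single pool, not the number of pools. A minimal sketch of keeping one shared, size-limited data source for the whole application (the JDBC URL and credentials are placeholders):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolHolder {
    // One shared pool for the whole application; constructing a new
    // HikariDataSource per request would spawn a new "HikariPool-N" each time.
    private static final HikariDataSource DATA_SOURCE = createDataSource();

    private static HikariDataSource createDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://db-host:5432/mydb"); // placeholder
        config.setUsername("app");
        config.setPassword("secret");
        config.setMaximumPoolSize(25); // caps connections per pool, not pools per JVM
        return new HikariDataSource(config);
    }

    public static HikariDataSource getDataSource() {
        return DATA_SOURCE;
    }
}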

Related

Why is Visual VM showing a thread as parked when it is getting scheduled?

I have created a custom execution context (a FixedThreadPool with 100 workers) in Scala to run IO-bound Futures. I am making DB calls to Cassandra in these Futures.
I have made the context available to the DB-related Futures, and in the log messages I can see that these Futures are being executed on the specified thread pool.
However, VisualVM tells a different story: it shows the pool threads as parked, with a CPU utilisation of 0%.
What am I missing here?
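For context, an idle worker in a fixed-size pool blocks inside the pool's task queue (via LockSupport.park), and a thread that is blocked waiting on IO consumes no CPU either, so "parked" threads with 0% CPU are what a mostly idle IO pool is expected to look like. A small Java sketch (Java rather than Scala; the thread-name filter relies on the default "pool-N-thread-M" naming) that reproduces the effect:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParkedWorkersDemo {
    public static void main(String[] args) throws Exception {
        // Mirrors the setup above: a fixed pool of 100 workers.
        ExecutorService ioPool = Executors.newFixedThreadPool(100);

        // Submit a handful of "IO-bound" tasks; the other workers stay idle.
        for (int i = 0; i < 5; i++) {
            ioPool.submit(() -> {
                try {
                    Thread.sleep(10_000); // stands in for a blocking DB call
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        Thread.sleep(1_000);
        // Idle workers are parked in the pool's task queue, so a thread dump or
        // VisualVM shows them as WAITING ("parked") with ~0% CPU.
        Thread.getAllStackTraces().keySet().stream()
              .filter(t -> t.getName().startsWith("pool-"))
              .forEach(t -> System.out.println(t.getName() + " -> " + t.getState()));

        ioPool.shutdown();
    }
}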

HikariCP warning message in Play Framework 2.5.x

I'm getting the message below in Play for Scala. What does this mean, and what could be the reason? Is it related to Slick or to JDBC (I'm using both)?
[warn] c.z.h.p.HikariPool - HikariPool-7 - Unusual system clock change
detected, soft-evicting connections from pool.
Possible bug in HikariCP
There were some issues in HikariCP that caused this error:
https://github.com/brettwooldridge/HikariCP/issues/559
So be sure you use version 2.4.4 or newer.
Possible time shifting
HikariCP will log a warning for large forward leaps. Large forward leaps often occur on laptops that go into sleep mode or VMs that are suspended and resumed.
There is a similar question:
HikariPool-1 - Unusual system clock change detected, soft-evicting connections from pool
The only thing I would add is that NTP synchronization can also have bugs: Clock drift even though NTPD running
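For intuition, this family of warnings comes from a periodic housekeeping task noticing that far more (or less) wall-clock time has passed than its scheduling period would allow. The sketch below only illustrates that idea; it is not HikariCP's actual implementation, and the interval and tolerance values are invented:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ClockLeapWatcher {
    private static final long PERIOD_MS = 30_000;   // housekeeping interval (illustrative)
    private static final long TOLERANCE_MS = 2_000; // allowed scheduling jitter (illustrative)

    private volatile long previousTick = System.currentTimeMillis();

    public void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(this::check, PERIOD_MS, PERIOD_MS, TimeUnit.MILLISECONDS);
    }

    private void check() {
        long now = System.currentTimeMillis();
        if (now > previousTick + PERIOD_MS + TOLERANCE_MS) {
            // Either the clock jumped forward (laptop sleep, suspended VM, NTP step)
            // or this thread could not run for a long time (starvation, long GC pause).
            System.err.println("Thread starvation or clock leap detected: "
                    + (now - previousTick - PERIOD_MS) + " ms late");
        } else if (now < previousTick + PERIOD_MS - TOLERANCE_MS) {
            System.err.println("Unusual system clock change detected (clock moved backwards)");
        }
        previousTick = now;
    }

    public static void main(String[] args) {
        new ClockLeapWatcher().start(); // keeps running until the JVM exits
    }
}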

JDBC connection pool never shrinks

I run 3 processes at the same time, all of them using the same DB (RDS Postgres).
All of them are Java applications that use JDBC to connect to the DB.
I am using PGPoolingDataSource in every process as a connection pool for the DB.
Every request is handled by the book, ending with:
try {
    // work with the pooled connection
} finally {
    connection.close(); // returns the connection to the pool
}
Main problems:
1. I ran out of connections really fast because I do massive work with the DB at the beginning (which is OK), but the pool never shrinks.
2. I get some exceptions in the code because there are not enough connections, and I wish I could extend the timeout used when requesting a connection.
My insights:
The PGPoolingDataSource never shrinks by definition! I couldn't find any documentation about this, but I assume it is the case. So I tried the Apache DBCP pool, and again I had the same problem.
I think there must be a timeout when waiting for a connection. I would guess this timeout can be configured, but I couldn't find such a setting in either pool.
My questions:
Why does the pool never shrink?
How do I determine how many connections to allocate for each pool/process (here every process has one pool)?
What happens if I don't close the pool (not the connections) and the app dies? Are the connections in the pool still alive? This happens a lot: when I update the application I stop and start it, so I never close the pool.
What would be a good JDBC connection pool that works well with Postgres and has an option to set a timeout for getConnection?
Thanks
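For reference, one pool that covers both the shrinking and the timeout questions is HikariCP: minimumIdle and idleTimeout let the pool shrink back when load drops, and connectionTimeout bounds how long getConnection() will wait. A minimal sketch (the JDBC URL, credentials, and sizes are placeholders to adjust for your workload):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;

public class ShrinkingPool {
    public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://my-rds-host:5432/mydb"); // placeholder
        config.setUsername("app");
        config.setPassword("secret");
        config.setMaximumPoolSize(20);       // hard cap per process
        config.setMinimumIdle(2);            // pool shrinks back to this when idle
        config.setIdleTimeout(60_000);       // idle connections retired after 60 s
        config.setConnectionTimeout(10_000); // getConnection() gives up after 10 s

        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection()) {
            System.out.println("Got connection: " + conn.getMetaData().getURL());
        }
    }
}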

CPU usage of JBoss JVM goes up to 99% and stays there

I am doing load testing on my application using JMeter, and I have a situation where the CPU usage of the application's JVM goes to 99% and stays there. The application still works; I am able to log in and do some activity, but it is understandably slower.
Details of environment:
Server: AMD Opteron, 2.20 GHz, 8 cores, 64-bit, 24 GB RAM. Windows Server 2008 R2 Standard
Application server: jboss-4.0.4.GA
JAVA: jdk1.6.0_25, Java HotSpot(TM) 64-Bit Server VM
JVM settings:
-Xms1G -Xmx10G -XX:MaxNewSize=3G -XX:MaxPermSize=12G -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+UseCompressedOops -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000
Database: MySql 5.6 (in a different machine)
Jmeter: 2.13
My scenario is that I have 20 users of my application log in and perform normal activity that should not create a huge load. Some minutes into the test, the CPU usage of the JBoss JVM goes up and never comes back down; it stays like that until the JVM is killed.
To help better understand, here are a few screenshots.
I found a few posts about CPU at 100%, but nothing there matched my situation and I could not find a solution.
Any suggestion on what to do would be great.
Regards,
Sreekanth.
To understand the root cause of the high CPU utilization, we need to look at the CPU data and thread dumps at the same time.
Capture 5-6 thread dumps at the time of the issue. Similarly, capture CPU consumption on a thread-by-thread basis.
Generally, the root cause of a CPU issue is a problem with threads: BLOCKED threads, long-running threads, deadlocks, long-running loops, etc. That can be tracked down by going through the stacks of those threads.
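One way to get the thread-by-thread CPU numbers without extra tooling is the standard ThreadMXBean API; a rough sketch (it reports on the JVM it runs in, so it would need to be embedded in the app or reached over JMX, and the output should be correlated with the hottest thread names in the jstack dumps):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCpuSampler {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (threads.isThreadCpuTimeSupported()) {
            threads.setThreadCpuTimeEnabled(true);
        }

        // Print the CPU time consumed so far by each live thread, plus its state.
        for (long id : threads.getAllThreadIds()) {
            ThreadInfo info = threads.getThreadInfo(id);
            if (info == null) {
                continue; // thread finished between calls
            }
            long cpuNanos = threads.getThreadCpuTime(id); // -1 if unsupported or disabled
            System.out.printf("%-45s %-15s %8d ms%n",
                    info.getThreadName(), info.getThreadState(), cpuNanos / 1_000_000);
        }
    }
}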

Did Netty 3.5.7 begin initializing the worker pool with 200 threads?

Can anyone confirm whether Netty 3.5.7 introduced a change that causes an NIO thread pool of 200 threads to be created?
We have a webapp that we're running in Tomcat 7, and I've noticed that at some point there is a new block of 200 NIO threads, all labeled "New I/O Worker #". I've verified that with 3.5.6 this thread pool is not initialized with 200 threads, only a boss thread. As soon as I replaced the jar with 3.5.7, I have 200 NIO threads plus the boss thread.
If this change was introduced in 3.5.7, is it possible to control the pool size with some external configuration? I ask because we don't use Netty explicitly; it's pulled in by a third-party JAR.
Thanks,
Bob
Netty switched to no longer lazy-starting workers because of the synchronization overhead that required. I guess that could be the problem you see.
The only remedy is to set the worker count when creating the Nio*ChannelFactory; 200 is way too high anyway.
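If whoever builds the ChannelFactory can be influenced, the worker count is the third constructor argument in Netty 3.x. A sketch with an illustrative worker count (the third-party JAR in the question creates its factory itself, so this only helps if that is configurable):

import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class BoundedWorkerPool {
    public static void main(String[] args) {
        int workerCount = 8; // illustrative value; size it for your load
        ServerBootstrap bootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(), // boss threads
                        Executors.newCachedThreadPool(), // worker threads
                        workerCount));                   // caps the "New I/O worker" pool
        // ... set the pipeline factory and bind as usual ...
        System.out.println("Worker pool capped at " + workerCount + " threads");
    }
}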