HikariCP warning message in Play Framework 2.5.x - scala

I'm getting the message below in Play for Scala. What does this mean, and what could be the reason? Is this related to Slick or to JDBC (I'm using both)?
[warn] c.z.h.p.HikariPool - HikariPool-7 - Unusual system clock change
detected, soft-evicting connections from pool.

Possible bug in HikariCP
There were some issues in HikariCP that caused this warning:
https://github.com/brettwooldridge/HikariCP/issues/559
So be sure you are using version 2.4.4 or newer.
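In an sbt-based Play 2.5.x build, one way to do that is to pin the newer version explicitly. A minimal sketch of a build.sbt override; 2.4.4 is just the minimum mentioned above, and any newer release also works:

// build.sbt: force a HikariCP release that contains the clock-change fix.
dependencyOverrides += "com.zaxxer" % "HikariCP" % "2.4.4"

You can then check which version actually ends up on the classpath, for example with the dependencyTree task from the sbt-dependency-graph plugin.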
Possible time shifting
HikariCP will log a warning for large forward leaps. Large forward leaps often occur on laptops that go into sleep mode or VMs that are suspended and resumed.
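The warning boils down to comparing elapsed wall-clock time against a monotonic clock over the same interval. The sketch below only illustrates that idea; it is not HikariCP's actual code, and the 2-second threshold is made up:

// Illustrative sketch: detect a wall-clock jump by comparing it against a
// monotonic clock over the same interval.
object ClockJumpCheck {
  def main(args: Array[String]): Unit = {
    val wallStart = System.currentTimeMillis() // wall clock: can jump (NTP, suspend/resume)
    val monoStart = System.nanoTime()          // monotonic clock: cannot jump

    Thread.sleep(5000)

    val wallElapsed = System.currentTimeMillis() - wallStart
    val monoElapsed = (System.nanoTime() - monoStart) / 1000000L
    val drift = wallElapsed - monoElapsed // how far the wall clock moved "on its own"

    if (math.abs(drift) > 2000)
      println(s"Unusual system clock change detected (drift = $drift ms)")
    else
      println(s"Clocks agree (drift = $drift ms)")
  }
}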
There is a similar question:
HikariPool-1 - Unusual system clock change detected, soft-evicting connections from pool
The only thing I would add is that NTP synchronization could also have bugs: Clock drift even though NTPD running

Related

Jobrunr background server stopped polling

I had a number of jobs scheduled, but it seems none of the jobs were running. On further debugging, I found that there are no available servers, and in the jobrunr_backgroundjobservers table it seems that there has not been a heartbeat for any of the servers. What would cause this issue? How would I restart a heartbeat? And how would I know when such an issue occurs and the servers go down again, given that schedules are time sensitive?
JobRunr will stop polling if the connection to the database is lost or the database goes down for a while.
The JobRunr Pro version adds extra features, and one of them is database fault tolerance: if such an issue occurs, JobRunr Pro will go into standby and start processing again once the connection to the database is stable again.
See https://www.jobrunr.io/en/documentation/pro/database-fault-tolerance/ for more info.
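If you want to be alerted when this happens again (the monitoring part of the question), one option is a small external check against the jobrunr_backgroundjobservers table. A rough sketch with plain JDBC; the JDBC URL, credentials, and the lastheartbeat column name are assumptions you would need to match to your own schema:

import java.sql.{DriverManager, Timestamp}
import java.time.{Duration, Instant}

// Sketch: alert when no background job server has heartbeated recently.
object JobRunrHeartbeatCheck {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection(
      "jdbc:postgresql://localhost:5432/app", "user", "secret") // placeholder connection details
    try {
      val rs = conn.createStatement().executeQuery(
        "select max(lastheartbeat) from jobrunr_backgroundjobservers") // column name assumed
      val last: Option[Timestamp] = if (rs.next()) Option(rs.getTimestamp(1)) else None
      val stale = last.forall(ts =>
        Duration.between(ts.toInstant, Instant.now()).getSeconds > 60)
      if (stale) println("ALERT: no background job server heartbeat in the last minute")
      else println(s"Last heartbeat: ${last.get}")
    } finally conn.close()
  }
}

Run it from cron or your monitoring system and page on the ALERT line.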

com.thinkaurelius.titan.diskstorage.PermanentBackendException: Unexpected interrupt

After upgrading to Titan 1.0.0 I started to see the following exceptions under load, using Cassandra (2.2.6) as the storage backend:
Caused by: java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)[:1.8.0_102]
at java.lang.Thread.sleep(Thread.java:340)[:1.8.0_102]
at java.util.concurrent.TimeUnit.sleep(TimeUnit.java:386)[:1.8.0_102]
at com.thinkaurelius.titan.diskstorage.util.time.TimestampProviders.sleepPast(TimestampProviders.java:138)
at com.thinkaurelius.titan.diskstorage.common.DistributedStoreManager.sleepAfterWrite(DistributedStoreManager.java:222)
... 66 more
Can this be fixed through configuration?
While there are several configuration items available around timestamps, I did not find any that strikes me as relevant to the timestamp provider itself.
You should check your Cassandra logs. I have found that Titan under load starts to throw these types of errors as well as Timeout errors when Cassandra starts its compaction process.
Grep for the keyword "GC" in /var/log/cassandra/system.log and monitor your disk usage using dstat. If you see "GC" often, then you are undergoing heavy compaction, and this bogs down Titan.
To get around this, you can try to optimise how you load your data into Titan so as not to trigger compaction too often.
The following are just things we tried that worked for our case:
Avoid deletions. Deletions trigger tombstoning, which leads to compaction.
Increase the size of your JVM heap. One of the things that causes compaction to run is starting to run out of memory, so a larger heap makes it less likely to run.
You can try different compaction strategies; each one is optimised for a different use case (see the sketch after this list).
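As an illustration of the last point, the compaction strategy is set per table and can be changed with an ALTER TABLE statement. A sketch using the DataStax Java driver from Scala; the contact point and the titan.edgestore keyspace/table are placeholders, and LeveledCompactionStrategy is just one example, not a recommendation for your workload:

import com.datastax.driver.core.Cluster

// Sketch: switch one table to a different compaction strategy via CQL.
object ChangeCompactionStrategy {
  def main(args: Array[String]): Unit = {
    val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
    try {
      val session = cluster.connect()
      session.execute(
        "ALTER TABLE titan.edgestore WITH compaction = {'class': 'LeveledCompactionStrategy'}")
    } finally {
      cluster.close()
    }
  }
}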

HikariPool-1 - Unusual system clock change detected, soft-evicting connections from pool

My application uses Spring Boot and HikariCP.
It produces this error:
HikariPool-1 - Unusual system clock change detected, soft-evicting connections from pool
Please help me fix it!
Two recommendations. One, make sure you are using the latest version of HikariCP. Two, configure the computer to sync time from an NTP server.
Newer versions of HikariCP will only evict connections when backward time motion is detected, but will still log a warning for large forward leaps. Large forward leaps often occur on laptops that go into sleep mode or VMs that are suspended and resumed.

Slick+HikariCP runs out of connections during extensive streaming usage

I am using the akka-persistence-jdbc plugin for Akka Persistence in some parts of the application and Slick directly in other parts.
After migrating the hottest parts from direct Slick usage to akka-persistence, HikariCP started to throw exceptions:
WARN com.zaxxer.hikari.pool.LeakTask Connection leak detection triggered for connection org.postgresql.jdbc.PgConnection#3a3c8d5d, stack trace follows
java.lang.Exception: Apparent connection leak detected
That's not a problem of slow SQL queries: EXPLAIN ANALYZE shows that PostgreSQL executes them in ~1 millisecond. But the connection waits for something and is not released for minutes. Some more details on that are here.
akka-persistence-jdbc uses streaming for writes and reads. Could it be a Slick bug, or is something wrong in the way akka-persistence-jdbc does such operations?
I am using Slick 3.1.1, HikariCP 2.3.7, and PostgreSQL 9.4 with max_connections: 120.
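For context on where the connection goes, this is roughly what a Slick 3.1 streaming read looks like when consumed through Akka Streams. The pooled connection backing the publisher is only returned when the stream completes, fails, or is cancelled, so anything that stalls the downstream holds the connection for that long. A sketch with placeholder names (the "mydb" config entry and the journal table mapping), not the actual akka-persistence-jdbc code:

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import slick.driver.PostgresDriver.api._

// Sketch: consume a Slick streaming query via Akka Streams.
object StreamingRead {
  class Journal(tag: Tag) extends Table[(Long, String)](tag, "journal") {
    def ordering = column[Long]("ordering")
    def message  = column[String]("message")
    def * = (ordering, message)
  }
  val journal = TableQuery[Journal]

  def main(args: Array[String]): Unit = {
    implicit val system = ActorSystem("demo")
    implicit val mat    = ActorMaterializer()
    val db = Database.forConfig("mydb")

    // The HikariCP connection behind db.stream is checked out for as long as
    // this stream runs; a downstream that never completes looks like a leak.
    val done = Source
      .fromPublisher(db.stream(journal.result))
      .runWith(Sink.foreach(row => println(row)))

    done.onComplete { _ => db.close(); system.terminate() }(system.dispatcher)
  }
}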

Cassandra giving TTransportException after some inserts/updates

Around 15 processes were inserting/updating unique entries in Cassandra. Everything was working fine, but after some time I get this error.
(When I restart the process, everything is fine again for a while.)
An attempt was made to connect to each of the servers twice, but none
of the attempts succeeded. The last failure was TTransportException:
Could not connect to 10.204.81.77:9160
I did a CPU/memory analysis of all the Cassandra machines. CPU usage sometimes goes to around 110%, and memory usage was between 60% and 77%. I am not sure if this is the cause, as it was working fine with such memory and CPU usage most of the time.
P.S.: How can I ensure that Cassandra updates/insertions work error-free?
Cassandra will throw an exception if anything goes wrong with your inserts; otherwise, you can assume it was error free.
Connection failures are a network problem, not a Cassandra problem. Some places to start: is the Cassandra process still alive? Does netstat show it still listening on 9160? Can you connect to non-Cassandra services on that machine? Is your server or router configured to firewall off frequent connection attempts?
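For the basic "can I even reach the port" part of that checklist, here is a quick client-side check (a sketch; the host and port are the ones from the error message above):

import java.net.{InetSocketAddress, Socket}

// Sketch: verify that the Cassandra Thrift port accepts TCP connections.
object PortCheck {
  def main(args: Array[String]): Unit = {
    val host = "10.204.81.77"
    val port = 9160
    val socket = new Socket()
    try {
      socket.connect(new InetSocketAddress(host, port), 3000) // 3-second timeout
      println(s"$host:$port is reachable")
    } catch {
      case e: Exception => println(s"Cannot connect to $host:$port -> ${e.getMessage}")
    } finally {
      socket.close()
    }
  }
}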