OrientDB client using 100% CPU after connection has been established - orientdb

When I use the following code in a servlet running in Tomcat 7 (also tested in Tomcat 6) on Ubuntu 10.10 Server (OpenJDK 1.6.0_20, 64-bit), the Java process uses 100% CPU (or more) once the connection has been established.
ODatabaseObjectTx db = ODatabaseObjectPool.global().acquire("remote:localhost/test", "test", "test");
db.getEntityManager().registerEntityClass(BlogPost.class);
List<BlogPost> posts = db.query(new OSQLSynchQuery<BlogPost>("select * from BlogPost order by date desc"));
db.close();
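For reference, the same sequence with close() moved into a finally block, so the pooled connection is always returned even if the query throws (a sketch only, using the same classes and credentials as above):
ODatabaseObjectTx db = ODatabaseObjectPool.global().acquire("remote:localhost/test", "test", "test");
try {
    db.getEntityManager().registerEntityClass(BlogPost.class);
    List<BlogPost> posts =
        db.query(new OSQLSynchQuery<BlogPost>("select * from BlogPost order by date desc"));
} finally {
    db.close(); // returns the connection to the pool
}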
Does anyone know how to resolve this issue?
EDIT:
The issue also occurs right after acquire(). A thread named "ClientService" is spawned that causes the high load.
I took several thread dumps and they always show the same for this thread:
"ClientService" daemon prio=10 tid=0x00007f4d88344800 nid=0x558e runnable [0x00007f4d872da000]
java.lang.Thread.State: RUNNABLE
at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:127)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1239)
at java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:431)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinaryAsynch.endResponse(OChannelBinaryAsynch.java:73)
at com.orientechnologies.orient.client.remote.OStorageRemoteServiceThread.execute(OStorageRemoteServiceThread.java:59)
at com.orientechnologies.common.thread.OSoftThread.run(OSoftThread.java:48)
When I suspend the thread in the debugger, the high load stops until I press continue.

Does it happen after the acquire() or after the query?

Related

Thread blocked while trying to close a MongoClient

I have an app that uses multiple databases, so I create multiple connections and keep them in a ConcurrentHashMap.
If a connection doesn't exist I simply create a new one and put it into the map, and we close and remove some of the clients once their task is done (this pattern is sketched after the stack trace below).
After many rounds of these create-close-remove operations, an exception occurs while closing some of the clients, saying the thread has been blocked.
Why does closing a Mongo client end up blocking a thread for such a long period of time, and how can I avoid this problem?
Thanks!
Vertx version: 4.2.3
Java version : 1.8.0_201
02:10:05.626 WARN i.v.core.impl.BlockedThreadChecker - Thread Thread[vert.x-eventloop-thread-27,5,main] has been blocked for 5058 ms, time limit is 2000 ms
io.vertx.core.VertxException: Thread blocked
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at com.mongodb.internal.connection.AsynchronousChannelStream$FutureAsyncCompletionHandler.get(AsynchronousChannelStream.java:317)
at com.mongodb.internal.connection.AsynchronousChannelStream$FutureAsyncCompletionHandler.getRead(AsynchronousChannelStream.java:312)
at com.mongodb.internal.connection.AsynchronousChannelStream.read(AsynchronousChannelStream.java:151)
at com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:648)
at com.mongodb.internal.connection.InternalStreamConnection.receiveMessageWithAdditionalTimeout(InternalStreamConnection.java:513)
at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:356)
at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:280)
at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)
at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)
at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:107)
at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:62)
at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:144)
at com.mongodb.internal.connection.UsageTrackingInternalConnection.open(UsageTrackingInternalConnection.java:51)
at com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.open(DefaultConnectionPool.java:431)
at com.mongodb.internal.connection.DefaultConnectionPool.get(DefaultConnectionPool.java:115)
at com.mongodb.internal.connection.DefaultConnectionPool.get(DefaultConnectionPool.java:100)
at com.mongodb.internal.connection.DefaultServer.getConnection(DefaultServer.java:92)
at com.mongodb.internal.session.ServerSessionPool.endClosedSessions(ServerSessionPool.java:156)
at com.mongodb.internal.session.ServerSessionPool.close(ServerSessionPool.java:103)
at com.mongodb.internal.async.client.AsyncMongoClientImpl.close(AsyncMongoClientImpl.java:119)
at com.mongodb.reactivestreams.client.internal.MongoClientImpl.close(MongoClientImpl.java:65)
at io.vertx.ext.mongo.impl.MongoClientImpl$MongoHolder.close(MongoClientImpl.java:1283)
at io.vertx.ext.mongo.impl.MongoClientImpl.close(MongoClientImpl.java:132)
at io.vertx.ext.mongo.impl.MongoClientImpl.close(MongoClientImpl.java:1243)
at io.vertx.reactivex.ext.mongo.MongoClient.close(MongoClient.java:1921)
at io.vertx.reactivex.ext.mongo.MongoClient.close(MongoClient.java:1928)
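The trace shows the Mongo client being closed on a Vert.x event-loop thread, which is exactly what the BlockedThreadChecker complains about. One common mitigation is to run the blocking close on a worker thread via executeBlocking. The sketch below is illustrative only: it assumes the plain io.vertx.ext.mongo.MongoClient API (the rx variant would be analogous), and the class and map names are made up:
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.mongo.MongoClient;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class MongoClientRegistry {
    private final Vertx vertx;
    // One client per database, as described in the question (names are hypothetical).
    private final ConcurrentMap<String, MongoClient> clients = new ConcurrentHashMap<>();

    public MongoClientRegistry(Vertx vertx) {
        this.vertx = vertx;
    }

    // Create the client lazily and cache it.
    public MongoClient get(String dbName) {
        return clients.computeIfAbsent(dbName,
                name -> MongoClient.create(vertx, new JsonObject().put("db_name", name)));
    }

    // Remove the client from the map and close it off the event loop,
    // so a slow close cannot block an event-loop thread.
    public void closeAndRemove(String dbName) {
        MongoClient client = clients.remove(dbName);
        if (client != null) {
            vertx.executeBlocking(promise -> {
                client.close();
                promise.complete();
            }, res -> { /* closed (or failed); nothing else to do here */ });
        }
    }
}
Whether this removes the warning depends on where the time is actually spent, but it at least keeps a slow close() off the event loop.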

Kafka Connect 100% CPU

I am using kafka_2.12-2.3.1 on Ubuntu 18 with JDK 13. While everything works fine, I noticed that Kafka Connect is consuming 100% CPU, and it keeps consuming CPU even though there are no messages to process.
I took a thread dump but there are no clues in it.
I am using the Postgres connector with PostgreSQL 11.x, and I am starting Kafka Connect in distributed mode.
UPDATE 1
After searching through various sites, I got a few hints and realised that it could be a bug in the JDK itself (related to polling). Below is the section of the thread dump that was highlighted most often.
I will check the behavior with standalone Kafka Connect.
"KafkaBasedLog Work Thread - connect-offsets" #31 prio=5 os_prio=0 cpu=828.02ms elapsed=1039.24s tid=0x00007f8530089800 nid=0x305b runnable [0x00007f852a3f4000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@13.0.1/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@13.0.1/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@13.0.1/SelectorImpl.java:124)
- locked <0x00000000809efd98> (a sun.nio.ch.Util$2)
- locked <0x00000000809efd40> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@13.0.1/SelectorImpl.java:136)
at org.apache.kafka.common.network.Selector.select(Selector.java:794)
at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1281)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1225)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1201)
at org.apache.kafka.connect.util.KafkaBasedLog.poll(KafkaBasedLog.java:262)
at org.apache.kafka.connect.util.KafkaBasedLog.access$500(KafkaBasedLog.java:71)
at org.apache.kafka.connect.util.KafkaBasedLog$WorkThread.run(KafkaBasedLog.java:337)
UPDATE 2
Tried standalone mode and the CPU still tops out at 100%. I confirm that there are no messages to process.
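For what it's worth, the JDK polling bug hinted at above typically shows up as Selector.select() returning immediately with no ready keys, so the poll loop spins. The following standalone probe is purely illustrative (it is not Kafka code) and shows how such premature returns can be detected:
import java.io.IOException;
import java.nio.channels.Selector;

public class SelectSpinProbe {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        long timeoutMs = 500;
        int prematureReturns = 0;

        while (!Thread.currentThread().isInterrupted()) {
            long start = System.nanoTime();
            int ready = selector.select(timeoutMs); // should block for ~timeoutMs when idle
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            // A small tolerance avoids counting returns that are merely a few ms early.
            if (ready == 0 && elapsedMs < timeoutMs - 10) {
                // select() came back early with nothing ready: the busy-spin symptom
                if (++prematureReturns >= 100) {
                    System.err.println("Selector appears to be spinning (possible epoll bug)");
                    prematureReturns = 0;
                }
            } else {
                prematureReturns = 0;
            }
        }
    }
}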

REST server (Play Framework) gets "Read timed out" exception during load test

We are running a heavy load test (JMeter: 350 threads, 35M total requests) against a REST server built on Play Framework and run into the following error after ~2 hours. We removed the other components so that the server simply accepts requests and does nothing. Does anyone have any idea, or is it simply that Play Framework cannot handle a load this heavy?
2014/07/05 11:59:38 WARN - com.company.test.RestTest2: Run TestSQL throw error java.lang.Exception: com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
at com.company.dispatcher.RexsterRESTTaskDispatcher.dispatchTask(RexsterRESTTaskDispatcher.java:76)
at com.company.test.RestTest2.runTest(RestTest2.java:375)
at org.apache.jmeter.protocol.java.sampler.JavaSampler.sample(JavaSampler.java:191)
at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:429)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:257)
at java.lang.Thread.run(Thread.java:744)
Part of the application.conf :
....
db.pool.timeout=100000
play {
  akka {
    akka.loggers = ["akka.event.Logging$DefaultLogger", "akka.event.slf4j.Slf4jLogger"]
    loglevel = WARNING
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-factor = 64
          parallelism-max = 1000
        }
      }
    }
  }
}
I had this error today. It took me a while to find out that one of the Windows (svchost) processes was occupying port 1099, which the JMeter server was trying to use.
I got a hint for this when trying to start the Jmeter-Server.bat file manually. The following PowerShell command then provided the details of that process. After closing that process, JMeter clients started to connect again.
Get-Process -Id (Get-NetTCPConnection -LocalPort 1099).OwningProcess
There are many things to check:
Are you running the test from the same machine? If yes, that's a problem.
Is your machine's TCP stack tuned?
What is your JVM configuration regarding -Xmx, along with your machine's memory, CPU, etc.?
What does your test look like? Could you show a screenshot with all elements unfolded?
I think Play/Akka can handle this load without problems, so I would look into configuration issues.

High Tomcat thread count caused by threads waiting on MultiThreadedHttpConnectionManager ConnectionPool

We've recently started seeing spikes in the thread counts on our Tomcat servers (peaking at over 1000 when they are normally around 100). We performed a thread dump on one of the Tomcat servers while its thread count was high and found that a large number of the threads were waiting on MultiThreadedHttpConnectionManager$ConnectionPool; stack trace as follows:
"TP-Processor21700" daemon prio=10 tid=0x4a0b3400 nid=0x2091 in Object.wait() [0x399f3000..0x399f4004]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x58ee5030> (a org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$ConnectionPool)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.doGetConnection(MultiThreadedHttpConnectionManager.java:518)
- locked <0x58ee5030> (a org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$ConnectionPool)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.getConnectionWithTimeout(MultiThreadedHttpConnectionManager.java:416)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:153)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
...
There are 3 points in our code where httpClient.executeMethod() is called (to obtain info via an HTTP request to another Tomcat server). In each case the GetMethod object passed to it has had its socket timeout value set beforehand (i.e. via getMethod.getParams().setSoTimeout()), and the MultiThreadedHttpConnectionManager is configured in Spring to have a connectionTimeout value of 10 seconds. One thing I have noticed is that only 2 of the 3 httpClient.executeMethod() invocations are followed by a call to getMethod.releaseConnection(), so I'm wondering if this may be the cause of the problem (i.e. connections not being explicitly released).
However, what's strange is that the problem has only started occurring in the last few days, the source code has not been modified for over a year, and there has been no recent surge in requests coming through to the Tomcat servers. One change that did occur a couple of days before the problem started was that we upgraded the JVM used by the Tomcat server from Java 5 (1.5 update 14) to Java 6 (1.6 update 25). We tried temporarily reverting the JVM version to Java 5 to see if the problem stopped occurring, but it did not. Another point to note is that in most cases the Tomcat server eventually recovers and the thread count drops back to normal; we've only had one instance where a Tomcat process appears to have crashed because of the thread count increase.
We are running Tomcat 5.5 with commons-httpclient-3.1.jar on Java 1.6 update 25 in a Red Hat Linux environment.
Please let me know if you can suggest any ideas as to what may be the cause of this issue.
Thanks.
The problem was indeed caused by the fact that only 2 of the 3 httpClient.executeMethod(getMethod) invocations were followed by a call to getMethod.releaseConnection(). Ensuring all 3 httpClient.executeMethod(getMethod) invocations were inside a try/catch block followed by a finally block containing a call to getMethod.releaseConnection() prevented the high thread counts from occurring. Although this code had been in our live system for over a year, it appears the high thread count issue only recently started occurring because various search engine crawlers had started hitting the site with lots of URL requests that exercised the code path where the connection was used but not subsequently released. Problem solved.
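The fix described above boils down to a pattern along these lines (a sketch against the commons-httpclient 3.1 API; the class name, URL parameter and timeout value are placeholders, not the original code):
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.methods.GetMethod;

public class InfoFetcher {
    private final HttpClient httpClient;

    public InfoFetcher(MultiThreadedHttpConnectionManager connectionManager) {
        this.httpClient = new HttpClient(connectionManager);
    }

    public String fetch(String url) throws Exception {
        GetMethod getMethod = new GetMethod(url);    // url is a placeholder
        getMethod.getParams().setSoTimeout(10000);   // socket timeout, as in the question
        try {
            httpClient.executeMethod(getMethod);
            return getMethod.getResponseBodyAsString();
        } finally {
            // The crucial part: always return the connection to the pool.
            getMethod.releaseConnection();
        }
    }
}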

How to enable JBoss server TRACE log?

I have a web application running in JBoss AS 4.2.2.
I observed that the JBoss server automatically shuts down, and the following messages appear in server.log:
14:20:38,048 INFO [Server] Runtime shutdown hook called, forceHalt: true
14:20:38,049 INFO [Server] JBoss SHUTDOWN: Undeploying all packages
I want to enable TRACE for org.jboss.system.server.Server in jboss-log4j.xml, to hopefully get some more info about why the server shuts down.
Please let me know how to enable TRACE for this category in jboss-log4j.xml.
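For reference, raising that category to TRACE in conf/jboss-log4j.xml usually amounts to a fragment like the one below (a sketch; the category inherits whatever appenders are already defined on the root logger in your existing config):
<!-- Sketch for conf/jboss-log4j.xml: raise this category to TRACE. -->
<category name="org.jboss.system.server.Server">
  <priority value="TRACE" class="org.jboss.logging.XLevel"/>
</category>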
I was able to add TRACE for the server logger, and I could see the following output when JBoss AS shuts down automatically:
2010-06-09 19:07:46,631 DEBUG [org.jboss.wsf.stack.jbws.RequestHandlerImpl] END handleRequest: jboss.ws:context=hpnp_lqs,endpoint=APIWebService
2010-06-09 19:07:46,631 DEBUG [org.jboss.ws.core.soap.MessageContextAssociation] popMessageContext: org.jboss.ws.core.jaxws.handler.SOAPMessageContextJAXWS@3290a11e (Thread http-0.0.0.0-8080-1)
2010-06-09 19:07:55,895 INFO [org.jboss.system.server.Server] Runtime shutdown hook called, forceHalt: true
2010-06-09 19:07:55,895 TRACE [org.jboss.system.server.Server] Shutdown caller:
java.lang.Throwable: Here
at org.jboss.system.server.ServerImpl$ShutdownHook.shutdown(ServerImpl.java:1017)
at org.jboss.system.server.ServerImpl$ShutdownHook.run(ServerImpl.java:996)
2010-06-09 19:07:55,895 INFO [org.jboss.system.server.Server] JBoss SHUTDOWN: Undeploying all packages
If anybody has any clue as to what might be the cause of the automatic shutdown, please help me.
Thanks!
There's a JBoss wiki page listing log output for various shutdown causes. It looks like yours was caused by a Ctrl-C. I assume you would have known if you had hit Ctrl-C, though.
On Unix-type servers, Ctrl-C sends an interrupt signal; a similar shutdown can also be triggered by a TERM signal from someone, or from some script, running as your jboss user or as root and executing "kill <jboss pid>". If you're on Linux, I'd take a look at this question about the OOM killer.
One possible cause for this behaviour is console logout; we have observed this with our own server.
In brief, by default the Sun JVM listens for the console user logging out and shuts itself down automatically when that happens. To disable this, start the JVM with the -Xrs parameter.
See here for more details (look for "Mysterious shutdowns").
One possible cause for a forced shutdown is if the virtual machine is out of memory.
I had this problem several years ago when a colleague implemented some very nasty bulk loading of objects from a database, which caused JBoss to shut down on certain requests.
Try searching for "memory" or similar keywords in the log file and/or monitor the memory usage of the server.