My Spring Boot + MyBatis service is very slow. I analyzed the log and found this:
[DEBUG 2018-06-11 15:30:06.295] [http-nio-9973-exec-1] logid:102789834531274752 org.mybatis.spring.SqlSessionUtils.getSqlSession(SqlSessionUtils.java:97) [Creating a new SqlSession]
[DEBUG 2018-06-11 15:30:06.296] [http-nio-9973-exec-1] logid:102789834531274752 org.mybatis.spring.SqlSessionUtils.registerSessionHolder(SqlSessionUtils.java:148) [SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#584909b4] was not registered for synchronization because synchronization is not active]
[DEBUG 2018-06-11 15:30:17.997] [http-nio-9973-exec-1] logid:102789834531274752 org.mybatis.spring.transaction.SpringManagedTransaction.openConnection(SpringManagedTransaction.java:87) [JDBC Connection [com.alibaba.druid.proxy.jdbc.ConnectionProxyImpl#1ebfd05a] will not be managed by Spring]
[DEBUG 2018-06-11 15:30:17.998] [http-nio-9973-exec-1] logid:102789834531274752 org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:181) [==> Preparing: SELECT `id`,`name`,`sales_group_id`,`advertiser_id`,`agent_id`,`industry_id`,`contract_id`,`cast_system_id`,`traffic_category_id`,`distinct_type_id`,`dsp_id`,`dsp_name`,`push_ratio`,`executor_name`,`direct_sales_name`,`agent_sales_name`,`description`,`region_id`,`customer_type_id`,`creator_id`,`creator_name`,`create_time`,`update_time`,`audit_status_id`,`run_status_id`,`version`,`begin_date`,`end_date`,`cipdx`,`progress` FROM campaign_info WHERE id = ? ]
[DEBUG 2018-06-11 15:30:17.998] [http-nio-9973-exec-1] logid:102789834531274752 org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:181) [==> Parameters: 1(Integer)]
[DEBUG 2018-06-11 15:30:19.107] [http-nio-9973-exec-1] logid:102789834531274752 org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:181) [<== Total: 0]
[DEBUG 2018-06-11 15:30:19.107] [http-nio-9973-exec-1] logid:102789834531274752 org.mybatis.spring.SqlSessionUtils.closeSqlSession(SqlSessionUtils.java:191) [Closing non transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#584909b4]]
There is about a 10-second gap between org.mybatis.spring.SqlSessionUtils.getSqlSession and org.mybatis.spring.transaction.SpringManagedTransaction.openConnection. What happened? Is my MySQL slow, or is something wrong in my Spring service?
The main suspect is opening the connection to the database, since that is what happens at that point.
You are using a connection pool, so it is quite possible that all connections in the pool were busy and this particular client waited all that time until a connection was returned to the pool.
To check whether this is the case, either consult your connection pool's documentation on how to log when connections are borrowed from and returned to the pool, or enable DEBUG logging for the org.springframework.jdbc.datasource.DataSourceUtils logger.
Another possible reason is that opening a connection is slow. You can check this with a simple program (like this) that uses JDBC to connect to the database directly.
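A minimal sketch of such a check, assuming a MySQL JDBC driver is on the classpath; the URL, user, and password are placeholders for your own settings:

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectTimingCheck {
    public static void main(String[] args) throws Exception {
        // Placeholders: point these at the same database your service uses
        String url = "jdbc:mysql://db-host:3306/mydb";
        long start = System.currentTimeMillis();
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Opened connection in " + (System.currentTimeMillis() - start) + " ms");
        }
    }
}

If this consistently takes seconds rather than milliseconds, the problem is on the network/MySQL side; if it is fast, look at pool exhaustion first.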
Related
I want to connect to a Neo4j database using my credentials. I am tunneling into a machine, and once that is done, I open my browser at the port: localhost:7474.
I tried both the neo4j and bolt schemes to connect at the URL:
bolt://<node_ip>:7687 and neo4j://<node_ip>:7687, but the connection times out.
I tried checking the logs but only found that the Bolt scheme is enabled:
bash-4.2$ tail -f /logs/debug.log
2021-07-02 21:26:03.323+0000 WARN [o.n.k.a.p.GlobalProcedures] Failed to load `org.apache.commons.logging.impl.LogKitLogger` from plugin jar `/home/sandbox/neo/plugins/apoc-4.2.0.2-all.jar`: org/apache/log/Logger
2021-07-02 21:26:03.946+0000 INFO [c.n.m.g.GlobalMetricsExtension] Sending metrics to CSV file at /home/sandbox/neo/metrics
2021-07-02 21:26:03.973+0000 INFO [o.n.b.BoltServer] Bolt enabled on 0.0.0.0:7687.
2021-07-02 21:26:03.974+0000 INFO [o.n.b.BoltServer] Bolt (Routing) enabled on 0.0.0.0:7688.
2021-07-02 21:26:03.974+0000 INFO [o.n.s.AbstractNeoWebServer$ServerComponentsLifecycleAdapter] Starting web server
2021-07-02 21:26:04.001+0000 INFO [o.n.s.m.ThirdPartyJAXRSModule] Mounted unmanaged extension [n10s.endpoint] at [/rdf]
2021-07-02 21:26:05.341+0000 INFO [c.n.s.e.EnterpriseNeoWebServer] Remote interface available at http://<node_ip>:7474/
2021-07-02 21:26:05.341+0000 INFO [o.n.s.AbstractNeoWebServer$ServerComponentsLifecycleAdapter] Web server started.
2021-07-02 21:35:34.565+0000 INFO [c.n.c.c.c.l.s.Segments] [system/00000000] Pruning SegmentFile{path=raft.log.0, header=SegmentHeader{formatVersion=2, recordOffset=56, prevFileLastIndex=-1, segmentNumber=0, prevIndex=-1, prevTerm=-1}}
2021-07-02 21:35:46.079+0000 INFO [c.n.c.c.c.l.s.Segments] [neo4j/32f6599b] Pruning SegmentFile{path=raft.log.0, header=SegmentHeader{formatVersion=2, recordOffset=56, prevFileLastIndex=-1, segmentNumber=0, prevIndex=-1, prevTerm=-1}}
The query log is empty, as I could not execute any query:
bash-4.2$ tail -f query.log
2021-07-02 21:25:52.510+0000 INFO Query started: id:1 - 1009 ms: 0 B - embedded-session neo4j - - call db.clearQueryCaches() - {} - runtime=pipelined - {}
2021-07-02 21:25:52.580+0000 INFO id:1 - 1080 ms: 112 B - embedded-session neo4j - - call db.clearQueryCaches() - {} - runtime=pipelined - {}
The other articles and answers I read were mostly about misconfiguration (wrong ports), but I don't think that is the case for me, since I verified from the debug.log file that my ports are fine.
FWIW, I am using 3 replicas for my Neo4j, and right now I am connecting to just one pod.
I am tunnelling both ports:
ssh -L 7687:$IP:7687 -L 7474:$IP:7474 domain_name.com -N
Perhaps you've already checked this, but if not, can you ensure that port 7687 is also forwarded? When I tunnelled and connected via the browser, my expectation was that 7474 would be sufficient, but it turned out that forwarding 7687 is also necessary.
So, instead of providing localhost in the connection string, I made the silly mistake of writing down the actual IP, and that was the reason for the connection timeout.
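For anyone else hitting this: with both ports forwarded as above, the connection string should point at localhost, not the remote IP. A minimal sketch using the Neo4j Java driver (4.x), with placeholder credentials:

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;

public class TunnelConnectCheck {
    public static void main(String[] args) {
        // localhost, because the SSH tunnel forwards local port 7687 to the remote node
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"))) {   // placeholder credentials
            try (Session session = driver.session()) {
                System.out.println(session.run("RETURN 1").single().get(0).asInt());
            }
        }
    }
}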
I installed the Kaa IoT server manually on Ubuntu 16.04 and used the data collection sample to test how it works.
The code runs without any errors, but when I run the commands below, nothing happens:
mongo kaa
db.logs_$my_app_token$.find()
I even commented out bind_ip in mongodb.conf and restarted the mongodb, zookeeper, and kaa-node services, but nothing changed.
I also regenerated the SDK and rebuilt the project, but that didn't help either.
Finally, this is the Kaa log:
2018-06-05 15:03:53,899 [Thread-3] TRACE o.k.k.s.c.s.l.DynamicLoadManager - DynamicLoadManager recalculate() got 0 redirection rules
2018-06-05 15:03:59,472 [EPS-core-dispatcher-6] DEBUG o.k.k.s.o.s.a.a.c.OperationsServerActor - Received: org.kaaproject.kaa.server.operations.service.akka.messages.core.stats.StatusRequestMessage#30d61bb1
2018-06-05 15:03:59,472 [EPS-core-dispatcher-6] DEBUG o.k.k.s.o.s.a.a.c.OperationsServerActor - [14fc1a87-8b34-47f6-8f39-d91aff7bfff7] Processing status request
2018-06-05 15:03:59,475 [pool-5-thread-1] INFO o.k.k.s.o.s.l.DefaultLoadBalancingService - Updated load info: {"endpointCount": 0, "loadAverage": 0.02}
2018-06-05 15:03:59,477 [Curator-PathChildrenCache-0] INFO o.k.k.s.c.s.l.DynamicLoadManager - Operations server [-1835393002][localhost:9090] updated
2018-06-05 15:03:59,477 [Curator-PathChildrenCache-4] DEBUG o.k.k.s.o.s.c.DefaultClusterService - Update of node [localhost:9090:1528181889050]-[{"endpointCount": 0, "loadAverage": 0.02}] is pushed to resolver org.kaaproject.kaa.server.hash.ConsistentHashResolver#1d0276a4
2018-06-05 15:04:03,899 [Thread-3] INFO o.k.k.s.c.s.l.LoadDistributionService - Load distribution service recalculation started...
2018-06-05 15:04:03,899 [Thread-3] INFO o.k.k.s.c.s.l.DynamicLoadManager - DynamicLoadManager recalculate() started... lastBootstrapServersUpdateFailed false
2018-06-05 15:04:03,899 [Thread-3] DEBUG o.k.k.s.c.s.l.d.EndpointCountRebalancer - No rebalancing in standalone mode
2018-06-05 15:04:03,899 [Thread-3] TRACE o.k.k.s.c.s.l.DynamicLoadManager - DynamicLoadManager recalculate() got 0 redirection rules
Thank you for your help in fixing this problem.
After a lot of searching and checking, I finally found it!
There are multiple reasons this can happen:
If you are using the Kaa Sandbox, make sure your network setting is set to bridged (not NAT).
Check your iptables rules and make sure these ports are open: 9888, 9889, 9997, 9999 (a quick way to verify reachability is sketched below).
If you are using a virtual machine as your server, make sure the host's firewall does not block those ports. (This is what happened to me.)
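A simple way to check whether those ports are reachable from a client machine is a small socket probe; this is only a sketch, and the host name is a placeholder for your Kaa node:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class KaaPortCheck {
    public static void main(String[] args) {
        String host = "my-kaa-host";            // placeholder: your Kaa node's address
        int[] ports = {9888, 9889, 9997, 9999}; // the ports listed above

        for (int port : ports) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 3000); // 3-second timeout
                System.out.println(port + " reachable");
            } catch (IOException e) {
                System.out.println(port + " NOT reachable: " + e.getMessage());
            }
        }
    }
}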
I have a Play!+Scala app using Play-2.4. Looking at the server logs, I see unrelated connection logs and exceptions. For example:
2017-11-06 21:36:45,174 ERROR [New I/O worker #1] application [?:?]
! #76020798a - Internal server error, for (CONNECT) [pl.metin2.gameforge.com:443] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[NullPointerException: null]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:265) ~[com.typesafe.play.play_2.11-2.4.6.jar:2.4.6]
I am not connecting to gameforge at all, but this message still appears. Similarly, this one:
! #75fjch9ik - Internal server error, for (OPTIONS) [sip:nm] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[NullPointerException: null]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:265) ~[com.typesafe.play.play_2.11-2.4.6.jar:2.4.6]
at org.jboss.netty.handler.codec.http.HttpRequestDecoder.createMessage(HttpRequestDecoder.java:75) ~[io.netty.netty-3
It seems to me that this New I/O worker thread pool is created by Netty. Does any kind soul have an idea what might be causing these connections? The server is hosted on AWS EC2.
So I have the following client code:
def getCluster: Session = {
  import collection.JavaConversions._

  val endpoints = config.getStringList("cassandra.server")
  val keyspace  = config.getString("cassandra.keyspace")

  // register every configured contact point with the builder
  val clusterBuilder = Cluster.builder
  endpoints.foreach(clusterBuilder.addContactPoint)

  val cluster = clusterBuilder.build
  cluster
    .getConfiguration
    .getProtocolOptions
    .setCompression(ProtocolOptions.Compression.LZ4)

  cluster.connect(keyspace)
}
which is shamelessly borrowed from the examples in DataStax's driver documentation.
When I attempt to execute code with it, it always tries to connect to localhost, even though it's not configured for that...
In some cases it will connect (basic reads), but for writes I get the following log messages:
2016-07-07 11:34:31 DEBUG Connection:157 - Connection[/127.0.0.1:9042-10, inFlight=0, closed=false] Error connecting to /127.0.0.1:9042 (Connection refused: /127.0.0.1:9042)
2016-07-07 11:34:31 DEBUG STATES:404 - Defuncting Connection[/127.0.0.1:9042-10, inFlight=0, closed=false] because: [/127.0.0.1] Cannot connect
2016-07-07 11:34:31 DEBUG STATES:108 - [/127.0.0.1:9042] Connection[/127.0.0.1:9042-10, inFlight=0, closed=false] failed, remaining = 0
2016-07-07 11:34:31 DEBUG Connection:629 - Connection[/127.0.0.1:9042-10, inFlight=0, closed=true] closing connection
2016-07-07 11:34:31 DEBUG Cluster:1802 - Aborting onDown because a reconnection is running on DOWN host /127.0.0.1:9042
2016-07-07 11:34:31 DEBUG Cluster:1872 - Failed reconnection to /127.0.0.1:9042 ([/127.0.0.1] Cannot connect), scheduling retry in 512000 milliseconds
2016-07-07 11:34:31 DEBUG STATES:196 - [/127.0.0.1:9042] next reconnection attempt in 512000 ms
I can't figure out where or what I need to configure on the driver side (there is no local client, it's just the driver) to correct this issue.
My guess is that this is caused by the configuration of the cassandra.yaml file on your Cassandra node(s). The two main settings that would impact this are broadcast_rpc_address and rpc_address; from the cassandra.yaml configuration reference:
broadcast_rpc_address
(Default: unset) RPC address to broadcast to drivers and other Cassandra nodes. This cannot be set to 0.0.0.0. If blank, it is set to the value of the rpc_address or rpc_interface. If rpc_address or rpc_interface is set to 0.0.0.0, this property must be set.
rpc_address
(Default: localhost) The listen address for client connections (Thrift RPC service and native transport).
If you leave both of these at their defaults, localhost is the address Cassandra will advertise for clients to connect to.
After the driver is able to connect to a contact point, it queries that contact point's system.local and system.peers tables to determine which hosts to connect to; the addresses those tables report come from rpc_address/broadcast_rpc_address.
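If you want to see what addresses a node is actually advertising, you can query those tables directly. A rough sketch with the DataStax Java driver 3.x, where the contact point address is a placeholder and the exact columns can vary slightly between Cassandra versions:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class RpcAddressCheck {
    public static void main(String[] args) {
        // Placeholder: one of the contact points from your cassandra.server config
        try (Cluster cluster = Cluster.builder().addContactPoint("10.0.0.1").build();
             Session session = cluster.connect()) {

            // The contact point's own advertised client address
            Row local = session.execute("SELECT rpc_address FROM system.local").one();
            System.out.println("local rpc_address: " + local.getInet("rpc_address"));

            // The client addresses the contact point advertises for its peers
            for (Row peer : session.execute("SELECT peer, rpc_address FROM system.peers")) {
                System.out.println(peer.getInet("peer") + " -> " + peer.getInet("rpc_address"));
            }
        }
    }
}

If those values come back as 127.0.0.1, that would explain why the driver keeps trying to connect to localhost.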
I tried a simple test with memcached from Jelastic and always get the exception "Connection refused"... but the URL is correct. Is some additional configuration required?
MemcachedClient c = new MemcachedClient(
        new InetSocketAddress("memcached-myexample.jelastic.dogado.eu", 11211));
c.set("someKey", 3600, user);              // store the user object for one hour
User cachedUser = (User) c.get("someKey"); // read it back
Here is the exception:
2014-01-02 00:07:41.820 INFO net.spy.memcached.MemcachedConnection: Added {QA sa=memcached-myexample.jelastic.dogado.eu/92.51.168.106:11211, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2014-01-02 00:07:41.833 WARN net.spy.memcached.MemcachedConnection: Could not redistribute to another node, retrying primary node for someKey.
2014-01-02 00:07:41.835 WARN net.spy.memcached.MemcachedConnection: Could not redistribute to another node, retrying primary node for someKey.
2014-01-02 00:07:41.858 INFO net.spy.memcached.MemcachedConnection: Connection state changed for sun.nio.ch.SelectionKeyImpl#2dc1482f
2014-01-02 00:07:41.859 INFO net.spy.memcached.MemcachedConnection: Reconnecting due to failure to connect to {QA sa=memcached-myexample.jelastic.dogado.eu/92.51.168.106:11211, #Rops=0, #Wops=2, #iq=0, topRop=null, topWop=Cmd: set Key: someKey Flags: 1 Exp: 3600 Data Length: 149, toWrite=0, interested=0}
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:629)
at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:409)
at net.spy.memcached.MemcachedConnection.run(MemcachedConnection.java:1334)
I would try to telnet to your memcached cluster in order to rule out a firewall issue. You can do that with the following command.
telnet memcached-myexample.jelastic.dogado.eu 11211
If that doesn't work, then you have network issues. In that case, I would first check whether a firewall is up.
Add int portNum = 11211; at the beginning, and try again.
int portNum = 11211;
MemcachedClient c = new MemcachedClient(
        new InetSocketAddress("memcached-myexample.jelastic.dogado.eu", portNum));
// Store a value (async) for one hour
c.set("someKey", 3600, someObject);
// Retrieve a value (synchronously).
Object myObject = c.get("someKey");
Thanks, but the error was due to a firewall rule at the provider, so it was not my fault.
Check the /etc/memcached.conf file and update the IP address memcached listens on, so the cache can be reached from the host you are connecting from.
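For reference, on Debian-based installs the relevant setting in /etc/memcached.conf is typically the -l (listen address) line; the value shown here is only the usual shipped default, and the right address depends on your network setup:

# Listen on the loopback interface only (the usual shipped default).
# Change this to an address reachable by your client, or comment it
# out to listen on all interfaces, then restart memcached.
-l 127.0.0.1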