Memcached - Connection refused

I tried a simple test with memcached from Jelastic and I always get the exception "Connection refused"... But the URL is correct. Is some additional configuration needed?
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

MemcachedClient c = new MemcachedClient(
        new InetSocketAddress("memcached-myexample.jelastic.dogado.eu", 11211));
c.set("someKey", 3600, user);              // store the User object for one hour (async)
User cachedUser = (User) c.get("someKey"); // read it back synchronously
Here is the exception:
2014-01-02 00:07:41.820 INFO net.spy.memcached.MemcachedConnection: Added {QA sa=memcached-myexample.jelastic.dogado.eu/92.51.168.106:11211, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2014-01-02 00:07:41.833 WARN net.spy.memcached.MemcachedConnection: Could not redistribute to another node, retrying primary node for someKey.
2014-01-02 00:07:41.835 WARN net.spy.memcached.MemcachedConnection: Could not redistribute to another node, retrying primary node for someKey.
2014-01-02 00:07:41.858 INFO net.spy.memcached.MemcachedConnection: Connection state changed for sun.nio.ch.SelectionKeyImpl#2dc1482f
2014-01-02 00:07:41.859 INFO net.spy.memcached.MemcachedConnection: Reconnecting due to failure to connect to {QA sa=memcached-myexample.jelastic.dogado.eu/92.51.168.106:11211, #Rops=0, #Wops=2, #iq=0, topRop=null, topWop=Cmd: set Key: someKey Flags: 1 Exp: 3600 Data Length: 149, toWrite=0, interested=0}
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:629)
at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:409)
at net.spy.memcached.MemcachedConnection.run(MemcachedConnection.java:1334)

I would try to telnet to your memcached cluster in order to rule out a firewall issue. You can do that with the following command.
telnet memcached-myexample.jelastic.dogado.eu 11211
If that doesn't work, you have network issues; in that case I would first check whether a firewall is blocking the port.
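If telnet is not available, the same check can be done with a plain socket connect, for example from Java (a minimal sketch using the host and port from the question):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        try (Socket socket = new Socket()) {
            // Fail fast instead of waiting for the default OS connect timeout
            socket.connect(new InetSocketAddress("memcached-myexample.jelastic.dogado.eu", 11211), 3000);
            System.out.println("Port 11211 is reachable");
        } catch (Exception e) {
            System.out.println("Port 11211 is NOT reachable: " + e.getMessage());
        }
    }
}

If the connection is refused or times out here as well, the problem is in the network or firewall, not in the spymemcached client.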

Declare the port as int portNum = 11211; first, then try again:
int portNum = 11211;
MemcachedClient c = new MemcachedClient(
new InetSocketAddress("memcached-myexample.jelastic.dogado.eu", portNum));
// Store a value (async) for one hour
c.set("someKey", 3600, someObject);
// Retrieve a value (synchronously).
Object myObject = c.get("someKey");

Thanks, but the error was caused by a firewall rule on the provider's side, so it was not my fault.

Check the /etc/memcached.conf file and update the listen address (the server IP on which memcached accepts connections) so that the host you want to access the cache from can reach it.
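For reference, on Debian-style installations the listen address in that file is controlled by the -l option (a sketch; the exact layout of your memcached.conf may differ):

# /etc/memcached.conf
# Listen on all interfaces instead of only localhost
# (or put the specific IP the clients should connect to)
-l 0.0.0.0
# Default memcached port
-p 11211

Restart memcached after editing the file so the new listen address takes effect.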

Related

Spymemcached issue with 2 nodes configured

I am using spymemcached with the Ketama hashing algorithm for my project. I have two memcached servers running as part of an HA (high availability) setup, and my configuration is:
hibernate.cache.use_second_level_cache=true
hibernate.cache.use_query_cache=true
hibernate.cache.region.factory_class=kr.pe.kwonnam.hibernate4memcached.Hibernate4MemcachedRegionFactory
hibernate.cache.default_cache_concurrency_strategy=NONSTRICT_READ_WRITE
hibernate.cache.region_prefix=myProjectCache
hibernate.cache.use_structured_entries=false
h4m.adapter.class=kr.pe.kwonnam.hibernate4memcached.spymemcached.SpyMemcachedAdapter
h4m.timestamper.class=kr.pe.kwonnam.hibernate4memcached.timestamper.HibernateCacheTimestamperMemcachedImpl
h4m.adapter.spymemcached.hosts=host1:11211,host2:11211
h4m.adapter.spymemcached.hashalgorithm=KETAMA_HASH
h4m.adapter.spymemcached.operation.timeout.millis=5000
h4m.adapter.spymemcached.transcoder=kr.pe.kwonnam.hibernate4memcached.spymemcached.KryoTranscoder
h4m.adapter.spymemcached.cachekey.prefix=myProject
h4m.adapter.spymemcached.kryotranscoder.compression.threashold.bytes=20000
# 10 minutes
h4m.expiry.seconds=600
# a day
h4m.expiry.seconds.validatorCache.org.hibernate.cache.spi.UpdateTimestampsCache=86400
# 1 hour
h4m.expiry.seconds.validatorCache.org.hibernate.cache.internal.StandardQueryCache=3600
# 30 minutes
h4m.expiry.seconds.myProjectCache.database1=1800
h4m.expiry.seconds.myProjectCache.database2=1800
The configuration follows the link below:
SpyMemcachedAdapter
Both nodes host1 and host2 are reachable, up and running.
Issue:
As part of testing HA, when I bring down one memcached node (host1), my application does connect to host2, but only after first trying host1 (which times out, since host1 is down) on every request. This makes every request take far too long.
Below is the exception thrown for every request
2017-07-07 17:27:31.915 [SimpleAsyncTaskExecutor-6] ERROR u.c.o.sProcessor - TransId:004579 - Exception occurred while processing request :Timeout waiting for value: waited 5,000 ms. Node status: Connection Status { /host1:11211 active: false, authed: true, last read: 247,290 ms ago /host2:11211 active: true, authed: true, last read: 5 ms ago }
2017-07-07 17:28:54.666 INFO net.spy.memcached.MemcachedConnection: Reconnecting due to failure to connect to {QA sa=/host1:11211, #Rops=0, #Wops=214, #iq=0, topRop=null, topWop=Cmd: 5 Opaque: 341143 Key: myProject.myProjectCache.databse1# Amount: 0 Default: 1499444604639 Exp: 2592000, toWrite=0, interested=0}
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:677)
at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:436)
at net.spy.memcached.MemcachedConnection.run(MemcachedConnection.java:1446)
2017-07-07 17:28:54.666 WARN net.spy.memcached.MemcachedConnection: Closing, and reopening {QA sa=/host1:11211, #Rops=0, #Wops=214, #iq=0, topRop=null, topWop=Cmd: 5 Opaque: 341143 Key: myProject.myProjectCache.databse1# Amount: 0 Default: 1499444604639 Exp: 2592000, toWrite=0, interested=0}, attempt 14.
2017-07-07 17:28:54.841 WARN net.spy.memcached.MemcachedConnection: Could not redistribute to another node, retrying primary node for myProject.myProjectCache.databse1#-1:my.co.org.myProject.dao.entity.databse1.tablexyz#14744.
I am using memcached for the first time, so I am not sure whether this is the normal behavior of spymemcached, or whether I am missing something in my configuration. Would changing the timeout settings reduce the time taken to process each request?
Any suggestions/help much appreciated.
If you are using the DefaultConnectionFactory, which uses the out-of-the-box ConnectionFactoryBuilder, the reconnect only happens after the failed-operation count reaches timeoutExceptionThreshold, which (in spymemcached 2.7) is initialized to 998. So if you create your own ConnectionFactory and lower timeoutExceptionThreshold, you should see automatic recovery much sooner.
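For illustration, here is a minimal sketch of building a spymemcached client directly with a lower timeoutExceptionThreshold via ConnectionFactoryBuilder; whether the SpyMemcachedAdapter used above exposes this setting through its properties is something to check separately:

import java.net.InetSocketAddress;
import java.util.List;

import net.spy.memcached.AddrUtil;
import net.spy.memcached.ConnectionFactory;
import net.spy.memcached.ConnectionFactoryBuilder;
import net.spy.memcached.FailureMode;
import net.spy.memcached.MemcachedClient;

public class LowThresholdClient {
    public static void main(String[] args) throws Exception {
        // Mark a node as down after 10 consecutive timeouts instead of the default 998,
        // and redistribute its operations to the surviving node.
        ConnectionFactory cf = new ConnectionFactoryBuilder()
                .setTimeoutExceptionThreshold(10)
                .setFailureMode(FailureMode.Redistribute)
                .setOpTimeout(5000) // matches operation.timeout.millis above
                .build();

        List<InetSocketAddress> addrs = AddrUtil.getAddresses("host1:11211 host2:11211");
        MemcachedClient client = new MemcachedClient(cf, addrs);

        client.set("probeKey", 60, "probeValue");
        System.out.println(client.get("probeKey"));
        client.shutdown();
    }
}

With a low threshold the client stops queuing operations for the dead node much sooner, so requests fail over to host2 instead of waiting for the 5-second timeout on every call.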
Hope this helps.

Datastax Cassandra Driver always attempts to connect to localhost, even though it's not configured to do so

So I have the following Client code:
import com.datastax.driver.core.{Cluster, ProtocolOptions, Session}

def getCluster: Session = {
  import scala.collection.JavaConversions._

  // Contact points and keyspace come from application config
  val endpoints = config.getStringList("cassandra.server")
  val keyspace  = config.getString("cassandra.keyspace")

  val clusterBuilder = Cluster.builder
  endpoints.foreach(x => clusterBuilder.addContactPoint(x))

  val cluster = clusterBuilder.build
  cluster
    .getConfiguration
    .getProtocolOptions
    .setCompression(ProtocolOptions.Compression.LZ4)

  cluster.connect(keyspace)
}
which is shamelessly borrowed from the examples in datastax's driver documentation.
When I attempt to execute code with it, it always tries to connect to localhost, even though it's not configured for that...
In some cases, it will connect (basic reads) but for writes I get the following log message:
2016-07-07 11:34:31 DEBUG Connection:157 - Connection[/127.0.0.1:9042-10, inFlight=0, closed=false] Error connecting to /127.0.0.1:9042 (Connection refused: /127.0.0.1:9042)
2016-07-07 11:34:31 DEBUG STATES:404 - Defuncting Connection[/127.0.0.1:9042-10, inFlight=0, closed=false] because: [/127.0.0.1] Cannot connect
2016-07-07 11:34:31 DEBUG STATES:108 - [/127.0.0.1:9042] Connection[/127.0.0.1:9042-10, inFlight=0, closed=false] failed, remaining = 0
2016-07-07 11:34:31 DEBUG Connection:629 - Connection[/127.0.0.1:9042-10, inFlight=0, closed=true] closing connection
2016-07-07 11:34:31 DEBUG Cluster:1802 - Aborting onDown because a reconnection is running on DOWN host /127.0.0.1:9042
2016-07-07 11:34:31 DEBUG Cluster:1872 - Failed reconnection to /127.0.0.1:9042 ([/127.0.0.1] Cannot connect), scheduling retry in 512000 milliseconds
2016-07-07 11:34:31 DEBUG STATES:196 - [/127.0.0.1:9042] next reconnection attempt in 512000 ms
I can't figure out where or what I need to configure on the driver side (there is no local client, just the driver) to correct this issue.
My guess is that this is caused by configuration of the cassandra.yaml file on your cassandra node(s). The two main settings that would impact this are broadcast_rpc_address and rpc_address, from The cassandra.yaml configuration reference:
broadcast_rpc_address
(Default: unset) RPC address to broadcast to drivers and other Cassandra nodes. This cannot be set to 0.0.0.0. If blank, it is set to the value of the rpc_address or rpc_interface. If rpc_address or rpc_interface is set to 0.0.0.0, this property must be set.
rpc_address
(Default: localhost) The listen address for client connections (Thrift RPC service and native transport).
If you leave both of these at their defaults, localhost will be the address Cassandra advertises for clients to connect to.
After the driver connects to a contact point, it queries that node's system.local and system.peers tables to determine which hosts to connect to; the addresses those tables report come from rpc_address/broadcast_rpc_address.
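For example, in cassandra.yaml on each node you would set those two values to an address the client machines can actually reach (an illustrative sketch; substitute the node's real IP):

# cassandra.yaml (per node)
# Address the node listens on for client (native transport / Thrift) connections
rpc_address: 10.0.0.5
# Address advertised to drivers via system.local / system.peers;
# must be set explicitly if rpc_address is 0.0.0.0
broadcast_rpc_address: 10.0.0.5

Restart each node after changing these settings, and make sure the configured addresses match the contact points listed in cassandra.server.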

RHQ Agent does not install

I configured rhq-agent.bat. When I try to run it, it reports the following errors:
2015-06-25 14:51:24,891 ERROR [RHQ Server Polling Thread] (enterprise.communications.command.client.JBossRemotingRemoteCommunicator)- {JBossRemotingRemoteCommunicator.init-callback-failed}The initialize callback has failed. It will be tried again. Cause: org.jboss.remoting.CannotConnectException:Can not connect http client invoker after 1 attempt(s) -> java.net.ConnectException:Connection timed out: connect. Cause: org.jboss.remoting.CannotConnectException: Can not connect http client invoker after 1 attempt(s)
2015-06-25 14:51:42,987 ERROR [main] (org.rhq.enterprise.agent.AgentMain)- {AgentMain.plugin-update-failure}Failed to update the plugins.. Cause: java.lang.IllegalStateException: The sender object is currently not sending commands now. Command not sent: [Command: type=[remotepojo]; cmd-in-response=[false]; config=[{rhq.send-throttle=true}]; params=[{invocation=NameBasedInvocation[getLatestPlugins], targetInterfaceName=org.rhq.core.clientapi.server.core.CoreServerService}]]
I was able to solve it myself: the problem was the security token. I modified those fields in the registry viewer (Windows), and when I restarted the agent it worked.

PostgreSQL Exception: “An I/O error occured while sending to the backend”

I run two web apps on one machine and the DB on another machine (both apps use the same DB).
One runs very well, but the other one always goes down after about 4 hours.
Here is the error information:
Error 2014-11-03 13:31:05,902 [http-bio-8080-exec-7] ERROR spi.SqlExceptionHelper - An I/O error occured while sending to the backend.
| Error 2014-11-03 13:31:05,904 [http-bio-8080-exec-7] ERROR spi.SqlExceptionHelper - This connection has been closed.
Postgresql logs:
2014-10-26 23:41:31 CDT WARNING: pgstat wait timeout
2014-10-27 01:13:48 CDT WARNING: pgstat wait timeout
2014-10-27 03:55:46 CDT LOG: could not receive data from client: Connection timed out
2014-10-27 03:55:46 CDT LOG: unexpected EOF on client connection
What is causing this problem: the app, the database, or the network?
Reason:
At this point it was clear that the TCP connection that was sitting idle was already broken, but our app still assumed it to be open. By idle connections, I mean connections in the pool that aren’t in active use at the moment by the application.
After some searching, I came to the conclusion that the network firewall between my app and the database was dropping idle/stale connections after 1 hour. This seems to be a common problem that many people have faced.
Solution:
In Grails, you can configure the connection pool accordingly in DataSource.groovy:
environments {
    development {
        dataSource {
            // configure DBCP
            properties {
                maxActive = 50
                maxIdle = 25
                minIdle = 1
                initialSize = 1
                minEvictableIdleTimeMillis = 60000
                timeBetweenEvictionRunsMillis = 60000
                numTestsPerEvictionRun = 3
                maxWait = 10000
                testOnBorrow = true
                testWhileIdle = true
                testOnReturn = false
                validationQuery = "SELECT 1"
            }
        }
    }
}

MSDTC (Distributed Transaction Coordinator) Stops working. Error code -1073737669

I cannot start the Distributed Transaction Coordinator service.
It stopped working a few days ago.
When I try to start the service:
Registry properties:
RPC (for a test, the values here were changed to the opposite and back, without any results):
Windows logs \ application logs:
Event 53283:
A MS DTC component has encountered an internal error. The process is being terminated. Error Specifics: DtcSystemShutdown (d:\w7rtm\com\complus\dtc\dtc\msdtc\src\msdtc.cpp#2539): Shutting down with an error
Event 4111:
The MS DTC service is stopping.
Event 4102:
DTC Security Configuration values (OFF = 0 and ON = 1): Network Administration of Transactions = 1,
Network Clients = 1,
Inbound Distributed Transactions using Native MSDTC Protocol = 1,
Outbound Distributed Transactions using Native MSDTC Protocol = 1,
Transaction Internet Protocol (TIP) = 0,
XA Transactions = 1,
SNA LU 6.2 Transactions = 1
Could not initialize the MS DTC Transaction Manager.
Event 4356:
Failed to initialize the MS DTC Communication Manager. Error Specifics: hr = 0x80070057, d:\w7rtm\com\complus\dtc\dtc\cm\src\ccm.cpp:2117, CmdLine: C:\Windows\System32\msdtc.exe, Pid: 5332
Event 4358:
The MS DTC Connection Manager is unable to register with RPC to use one of LRPC, TCP/IP, or UDP/IP. Please ensure that RPC is configured properly. If "ServerTcpPort" registry key is configured(DWORD value under the HKEY_LOCAL_MACHINE\Software\Microsoft\MSDTC for local DTC instance or under cluster hive for clustered DTC instance), please verify if the configured port is valid and the port is not already in use by a different component. Error Specifics:hr = 0x80070057, d:\w7rtm\com\complus\dtc\dtc\cm\src\iomgrsrv.cpp:2523, CmdLine: C:\Windows\System32\msdtc.exe, Pid: 5332
Event 4156:
String message: RPC raised an exception with a return code RPC_S_INVALIDA_ARG..
I found that we can use the -resetlog command, but this does not resolve my problem:
The firewall is disabled.
Try deleting the key HKLM\Software\Microsoft\Rpc\Internet from the registry.
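If you prefer the command line, the same key can be removed with reg.exe from an elevated prompt (back up the key first; this is just a sketch of the equivalent commands):

reg export "HKLM\Software\Microsoft\Rpc\Internet" rpc_internet_backup.reg
reg delete "HKLM\Software\Microsoft\Rpc\Internet" /f

Then restart the MSDTC service (or the machine) and check whether the errors disappear.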
To get around this issue, I had to copy the log file (which I had accidentally deleted) back to the location specified by the Local DTC Log information setting.