Enyim.Caching.Memcached - Failed to read from Socket - memcached

I'm currently building an environment for deploying a web application.
The web application uses Enyim.Caching.
There appears to be an issue with the sockets.
I'm unfamiliar with Membase Server; if there is any additional information that I can include in this post, please ask.
Any suggestions on what I can check would be greatly appreciated:
Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Pool has been inited for 127.0.0.1:11212 with 10 sockets
Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Acquiring stream from pool. 127.0.0.1:11212
Enyim.Caching.Memcached.PooledSocket - Socket 86101442-5fc2-4169-bba2-9f25f1647254 was reset
Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Socket was reset. 86101442-5fc2-4169-bba2-9f25f1647254
Enyim.Caching.Memcached.MemcachedNode - System.IO.IOException: Failed to read from the socket '127.0.0.1:11212'. Error: ?
at Enyim.Caching.Memcached.PooledSocket.BasicNetworkStream.Read(Byte[] buffer, Int32 offset, Int32 count) in d:\d\repo\EnyimMemcached\Enyim.Caching\Memcached\BasicNetworkStream.cs:line 92
at System.IO.BufferedStream.ReadByte()
at Enyim.Caching.Memcached.PooledSocket.ReadByte() in

Enyim uses port 11211 by default. It looks like you are trying 11212 instead; try changing it to 11211.

Related

how can I make large number of connections without error at client side

I have written a program in Go that makes about 2000 requests per second to different remote IPs, with the local port randomly selected by Linux, and closes each connection immediately after it is established, but I still encounter the bind: address already in use error periodically.
What I have done:
net.ipv4.ip_local_port_range is 15000-65535
net.ipv4.tcp_tw_recycle=1 net.ipv4.tcp_tw_reuse=1 net.ipv4.tcp_fin_timeout=30
Here is the sockstat output:
sockets: used 1200 TCP: inuse 2302 orphan 1603 tw 40940 alloc 2325 mem 201
I can't figure out why this error still occurs when the kernel is selecting an available local port. Will the kernel return a port that is already in use?
This is a good answer from 2012:
https://serverfault.com/questions/342741/what-are-the-ramifications-of-setting-tcp-tw-recycle-reuse-to-1#434669
As of 2018, tcp_tw_recycle exists only in the sysctl binary and is otherwise gone from the kernel:
https://github.com/torvalds/linux/search?utf8=%E2%9C%93&q=tcp_tw_recycle&type=
tcp_tw_reuse is still in use as described in the above answer:
https://github.com/torvalds/linux/blob/master/net/ipv4/tcp_ipv4.c#L128
However, while TCP_TIMEWAIT_LEN is still in use:
https://github.com/torvalds/linux/search?utf8=%E2%9C%93&q=TCP_TIMEWAIT_LEN&type=
the value is hardcoded:
https://github.com/torvalds/linux/blob/master/include/net/tcp.h#L120
and tcp_fin_timeout refers to a different state:
https://github.com/torvalds/linux/blob/master/Documentation/networking/ip-sysctl.txt#L294
One can relatively safely change the local port range to 1025-65535.
For kicks, if there were a situation where this client was talking to servers and a network under my control, I would build a new kernel with a not-to-spec TCP_TIMEWAIT_LEN, and perhaps also fiddle with tcp_max_tw_buckets:
https://github.com/torvalds/linux/blob/master/Documentation/networking/ip-sysctl.txt#L379
But doing so in other circumstances, for instance if this client is behind a NAT and talking to common public servers, will likely be disruptive.
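If neither the kernel nor the port range can be touched, one client-side option, sketched below in Go and only viable because the question's probe closes the connection immediately after it is established, is to reset the connection on close so the local socket never enters TIME_WAIT:

package main

import "net"

// probe dials addr and closes right away, as in the question, but sets
// SO_LINGER to 0 first so Close() sends an RST instead of a normal FIN,
// which means the local socket skips TIME_WAIT entirely.
// Caveat: a linger of 0 discards any unsent data on close; that is only
// acceptable here because nothing is written on the connection.
func probe(addr string) error {
    conn, err := net.Dial("tcp", addr)
    if err != nil {
        return err
    }
    if tc, ok := conn.(*net.TCPConn); ok {
        tc.SetLinger(0) // SO_LINGER=0: reset on close, no TIME_WAIT
    }
    return conn.Close()
}

This trades the protection TIME_WAIT provides for the risk of stray RSTs, so it belongs in load-test or probe-style clients rather than general-purpose ones.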

Remote Logging using Log4j2

So I have this task to log activities to a file, but it has to be done remotely on the server side: remote logging.
NOTE: Remote logging has to be done with the latest version of Log4j2 (2.10).
My task was simple:
Send logging info to a port.
Log info from the port to a file.
My Discoveries
A Socket Appender exists which helps send info to a port. That's it; you don't need to create any client-side code or anything.
Socket appender configuration in log4j2.properties
appender.socket.type = Socket
appender.socket.name= Socket_Appender
appender.socket.host = "IP address"
appender.socket.port = 8101
appender.socket.layout.type = SerializedLayout
appender.socket.connectTimeoutMillis = 2000
appender.socket.reconnectionDelayMillis = 1000
appender.socket.protocol = TCP
Adapted from here, but that is also a log4j 1.x adaptation.
I found out that before Log4j 2.6, to listen on a port we used TcpSocketServer, which started a server using LogEventBridge (this helped me reach that conclusion). This class was in core.net.server, which is no longer available, so I assume it is not used anymore; the only similar/closest class is TcpSocketManager. Other links that helped: How to use SocketAppend?
Then I tried this:
public static final Logger LOG = LogManager.getLogger(myapp.class.getName());

public static void main(String[] args) {
    LOG.debug("DEBUG LEVEL");
}
and got the following error
main ERROR TcpSocketManager (TCP:IPAddress:8111) caught exception
and will continue: java.net.SocketTimeoutException: connect timed out
I know this worked because I made it write to a socket with no one listening, but somehow I messed up big time and there was a code change.
I need help on how to go ahead. Thank you in advance.
The socket server to remotely receive log events has been moved to a separate repository: https://github.com/apache/logging-log4j-tools
This still needs to be released.
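Until that server component ships, a stand-in receiver that simply appends whatever lines arrive on the port to a file can cover step 2. The sketch below is in Go rather than anything Log4j-specific, and it assumes the appender is switched to a line-oriented layout such as PatternLayout, since SerializedLayout emits Java-serialized objects that a plain TCP reader cannot parse. The port mirrors the config above; the output file name is just illustrative.

package main

import (
    "bufio"
    "log"
    "net"
    "os"
    "sync"
)

func main() {
    // Append every received line to a single log file.
    out, err := os.OpenFile("remote.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        log.Fatal(err)
    }
    defer out.Close()

    ln, err := net.Listen("tcp", ":8101") // same port as the appender config
    if err != nil {
        log.Fatal(err)
    }

    var mu sync.Mutex // keep lines from different connections intact
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go func(c net.Conn) {
            defer c.Close()
            sc := bufio.NewScanner(c)
            for sc.Scan() {
                mu.Lock()
                out.Write(append(sc.Bytes(), '\n'))
                mu.Unlock()
            }
        }(conn)
    }
}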

failed to find free socket port for process dispatcher when trying remote debug

Highlights:
windows 10 host machine
ubuntu vagrant box (virtualbox) as guest vm
using Vagrant port forwarding like this: config.vm.network "forwarded_port", guest: 1234, host: 12340
IDE: IntelliJ IDEA with Ruby plugin
The Issue:
I've tried to set up remote Ruby debugging following this guide and I am getting an error in the IDE: "failed to find free socket port for process dispatcher". It looks like this issue is not IntelliJ-specific; I was able to reproduce it with the latest RubyMine as well.
From IDEA's log:
2017-07-07 21:53:03,515 [8879188] INFO - tion.impl.ExecutionManagerImpl - Failed to find free socket port for process dispatcher
com.intellij.execution.ExecutionException: Failed to find free socket port for process dispatcher
at org.jetbrains.plugins.ruby.ruby.debugger.RubyProcessDispatcher.<init>(RubyProcessDispatcher.java:46)
at org.jetbrains.plugins.ruby.ruby.debugger.RubyRemoteDebugRunner.doExecute(RubyRemoteDebugRunner.java:62)
...
Caused by: java.net.BindException: Address already in use: JVM_Bind
at java.net.TwoStacksPlainSocketImpl.socketBind(Native Method)
at java.net.TwoStacksPlainSocketImpl.socketBind(TwoStacksPlainSocketImpl.java:137)
...
I can understand that it says Address already in use: JVM_Bind, but how is remote debugging supposed to work at all then? (I mean, is there any way to access a guest VM port without forwarding it first? Clearly not.) Any help to solve this issue is much appreciated.
For me the issue was due to another debug session that was open in the background. To prevent that from happening again (and also to close all other currently open sessions once you run the configuration again), select "Single instance only" in the Debug Configuration:

memcached apparently resetting connections

UPDATE:
It's not memcached, it's a lot of sockets in TIME_WAIT state:
% ss -s
Total: 2494 (kernel 2784)
TCP: 43323 (estab 2314, closed 40983, orphaned 0, synrecv 0, timewait 40982/0), ports 16756
BTW, I have modified the previous version (below) to use Brad Fitz's memcache client and to reuse the same memcache connection:
http://dpaste.com/1387307/
OLD VERSION:
I have thrown together the most basic web server in Go, with a handler function doing only one thing:
retrieving a key from memcached
sending it as http response to client
Here's the code: http://dpaste.com/1386559/
The problem is I'm getting a lot of connection resets on memcached:
2013/09/18 20:20:11 http: panic serving [::1]:19990: dial tcp 127.0.0.1:11211: connection reset by peer
goroutine 20995 [running]:
net/http.func·007()
/usr/local/go/src/pkg/net/http/server.go:1022 +0xac
main.maybe_panic(0xc200d2e570, 0xc2014ebd80)
/root/go/src/http_server.go:19 +0x4d
main.get_memc_val(0x615200, 0x7, 0x60b5c0, 0x6, 0x42ee58, ...)
/root/go/src/http_server.go:25 +0x64
main.func·001(0xc200149b40, 0xc2017b3380, 0xc201888b60)
/root/go/src/http_server.go:41 +0x35
net/http.HandlerFunc.ServeHTTP(0x65e950, 0xc200149b40, 0xc2017b3380, 0xc201888b60)
/usr/local/go/src/pkg/net/http/server.go:1149 +0x3e
net/http.serverHandler.ServeHTTP(0xc200095410, 0xc200149b40, 0xc2017b3380, 0xc201888b60)
/usr/local/go/src/pkg/net/http/server.go:1517 +0x16c
net/http.(*conn).serve(0xc201b9b2d0)
/usr/local/go/src/pkg/net/http/server.go:1096 +0x765
created by net/http.(*Server).Serve
/usr/local/go/src/pkg/net/http/server.go:1564 +0x266
I have taken care to set up Linux kernel networking in such a way that it does not get in the way (turning off SYN flooding protection, etc.).
...
...
And yet when testing with "ab" (below) I'm getting those errors.
ab -c 1000 -n 50000 "http://localhost:8000/"
There is no sign whatsoever anywhere I looked that it's the kernel (dmesg, /var/log).
I would guess that is because you are running out of sockets: you never close the memc connection here. Check with netstat while your program is running.
func get_memc_val(k string) []byte {
    memc, err := gomemcache.Connect(mc_ip, mc_port) // a new connection is dialed on every request and never closed
    maybe_panic(err)
    val, _, _ := memc.Get(k)
    return val
}
I'd use this Go memcache interface if I were you; it was written by the author of memcached, who now works for Google on Go-related things.
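To make that concrete, here is a minimal sketch along the lines of the updated version, assuming github.com/bradfitz/gomemcache (the key is illustrative): a single shared client is created once and reused by every handler, instead of dialing memcached per request.

package main

import (
    "net/http"

    "github.com/bradfitz/gomemcache/memcache"
)

// One client for the whole process; it keeps its own small pool of
// connections to memcached instead of opening a new one per request.
var mc = memcache.New("127.0.0.1:11211")

func handler(w http.ResponseWriter, r *http.Request) {
    it, err := mc.Get("somekey")
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.Write(it.Value)
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8000", nil)
}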
Try the memcache client from the YBC library. Unlike gomemcache, it opens and re-uses only a few connections to the memcache server regardless of the number of concurrent requests issued via the client. It achieves high performance by pipelining concurrent requests over a small number of open connections to the memcache server.
The number of connections to the memcache server can be configured via ClientConfig.ConnectionsCount.

SSL_Connection failed with SSL_ERROR_SYSCALL with errno=2?

The SSL_Connect API is failing with return value 5 and errno=2.
Can anyone help me with how to trace this? Can anyone let me know what could cause this issue?
OS: Windows 2003 Std Sp2 32 bit
You can use the following APIs to check the error further, as they will store the error as a string in buf:
unsigned long value;
char buf[256];
value = ERR_get_error();
ERR_error_string_n(value, buf, sizeof buf);
Furthermore, I also received this error when I added "SET_MODE_AUTO_RETRY" to the CTX object and created an SSL object. I removed it and changed my code to retry on its own after some delay in case of an error.
Another check you can do is to see what port value you are using. If it is not 443, then please try 443; it may help.
I am also new to this, so I am just sharing what I tried in order to resolve these issues.