Netty concurrency and "Connection reset by peer" - sockets

I've built the following simple server, and I'm stress testing it using ab.
If I run ab making 3,000 total requests (300 concurrent), it works. If I run it again, it shows me:
apr_socket_connect(): Connection reset by peer (54)
If, after this error, I make a single request with curl without restarting the server, it works. If I run ab again, it shows the same error.
It seems that it can't handle too many concurrent connections. Here is the code:
public static void main(String[] args) throws Exception {
    ServerBootstrap bootstrap = new ServerBootstrap(
            new NioServerSocketChannelFactory(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool()));
    bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
        @Override
        public ChannelPipeline getPipeline() throws Exception {
            return Channels.pipeline(new StringEncoder(), new MyServerHandler());
        }
    });
    bootstrap.bind(new InetSocketAddress(9090));
    System.out.println("Running");
}
Here is the handler:
public class MyServerHandler extends SimpleChannelUpstreamHandler {

    private static AtomicLong request = new AtomicLong();

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e)
            throws Exception {
        ChannelFuture channelFuture = e.getChannel().write(
                "This is request #" + request.incrementAndGet() + "\n");
        channelFuture.addListener(ChannelFutureListener.CLOSE);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e)
            throws Exception {
        System.out.println(e.getCause());
        e.getChannel().close();
    }
}
As you can see it's very simple; it just shows the total number of requests handled.
Any tips?
Thanks

'Connection reset by peer' usually means you have written to a connection that has already been closed by the other end. In other words, an application protocol error. You get the error itself on a subsequent read or write.

I don't immediately see anything wrong, but you could try the following to get more information:
Override channelClosed and output something so that you're 100% sure that Netty is at least trying to close the channel (see the sketch after this list).
Use jvisualvm to have a look at the JVM running your server; you should be able to see the threads and whether they're active or not.
Write something to System.out server-side in channelConnected so you know that your connections have made it that far (especially for the 2nd run).
When you run ab the second time, is there an error for every connection attempt, or just for some?
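As a minimal sketch of the first suggestion, written against the Netty 3.x handler from the question (the log text is illustrative only):
@Override
public void channelClosed(ChannelHandlerContext ctx, ChannelStateEvent e)
        throws Exception {
    // Proves that Netty actually got as far as closing this channel.
    System.out.println("Channel closed: " + e.getChannel().getId());
    super.channelClosed(ctx, e);
}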
What I find odd is that it seems to work the first time, but not thereafter. Keep in mind that this may not be a Netty - or even a JVM - issue, but rather the OS somehow limiting the connection attempts.
I have done some tests with my own Netty test server, and found that starting large batches of concurrent connections produces an unpredictable outcome (most will connect, some will fail, but always in a different ratio). I haven't yet figured out why that is, but I suspect it is my OS refusing the connections rather than Netty.
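If it is the OS, one knob worth checking is the listen backlog: when the accept queue overflows under a burst of connections, the kernel may reset new ones. A sketch for Netty 3.x (the value 1024 is an arbitrary assumption, and the OS caps it, e.g. via somaxconn on Linux):
// Raise the accept backlog before binding; the OS applies its own upper limit.
bootstrap.setOption("backlog", 1024);
bootstrap.bind(new InetSocketAddress(9090));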

How does a Netty server know when the client is disconnected?

I am making a Netty server that satisfies the following conditions:
The server needs to do transaction process A when it receives a packet from its client.
After finishing the transaction, if the client is still connected, the server sends a return message back to it. If not, it does some rollback process B.
But my problem is that when I send the reply, the server does not know whether the client is still connected or not.
I've tried the following code to figure out the connection state before sending messages. However, it always succeeds even though the client has already closed its socket. It only fails when the client process is forcibly killed (e.g. Ctrl+C).
final ChannelFuture cf = inboundChannel.writeAndFlush(resCodec);
cf.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        if (future.isSuccess()) {
            inboundChannel.close();
        } else {
            try {
                // do Rollback Process B here
            } catch (Exception e) {
                // handle rollback failure
            }
        }
    }
});
I thought this was because of the TCP protocol: if the client disconnects gracefully, a FIN is sent to the server, so writeAndFlush appears to succeed even when nobody will read the reply.
So I've tried the following code too, but it gives the same result (always returns true):
if (inboundChannel.isActive()) {
    inboundChannel.writeAndFlush(msg);
} else {
    // do Rollback B
}
// Similar code using inboundChannel.isOpen(), inboundChannel.isWritable()
Neither the 'channelInactive' event nor a 'Connection reset by peer' exception occurs in my case.
This is the relevant part of the Netty test client code I used:
public void channelActive(ChannelHandlerContext ctx) {
    ctx.writeAndFlush(message).addListener(ChannelFutureListener.CLOSE);
}
How can I detect the disconnection at the time I want to reply?
Maybe you should override the method below and see if control reaches it when the channel is closed.
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
    // write cleanup code
}
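For completeness, a hedged Netty 4 sketch of the same idea using the channel's close future, which fires on any close regardless of which side initiated it; note that, like channelInactive, a graceful FIN is only observed once the event loop reads it, so this may still fire after your write was buffered (rollbackIfPending() is a hypothetical hook for Rollback Process B):
public class TransactionHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // The close future completes once the channel is closed for any reason
        // (FIN from the client, RST, or a local close).
        ctx.channel().closeFuture().addListener(future -> rollbackIfPending());
    }

    private void rollbackIfPending() {
        // Hypothetical hook: run Rollback Process B if the reply was not sent.
    }
}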
I don't think it's possible to track whether the client is connected or not in Netty, because there is an abstraction between Netty and the client.

AndroidAsync TCP -- proper way to detect socket is no longer available using write?

I am wondering what is the proper way to check on the client side that a TCP socket opened using the AndroidAsync library is no longer available? This is in the case the (plain TCP, non-AndroidAsync) server did not initiate explicitly closing the socket (so the ClosedCallback is not invoked). For instance, when the server has been cold rebooted.
It seems that the DataCallback is available only when the server sends back data and can't be used to receive error messages.
It seems to me also that
Util.writeAll(socket, (byte[]) payload.array(), new CompletedCallback()
{
    @Override
    public void onCompleted(Exception ex)
    {
        if (ex != null)
        {
            Log.e(TAG, "write failed with ex message= " + ex.getMessage());
            throw new RuntimeException(ex);
        }
    }
});
does not throw an Exception either.
So at this point I'm not sure how to detect the socket is no longer available even if the client periodically writes data to it.
It will throw an IOException if you send enough data or call it enough times. It won't throw on the first call due to TCP buffering at both ends.
I ended up implementing a "ping"-like periodic check. The client opens and immediately closes a TCP connection to the very same port using a plain Java NIO socket call (not using AndroidAsync). If that one times out, the connection is assumed lost, and a recovery attempt is made once it succeeds again. This periodic check is performed only when the app has focus, or has just been awakened. It is clearly a far-from-ideal workaround, but it seems to work for my purposes.
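A minimal sketch of that probe (host, port, and the timeout are assumptions; a plain java.net.Socket is shown, which is equivalent here to the NIO call):
private static boolean isReachable(String host, int port, int timeoutMs)
{
    // Open and immediately close a throwaway connection to the same port.
    try (Socket probe = new Socket())
    {
        probe.connect(new InetSocketAddress(host, port), timeoutMs);
        return true;
    }
    catch (IOException e)
    {
        // Timeout or refusal: assume the connection has been lost.
        return false;
    }
}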
You could use the closed/end callbacks:
socket.setClosedCallback(new CompletedCallback()
{
    @Override
    public void onCompleted(Exception ex)
    {
        // Invoked when the socket is closed.
    }
});
socket.setEndCallback(new CompletedCallback()
{
    @Override
    public void onCompleted(Exception ex)
    {
        // Invoked when the remote end signals end of stream.
    }
});

How to cache a Memcached connection using the java spymemcached client

I am learning how to cache objects in memcached using the spymemcached client, starting from the spymemcached examples:
MemcachedClient c=new MemcachedClient(new InetSocketAddress("hostname", portNum));
// Store a value (async) for one hour
c.set("someKey", 3600, someObject);
// Retrieve a value (synchronously).
Object myObject=c.get("someKey");
I have noted that each time I want to cache or retrieve an object I create a new memcached client, which I am assuming opens a new connection. Memcached has no connection pooling mechanism, so users are advised to cache the connection to decrease the overhead of reconnecting, as discussed in this question: opening closing and reusing connections.
My question is how do I cache this connection? Can someone please give me an example I can start from.
If you are wondering what I have tried: I tried to put my connection in memcached, but then I realized that I have to create a connection to get it. :)
Thanks in advance.
I have noted that each time I want to cache or retrieve an object I create a new memcached client, which I am assuming opens a new connection.
Don't do this; spymemcached uses a single connection for ALL I/O to and from memcache; it's all done asynchronously. From the spymemcached docs:
Each MemcachedClient instance establishes and maintains a single connection to each server in your cluster.
Just do the following once in your app; make sure the client is available to other services in your app so they can access it.
MemcachedClient memClient = new MemcachedClient(new InetSocketAddress(host, port));
Use memClient for I/O with memcache; there is no need to create a new instance of MemcachedClient each time. The link you provided answers all of your questions.
What is your deployment? web-app or standalone?
This just means that you should reuse the connections that you open, as opposed to opening a connection for each request. It doesn't make sense to store a connection instance in memcached.
Caching the connection in this case means caching it in your application (keeping it in memory and open), not actually storing the connection in memcached.
I did a little more research and stumbled on this question.
Then I came up with my solution as follows.
First I created a context listener:
public class ContextListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        Memcached.createClient();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
    }
}
Then I added the listener to the deployment descriptor by adding these lines to web.xml:
<listener>
    <description>Used with memcached to initialize connection</description>
    <listener-class>com.qualebs.managers.ContextListener</listener-class>
</listener>
Then I created a class Memcached that holds the client and exposes these methods:
public class Memcached {

    private static MemcachedClient client;

    static void createClient() {
        try {
            client = new MemcachedClient(new InetSocketAddress("localhost", 11211));
        } catch (IOException ex) {
            Logger.getLogger(Memcached.class.getName()).log(Level.SEVERE, ex.getMessage(), ex);
        }
    }

    static MemcachedClient getClient() {
        return client;
    }
}
Now anywhere I need a memcached connection I just call Memcached.getClient().
I hope that will help anybody else out there with the same question.
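A usage sketch, mirroring the spymemcached example from the question (key and expiry are arbitrary):
// Reuse the shared client; never construct a new MemcachedClient per request.
MemcachedClient c = Memcached.getClient();
c.set("someKey", 3600, someObject);  // store (async) for one hour
Object myObject = c.get("someKey");  // retrieve (synchronously)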

What is the best approach to attempt the same RPC call multiple times

What is the best way to attempt the same RPC call multiple times when it fails?
For example: if an RPC call fails due to a network problem, the failure is caught in onFailure(Throwable caught).
At that point the same RPC should be retried to check the network connection, with a maximum of 3 attempts; after that, the user should be shown a message like "Network is not established".
How can I achieve this?
One thought was to call the same RPC again in onFailure, but then each retry is built as a separate request, while I want the same logical request to be attempted three times. That doesn't feel like a good approach, and I don't know of a better solution.
Thanks in advance.
Use a counter in your AsyncCallback implementation. I also recommend using a timer to wait a bit before requesting the server again.
This code should work:
final GreetingServiceAsync greetingService = GWT.create(GreetingService.class);
final String textToServer = "foo";
greetingService.greetServer(textToServer, new AsyncCallback<String>() {
    int tries = 0;

    public void onSuccess(String result) {
        // Do something
    }

    public void onFailure(Throwable caught) {
        if (tries++ < 3) {
            // Keep a reference to this callback: inside the Timer below,
            // "this" refers to the Timer, not to the AsyncCallback.
            final AsyncCallback<String> callback = this;
            // Optional: enclose the new call in a timer to wait some time
            // before requesting the server again
            new Timer() {
                public void run() {
                    greetingService.greetServer(textToServer, callback);
                }
            }.schedule(4000);
        }
    }
});
@Jens gave this answer on Google Groups:
You could transparently handle this for all your requests of a given GWT-RPC interface by using a custom RpcRequestBuilder. This custom RpcRequestBuilder would make 3 request attempts and, if all 3 fail, call the onFailure() method.
MyRemoteServiceAsync service = GWT.create(MyRemoteService.class);
((ServiceDefTarget) service).setRpcRequestBuilder(new RetryThreeTimesRequestBuilder());
The custom RequestBuilder could also fire a "NetworkFailureEvent" on the event bus if multiple application components may be interested in that information. For example, you could overlay the whole app with a dark screen and periodically send ping requests to your server until the network comes back online. There is also the onLine HTML5 property you can check, but it's not 100% reliable (https://developer.mozilla.org/en-US/docs/Web/API/window.navigator.onLine).
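An untested sketch of such a builder, assuming the class name used above; it wraps the RequestCallback that GWT-RPC installs and calls RequestBuilder.send() again on failure:
public class RetryThreeTimesRequestBuilder extends RpcRequestBuilder {

    @Override
    protected void doSetCallback(final RequestBuilder rb, final RequestCallback callback) {
        super.doSetCallback(rb, new RequestCallback() {
            private int attempts = 1;

            public void onResponseReceived(Request request, Response response) {
                callback.onResponseReceived(request, response);
            }

            public void onError(Request request, Throwable exception) {
                if (attempts++ < 3) {
                    try {
                        rb.send(); // resend; this wrapper is still the registered callback
                    } catch (RequestException e) {
                        callback.onError(request, e);
                    }
                } else {
                    callback.onError(request, exception);
                }
            }
        });
    }
}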

How can I determine why onFailure is triggered in GWT-RPC?

I have a project that does 2 RPC calls and then saves the data that the user provided in the datastore. The first RPC call works OK, but for the second I always receive the onFailure() message. How can I determine why onFailure() is triggered? I tried caught.getCause() but it doesn't return anything.
feedbackService.saveFeedback(email, studentName, usedTemplates,
        new AsyncCallback<String>() {
            public void onFailure(Throwable caught) {
                // Show the RPC error message to the user
                caught.getCause();
                Window.alert("Failure!");
            }

            public void onSuccess(String result) {
                Window.alert("Saved!");
            }
        });
The Throwable instance is an Exception. You can check whether it is a custom exception like this:
if (caught instanceof CustomException) {
or, if you want to show the message of the exception, you can use getMessage():
Window.alert("Failure: " + caught.getMessage());
GWT-RPC is not easy to debug if an error occurs.
The easiest check is whether the Exception is a StatusCodeException.
A status code of 404 means you are pointing at a wrong endpoint.
A status code of 0 means that either:
The server is unreachable, or
You don't have permission to check whether the server is available (cross-domain request).
You can use the Chrome Web Inspector to debug GWT-RPC.
You should be able to see all calls from the browser to your backend.
The most common failures are caused by object serialization. You have to ensure that all transferred objects implement java.io.Serializable.
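A minimal sketch of those checks inside onFailure (the exception types are in com.google.gwt.user.client.rpc; the alert texts are illustrative):
public void onFailure(Throwable caught) {
    if (caught instanceof StatusCodeException) {
        int status = ((StatusCodeException) caught).getStatusCode();
        // 404: wrong endpoint; 0: server unreachable or a blocked cross-domain request
        Window.alert("HTTP status " + status + ": " + caught.getMessage());
    } else if (caught instanceof IncompatibleRemoteServiceException) {
        // Client and server serialization policies are out of sync; redeploy or refresh.
        Window.alert("Incompatible remote service: " + caught.getMessage());
    } else {
        Window.alert("Failure: " + caught.getMessage());
    }
}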
Most of the time it will just be a server-side exception being raised, which fires the onFailure() method.
Try putting breakpoints on your server side. That should help you pinpoint what's going wrong.