How does a Netty server know when a client is disconnected?

I am building a Netty server that needs to satisfy the following conditions:
When the server receives a packet from a client, it performs transaction process A.
After the transaction finishes, if the client is still connected, the server sends a return message back to it. If not, it performs rollback process B.
My problem is that at the moment the server sends the reply, it does not know whether the client is still connected.
I've tried the following code to check the connection before sending the message. However, the write always succeeds even though the client has already closed its socket. It only fails when the client process is forcibly killed (e.g. with Ctrl+C).
final ChannelFuture cf = inboundChannel.writeAndFlush(resCodec);
cf.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        if (future.isSuccess()) {
            inboundChannel.close();
        } else {
            // do Rollback Process B here
        }
    }
});
I suspect this is because of how TCP works. When the client disconnects gracefully, only a FIN is sent to the server, so the server still considers the writeAndFlush successful even when the client is gone.
So I tried the following checks as well, but they have the same result (they always return true):
if (inboundChannel.isActive()) {
    inboundChannel.writeAndFlush(msg);
} else {
    // do Rollback B
}
// Similar code using inboundChannel.isOpen(), inboundChannel.isWritable()
Neither a channelInactive event nor a 'Connection reset by peer' exception occurs in my case.
This is the relevant part of the Netty test client code I used:
public void channelActive(ChannelHandlerContext ctx) {
    ctx.writeAndFlush(message).addListener(ChannelFutureListener.CLOSE);
}
How can I detect the disconnection at the moment I want to reply?

Maybe you should override the method below and see whether control reaches it when the channel is closed.
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
    // write cleanup code
}
I don't think it's possible to track whether the client is connected or not in Netty, because there is a layer of abstraction between Netty and the client.

Related

vertx timeout if async result is failed

I am seeing a timeout in the browser when the server-side service ends in a failed result. Everything works fine if the service call succeeds but it seems as though the browser never receives a response if the call fails.
My service passes a result handler to a DAO containing the following code:
final SQLConnection conn = ar.result();
conn.updateWithParams(INSERT_SQL, params, insertAsyncResult -> {
    if (insertAsyncResult.failed()) {
        conn.close();
        resultHandler.handle(ServiceException.fail(1, "TODO"));
    } else {
        resultHandler.handle(Future.succeededFuture());
    }
});
I'm not sure where to go from here. How do I debug what the framework is sending back to the client?
The problem was that I needed to register a ServiceExceptionMessageCodec in an intermediate Verticle, one that was sitting between the browser and the Verticle that was performing the database operation.
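For reference, a rough sketch of that registration, assuming the vertx-service-proxy classes ServiceException and ServiceExceptionMessageCodec and an intermediate verticle with access to the event bus (the verticle name is made up):
import io.vertx.core.AbstractVerticle;
import io.vertx.serviceproxy.ServiceException;
import io.vertx.serviceproxy.ServiceExceptionMessageCodec;

public class IntermediateVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // Without this codec the failed reply cannot be decoded on this side of the event bus,
        // so the browser-facing handler never completes and the request appears to time out.
        vertx.eventBus().registerDefaultCodec(ServiceException.class,
                new ServiceExceptionMessageCodec());
    }
}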

AndroidAsync TCP -- proper way to detect socket is no longer available using write?

I am wondering what the proper way is, on the client side, to check that a TCP socket opened with the AndroidAsync library is no longer available. This is for the case where the (plain TCP, non-AndroidAsync) server did not explicitly close the socket (so the ClosedCallback is not invoked), for instance when the server has been cold rebooted.
It seems that the DataCallback is available only when the server sends back data and can't be used to receive error messages.
It seems to me also that
Util.writeAll(socket, (byte[]) payload.array(), new CompletedCallback() {
    @Override
    public void onCompleted(Exception ex) {
        if (ex != null) {
            Log.e(TAG, "write failed with ex message= " + ex.getMessage());
            throw new RuntimeException(ex);
        }
    }
});
does not throw an Exception either.
So at this point I'm not sure how to detect the socket is no longer available even if the client periodically writes data to it.
It will throw an IOException if you send enough data or call it enough times. It won't throw on the first call due to TCP buffering at both ends.
I ended up implementing a sort of periodic "ping" check. The client opens and immediately closes a TCP connection to the very same port using a plain Java NIO socket call (not using AndroidAsync). If that connection attempt times out, the connection is assumed to be lost, and a recovery attempt is made once it succeeds again. This periodic check is performed only when the app has focus or has just been awakened. It is clearly a far-from-ideal workaround, but it seems to work for my purposes.
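For illustration, a rough sketch of that kind of check; the answer above mentions a plain Java NIO socket call, while this sketch uses a plain blocking java.net.Socket with a connect timeout for brevity. Host, port, and timeout values are placeholders, not taken from the question:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class ReachabilityCheck {

    public static boolean isServerReachable(String host, int port, int timeoutMillis) {
        try (Socket probe = new Socket()) {
            // Open and immediately close a throwaway connection to the same port.
            probe.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            // Timeout or refusal: assume the connection to the real server has been lost.
            return false;
        }
    }
}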
You could use the closed/end callbacks
socket.setClosedCallback(new CompletedCallback() {
    @Override
    public void onCompleted(Exception ex) {
        // handle the connection being closed here
    }
});
socket.setEndCallback(new CompletedCallback() {
    @Override
    public void onCompleted(Exception ex) {
        // handle end-of-stream here
    }
});

keep all connected clients' ip in netty

My TCP server uses Netty. The situation is: when a client connects to the server, I save the client's IP in a global variable (such as a Map); when the client disconnects, I remove the IP from the map.
I use the channelConnected() and channelDisconnected() methods in SimpleChannelHandler. My problem is that sometimes channelDisconnected() does not catch the event when I would expect the client to be disconnected (maybe the computer was shut down, or the client process was killed, or some other situation...). Can you give me some suggestions?
Just use a DefaultChannelGroup, which automatically removes a Channel when it is closed (a sketch follows after the snippet below).
Alternatively, you can register a ChannelFutureListener on the Channel's close future to remove it from your map.
Something like this:
channel.getCloseFuture().addListener(new ChannelFutureListener() {
    public void operationComplete(ChannelFuture f) {
        map.remove(f.getChannel());
    }
});
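For the DefaultChannelGroup approach mentioned first, a minimal sketch using the Netty 3 API (to match the SimpleChannelHandler from the question); the handler and group names are made up:
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.SimpleChannelHandler;
import org.jboss.netty.channel.group.ChannelGroup;
import org.jboss.netty.channel.group.DefaultChannelGroup;

public class ClientTrackingHandler extends SimpleChannelHandler {

    private static final ChannelGroup CLIENTS = new DefaultChannelGroup("connected-clients");

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        // Closed channels are removed from the group automatically, so no matching
        // remove call is needed in channelDisconnected().
        CLIENTS.add(e.getChannel());
        super.channelConnected(ctx, e);
    }
}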

What is the best approach to attempt the same RPC call multiple times

What is the best way to retry the same RPC call when it fails?
Just as an example: if the RPC fails because of the network connection, the failure is caught in onFailure(Throwable caught).
At that point the same RPC should be called again to check the network connection. There should be at most 3 attempts, and only then should the user see a message like "Network is not established".
How can I achieve this?
One thought was to call the same RPC again inside onFailure, but then each attempt becomes a different request; I want the same logical request to be retried up to three times, and I don't know a good solution for this.
Thanks in advance.
Use a counter in your AsyncCallback implementation. I also recommend using a timer to wait before requesting the server again.
This code should work:
final GreetingServiceAsync greetingService = GWT.create(GreetingService.class);
final String textToServer = "foo";
greetingService.greetServer(textToServer, new AsyncCallback<String>() {
    int tries = 0;

    public void onSuccess(String result) {
        // Do something
    }

    public void onFailure(Throwable caught) {
        if (tries++ < 3) {
            // Keep a reference to this callback; inside the Timer, "this" would refer to the Timer.
            final AsyncCallback<String> callback = this;
            // Optionally enclose the new call in a timer to wait some time before retrying
            new Timer() {
                @Override
                public void run() {
                    greetingService.greetServer(textToServer, callback);
                }
            }.schedule(4000);
        }
    }
});
@Jens gave this answer on Google Groups.
You could transparently handle this for all your requests of a given GWT-RPC interface by using a custom RpcRequestBuilder. This custom RpcRequestBuilder would make 3 request attempts and, if all 3 fail, call the onFailure() method.
MyRemoteServiceAsync service = GWT.create(MyRemoteService.class);
((ServiceDefTarget) service).setRpcRequestBuilder(new RetryThreeTimesRequestBuilder());
The custom RequestBuilder could also fire a "NetworkFailureEvent" on the eventBus if multiple application components may be interested in that information. For example, you could overlay the whole app with a dark screen and periodically send ping requests to your server until the network comes back online. There is also the onLine HTML5 property you can check, but it's not 100% reliable (https://developer.mozilla.org/en-US/docs/Web/API/window.navigator.onLine).

Netty concurrency and "Connection reset by peer"

I've built the following simple server, and I'm stress testing it using ab.
If I run ab with 3000 total requests (300 concurrent), it works. If I run it again, it shows me:
apr_socket_connect(): Connection reset by peer (54)
If, after this error, I try to make a single request with curl without restarting the server, it works. If I run ab again, it shows the same error.
It seems that it can't handle that many concurrent connections. Below is the code:
public static void main(String[] args) throws Exception {
    ServerBootstrap bootstrap = new ServerBootstrap(
            new NioServerSocketChannelFactory(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool()));

    bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
        @Override
        public ChannelPipeline getPipeline() throws Exception {
            return Channels.pipeline(new StringEncoder(), new MyServerHandler());
        }
    });

    bootstrap.bind(new InetSocketAddress(9090));
    System.out.println("Running");
}
Here is the handler:
public class MyServerHandler extends SimpleChannelUpstreamHandler {

    private static AtomicLong request = new AtomicLong();

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        ChannelFuture channelFuture = e.getChannel().write("This is request #" + request.incrementAndGet() + "\n");
        channelFuture.addListener(ChannelFutureListener.CLOSE);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
        System.out.println(e.getCause());
        e.getChannel().close();
    }
}
As you see it's very simple, it just shows the total number of requests handled.
Any tips?
Thanks
'Connection reset by peer' usually means you have written to a connection that has already been closed by the other end. In other words, an application protocol error. You get the error itself on a subsequent read or write.
I don't immediately see anything wrong, but you could try the following to get more information:
Override channelClosed and output something so that you're 100% sure that Netty is at least trying to close the channel.
Use jvisualvm to have a look at the JVM running your server; you should be able to see the threads and whether they're active or not.
Write something to System.out server-side on channelConnected so you know that your connections have made it that far (especially for the 2nd run). A rough sketch of these two logging points follows below.
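As a sketch of the first and third suggestions (Netty 3 API), here is a variant of the handler from the question that logs on channelConnected and channelClosed; it only makes the channel lifecycle visible and is not a verified fix:
import java.util.concurrent.atomic.AtomicLong;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class LoggingServerHandler extends SimpleChannelUpstreamHandler {

    private static final AtomicLong request = new AtomicLong();

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        System.out.println("Connected: " + e.getChannel().getRemoteAddress());
        ChannelFuture f = e.getChannel().write("This is request #" + request.incrementAndGet() + "\n");
        f.addListener(ChannelFutureListener.CLOSE);
    }

    @Override
    public void channelClosed(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        System.out.println("Closed: " + e.getChannel().getRemoteAddress());
    }
}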
When you run ab the second time, is there an error for every connection attempt, or just for some?
What I find odd is that it seems to work the first time, but not thereafter. Keep in mind that this may not be a Netty issue, or even a JVM issue, but rather the OS somehow limiting the connection attempts.
I have done some tests with my own Netty test server and found that starting large batches of concurrent connections produces an unpredictable outcome (most connect, some fail, but always in a different ratio). I haven't figured out why that is yet, but I suspect it is my OS refusing the connections rather than Netty.
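If the OS is indeed dropping connection attempts, one thing that might be worth checking (an assumption, not something verified for this case) is the accept backlog of the server socket; with the Netty 3 ServerBootstrap from the question it can be raised via an option:
// Hypothetical tweak: a larger listen backlog lets bursts of connects from ab queue up instead
// of being reset; the effective value is still capped by the OS (e.g. net.core.somaxconn on Linux).
bootstrap.setOption("backlog", 1024);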