Netty: when does the writeAndFlush ChannelFuture listener callback get executed?

I am new to netty and trying to understand how the channel future for writeAndFlush works. Consider the following code running on a netty client:
final ChannelFuture writeFuture = abacaChannel.writeAndFlush("Test");
writeFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (writeFuture.isSuccess()) {
            LOGGER.debug("Write successful");
        } else {
            LOGGER.error("Error writing message to Abaca host");
        }
    }
});
When does this writeFuture operationComplete callback get executed?
1. After Netty hands the data over to the OS send buffers, or
2. After the OS writes the data to the network socket, or
3. After the data is actually received by the server?
TIA

1. After Netty hands the data over to the OS send buffers.
The listener is notified after the data has been removed from the ChannelOutboundBuffer (Netty's send buffer), i.e. once it has been handed over to the OS.
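For illustration, here is the questioner's snippet again with comments spelling out what the callback does and does not guarantee; channel and LOGGER are assumed to be an active Netty Channel and an SLF4J logger:

ChannelFuture writeFuture = channel.writeAndFlush("Test");
writeFuture.addListener((ChannelFutureListener) future -> {
    if (future.isSuccess()) {
        // The message has left Netty's ChannelOutboundBuffer and has been handed
        // to the OS send buffer; it may still be in flight to the server.
        LOGGER.debug("Write successful");
    } else {
        // The write failed, e.g. the channel was closed before the flush completed.
        LOGGER.error("Error writing message", future.cause());
    }
});

Note that there is no notification for "the server received the data"; if you need that, the server has to send an application-level acknowledgment.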

Why can't I read bytes from the TcpClient in C#?

Here is the error I am getting:
Unable to read data from the transport connection: An established connection was aborted by the software in your host machine.
Here is how I start my TcpClient:
public static async void Start()
{
    TcpListener server = null;
    try
    {
        server = new TcpListener(IPAddress.Loopback, 13000);
        server.Start();
        var client = await server.AcceptTcpClientAsync();
        var stream = client.GetStream();
        var bytes = Convert.FromBase64String("ABCD");
        await stream.WriteAsync(bytes, 0, bytes.Length);
        client.Close();
    }
    catch (Exception e)
    {
        throw;
    }
    finally
    {
        if (server != null)
        {
            server.Stop();
        }
    }
}
Here is how I run a request to the TcpClient:
try {
    var response = (new HttpClient()).GetByteArrayAsync("http://localhost:13000").Result;
    return Convert.ToBase64String(response);
} catch (Exception e) {
    throw;
}
The return Convert.ToBase64String(response); line is never reached, while I see the error message quoted above inside the Exception e when I hit a breakpoint on the throw line.
Also, during debugging the Start() method completes just fine. I.e. it starts, then waits for a request, gets a request, writes to the TcpClient and at the end runs the server.Stop(); command.
I am expecting my code to work, because I took it from the official documentation over here and modified it.
I tried to check out a few resources which would tackle my exception, but none of them helped.
E.g. I tried to use this question.
The first answer tells nothing useful, really; it just plays around with words and in the end states that one can do nothing about the exception (please correct me if I am missing a point in the answer).
And the second answer describes a problem that is impossible in my case, because I am sure there is nothing else running on port 13000.
Your client code is using HttpClient, which sends an HTTP request and expects an HTTP response. But your server is not an HTTP server, it is just a plain TCP server, so the client is likely to fail and forcibly close the connection when it doesn't receive a properly formatted HTTP response.
The "official documentation" whose example you modified is not using HttpClient at all, it is using TcpClient instead.
If you want to use HttpClient in your client, then you should use HttpListener instead of TcpListener in your server.

JMeter - Force close a socket / wait until message received

I am opening a socket in JMeter (using Groovy in a JSR223 Sampler) and storing the message in a JMeter variable. This is the code:
SocketAddress inetSocketAddress = new InetSocketAddress(InetAddress.getByName("localhost"), 4801);
def server = new ServerSocket()
server.bind(inetSocketAddress)
while (!vars.get("caseId")) {
    server.accept { socket ->
        log.info('Someone is connected')
        socket.withStreams { input, output ->
            InputStreamReader isReader = new InputStreamReader(input);
            BufferedReader reader = new BufferedReader(isReader);
            StringBuffer sb = new StringBuffer();
            String str;
            while ((str = reader.readLine()) != null) {
                sb.append(str);
            }
            String finalStr = sb.toString()
            String caseId = finalStr.split("<caseId>")[1].split("</caseId>")[0]
            vars.put("caseId", caseId)
        }
        log.info("Connection processed")
    }
}
if (vars.get("caseId")) {
    try {
        server.close();
        vars.put("socketClose", true);
    }
    catch (Exception e) {
        log.info("Error in closing the socket: " + e.getMessage());
    }
}
Now, there is some delay between the first loop iteration being executed and the message being received on the port. The message is not received immediately, so the while loop runs again. Then the message is received and caseId is set. The code goes on to close the socket, because caseId is set, and that throws the error, because the socket is still waiting for the message. So is there a way to wait until the socket has received all the messages, so I can properly close it?
Or to just force close the socket so that JMeter won't throw any exception?
Or, when I execute the next component, say an If Controller in JMeter, can it wait until the variable socketClose is set to true? That way, instead of while loops inside the JSR223 Sampler, I could use multiple If Controllers in the JMeter thread.
This is how the ServerSocket.close() function works:
public void close()
throws IOException
Closes this socket. Any thread currently blocked in accept() will throw a SocketException.
I don't think there is a way "to wait until the socket has received all the messages" because a Socket is dumb as a rock: it can either listen for connections or be shut down.
Maybe you would be interested in the setSoTimeout() function?
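For example, a minimal sketch of what that could look like in the JSR223 Sampler (the port and timeout values are illustrative; Groovy accepts this Java-style code as-is):

ServerSocket server = new ServerSocket(4801);
// accept() will now give up after 5 seconds instead of blocking forever.
server.setSoTimeout(5000);
try {
    Socket socket = server.accept();
    // ... read the message and extract caseId as in the original script ...
    socket.close();
} catch (java.net.SocketTimeoutException e) {
    // No client connected within the timeout; decide whether to retry or stop.
} finally {
    server.close();
}

This way the script never blocks indefinitely in accept(), so closing the ServerSocket afterwards does not race with a pending accept() call.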
Also this line:
vars.put("socketClose",true)
is very suspicious, I think you need to change it either to:
vars.put("socketClose", "true")
or to
vars.putObject("socketClose",true)
as the JMeterVariables.put() function can accept only a String; see the Top 8 JMeter Java Classes You Should Be Using with Groovy article for more details.

How to dispatch incoming NetSocket handlers into different event loop threads?

I'm trying to use Vertx to implement a TCP server, accepting incoming connections and then handling different sockets. Since each socket can be handled independently, the handlers belonging to different sockets are supposed to run in different event loop threads concurrently.
According to Vert.x document,
Standard verticles are assigned an event loop thread when they are created and the start method is called with that event loop. When you call any other methods that takes a handler on a core API from an event loop then Vert.x will guarantee that those handlers, when called, will be executed on the same event loop.
I think this code snippet should print different thread names:
Vertx vertx = Vertx.vertx(); // The number of event loop threads is 2*core.
vertx.createNetServer().connectHandler(socket -> {
    vertx.deployVerticle(new AbstractVerticle() {
        @Override
        public void start() throws Exception {
            socket.handler(buffer -> {
                log.trace(socket.toString() + ": Socket Message");
                socket.close();
            });
        }
    });
}).listen(port);
But unfortunately, all handlers were located in the same thread.
23:59:42.359 [vert.x-eventloop-thread-1] TRACE Server - io.vertx.core.net.impl.NetSocketImpl@253fa4f2: Socket Message
23:59:42.364 [vert.x-eventloop-thread-1] TRACE Server - io.vertx.core.net.impl.NetSocketImpl@465f1533: Socket Message
23:59:42.365 [vert.x-eventloop-thread-1] TRACE Server - io.vertx.core.net.impl.NetSocketImpl@5ab8dac: Socket Message
23:59:42.366 [vert.x-eventloop-thread-1] TRACE Server - io.vertx.core.net.impl.NetSocketImpl@5fc72993: Socket Message
23:59:42.367 [vert.x-eventloop-thread-1] TRACE Server - io.vertx.core.net.impl.NetSocketImpl@38ee66d7: Socket Message
23:59:42.368 [vert.x-eventloop-thread-1] TRACE Server - io.vertx.core.net.impl.NetSocketImpl@6a60a74: Socket Message
23:59:42.369 [vert.x-eventloop-thread-1] TRACE Server - io.vertx.core.net.impl.NetSocketImpl@5f3921e1: Socket Message
23:59:42.370 [vert.x-eventloop-thread-1] TRACE Server - io.vertx.core.net.impl.NetSocketImpl@39d41024: Socket Message
... more than 100+ lines ...
A contrasting example is this echo server written in Boost.Asio: the handlers run in different event loop threads if a thread pool is used to execute io_service::run().
So, my question is how to run these handlers concurrently?
Actually, you are doing something entirely different from what you intend.
Each time you receive a connection on your socket, you launch a new actor (verticle).
The simplest way to prove that:
Vertx vertx = Vertx.vertx(); // The number of event loop threads is 2*core.
vertx.createHttpServer().requestHandler(request -> {
    vertx.deployVerticle(new AbstractVerticle() {
        String uuid = UUID.randomUUID().toString(); // Some random unique number
        @Override
        public void start() throws Exception {
            request.response().end(uuid + " " + Thread.currentThread().getName());
        }
    });
}).listen(8888);
vertx.setPeriodic(1000, r -> {
    System.out.println(vertx.deploymentIDs().size()); // Print verticle count every second
});
I'm using an HTTP server here just because it's easier to check in a browser.
As wrong as this approach may be, you'll still see that you do receive different threads:
fe931b18-89cc-4c6a-9d6a-8565bb1f1c12 vert.x-eventloop-thread-9
277330da-4df8-4e91-bd8f-82c0f62156d0 vert.x-eventloop-thread-11
bbd3207c-80a4-41d8-9be5-b40727badc84 vert.x-eventloop-thread-13
Now to how you should do it:
// We create 10 workers
for (int i = 0; i < 10; i++) {
    vertx.deployVerticle(new AbstractVerticle() {
        @Override
        public void start() {
            vertx.eventBus().consumer("processMessage", (request) -> {
                // Do something smart
                // Reply
                request.reply("I'm on thread " + Thread.currentThread().getName());
            });
        }
    });
}

// This is your handler
vertx.createHttpServer().requestHandler(request -> {
    // Only one server, which should dispatch events to workers as quickly as possible
    vertx.eventBus().send("processMessage", null, (response) -> {
        if (response.succeeded()) {
            request.response().end("Request :" + response.result().body().toString());
        }
        // Handle errors
    });
}).listen(8888);

vertx.setPeriodic(1000, r -> {
    System.out.println(vertx.deploymentIDs().size()); // Notice that the number of workers doesn't change
});
It's not possible to determine which event loop Vert.x will assign to each of your verticles without more details (number of cores of your test machines for example).
Anyway, it is not a good idea to deploy a verticle per incoming connection. Verticles are units of deployment in Vert.x. You would typically create one per "functionality".
Back to your use case, the purpose of event driven programming is precisely to avoid using a thread per connection. You can handle a lot of concurrent connections with a single event loop. If you have multiple cores on your machine then you can deploy multiple instances of your verticle to use them all (1 event loop per core).
int processors = Runtime.getRuntime().availableProcessors();
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(TCPServerVerticle.class.getName(), new DeploymentOptions().setInstances(processors));

public class TCPServerVerticle extends AbstractVerticle {
    @Override
    public void start(Future<Void> startFuture) throws Exception {
        vertx.createNetServer().connectHandler(socket -> {
            socket.handler(buffer -> {
                log.trace(socket.toString() + ": Socket Message");
                socket.close();
            });
        }).listen(port, ar -> {
            if (ar.succeeded()) {
                startFuture.complete();
            } else {
                startFuture.fail(ar.cause());
            }
        });
    }
}
With Vert.x TCP server sharing, the connect handlers will be called in a round-robin fashion.

UWP StreamSocket ping / network timer issues

I have a StreamSocket in UWP, and I send my messages like this, using a DataWriter object and its StoreAsync() method:
public static async Task<bool> SendNetworkMessage(NetworkMember member, NetworkMessage message)
{
    DataWriter writer = member.DataWriter;
    //Check that writer is not null
    if (writer != null)
    {
        try
        {
            //Serialize Message
            string stringToSend = SerializeObject<NetworkMessage>(message);
            //Send Message Length
            writer.WriteUInt32(writer.MeasureString(stringToSend));
            //Send Message
            writer.WriteString(stringToSend);
            await writer.StoreAsync();
            return true;
        }
        catch (Exception e)
        {
            Debug.WriteLine("DataWriter failed because of " + e.Message);
            Debug.WriteLine("");
            Disconnect(member);
            OnMemberDisconnectedEvent(member);
            return false;
        }
    }
    else { return false; }
}
All is well; the only problem is that I don't know if a connection went down.
Now I want to check my connection using a DispatcherTimer, like this:
private static async void NetworkTimer_Tick(object sender, object e)
{
    foreach (NetworkMember member in networkMemberCollection)
    {
        if (member.Connected == true && member.Disconnecting == false)
        {
            await SendNetworkMessage(member, new PingMessage());
        }
    }
}
However, this is causing timing issues, which lead to ObjectDisposedExceptions on the DataWriter. It seems that the DispatcherTimer thread cannot use the StreamSocket while I am sending a message from a different thread. My question is: how can I make sure the ping is sent each time, but that the SendNetworkMessage operations are done in order instead of overlapping?
Thanks
It seems that the DispatcherTimer thread cannot use the StreamSocket when I send a message from a different thread.
How can I make sure the Ping is sent each time but that SendNetworkMessage operations are done in order instead of overlapping?
It's possible, and I think your code using foreach and the await operation can ensure that the messages are sent in order.
the only problem is that I don't know if a connection went down.
If you want to know if the connection went down, you can refer to Handling WinRT StreamSocket disconnects (both server and client side).

Design choice for automatically reconnecting socket client

I'm working on a Windows Forms application in C#. I'm using a socket client which connects to a server in an asynchronous way. I would like the socket to try to reconnect to the server immediately if the connection is broken for any reason. What is the best design to approach the problem? Should I build a thread which continuously checks whether the connection has been lost and tries to reconnect to the server?
Here is the code of my XcomClient class which is handling the socket communication:
public void StartConnecting()
{
    socketClient.BeginConnect(this.remoteEP, new AsyncCallback(ConnectCallback), this.socketClient);
}

private void ConnectCallback(IAsyncResult ar)
{
    try
    {
        // Retrieve the socket from the state object.
        Socket client = (Socket)ar.AsyncState;
        // Complete the connection.
        client.EndConnect(ar);
        // Signal that the connection has been made.
        connectDone.Set();
        StartReceiving();
        NotifyClientStatusSubscribers(true);
    }
    catch (Exception e)
    {
        if (!this.socketClient.Connected)
            StartConnecting();
        else
        {
        }
    }
}

public void StartReceiving()
{
    StateObject state = new StateObject();
    state.workSocket = this.socketClient;
    socketClient.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(OnDataReceived), state);
}

private void OnDataReceived(IAsyncResult ar)
{
    try
    {
        StateObject state = (StateObject)ar.AsyncState;
        Socket client = state.workSocket;
        // Read data from the remote device.
        int iReadBytes = client.EndReceive(ar);
        if (iReadBytes > 0)
        {
            byte[] bytesReceived = new byte[iReadBytes];
            Buffer.BlockCopy(state.buffer, 0, bytesReceived, 0, iReadBytes);
            this.responseList.Enqueue(bytesReceived);
            StartReceiving();
            receiveDone.Set();
        }
        else
        {
            NotifyClientStatusSubscribers(false);
        }
    }
    catch (SocketException e)
    {
        NotifyClientStatusSubscribers(false);
    }
}
Today I try to detect a disconnection by checking the number of bytes received or by catching a socket exception.
If your application only receives data on a socket, then in most cases, you will never detect a broken connection. If you don't receive any data for a long time, you don't know if it's because the connection is broken or if the other end simply hasn't sent any data. You will, of course, detect (as EOF on the socket) connections closed by the other end in the normal fashion despite this.
In order to detect a broken connection, you need a keepalive. You need to either:
1. make the other end guarantee that it will send data on a set schedule, and time out and close the connection if you don't get it, or
2. send a probe to the other end once in a while. In this case the OS will take care of noticing a broken connection, and you will get an error reading the socket if it's broken, either promptly (connection reset by peer) or eventually (connection timed out).
Either way, you need a timer. Whether you implement the timer as an event in an event loop or as a thread that sleeps is up to you and the best solution probably depends on how the rest of your application is structured. If you have a main thread that runs an event loop then it's probably best to hook in to that.
You can also enable the TCP keepalives option on the socket, but an application-layer keepalive is generally considered more robust.
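For illustration only, here is a minimal sketch of both ingredients in Java (the question's code is C#, but the idea is identical there; the host, port and probe payload are placeholders):

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class KeepaliveSketch {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("example.com", 4801);   // placeholder host/port

        // Option 1: OS-level TCP keepalive. The kernel probes an idle connection
        // and a later read on the socket fails if the peer is gone.
        socket.setKeepAlive(true);

        // Option 2: application-level heartbeat. Send a small probe on a schedule;
        // a failed write (or a missing reply, if the protocol has one) means the
        // connection is broken and should be closed and re-established.
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        OutputStream out = socket.getOutputStream();
        timer.scheduleAtFixedRate(() -> {
            try {
                out.write(new byte[] {0});   // placeholder probe message
                out.flush();
            } catch (IOException e) {
                // Broken pipe / connection reset: tear down and reconnect here.
            }
        }, 10, 10, TimeUnit.SECONDS);
    }
}

The OS-level keepalive needs no protocol support, while the application-level probe is the more robust option mentioned above, but it requires the peer to tolerate (or reply to) the probe message.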