I am wondering how it is possible to avoid one socket connection per thread in Scala. I have thought a lot about it, but I always end up with some code that listens for incoming data on a separate thread for each client connection.
The problem is that I want to develop an application which should simultaneously handle perhaps a couple of thousand connections. However, I of course don't want to create a thread for each connection, because of the lack of scalability and the cost of context switching.
What would be the "right" way to do this? In my world it should be possible to have one actor for each connection without having to block one thread per actor.
In the book "Programming Scala" the authors used a library called naggati which provides a framework that combines NIO and actors, http://programming-scala.labs.oreilly.com/ch09.html.
I have an application that mixes actors with non-blocking sockets (i.e. NIO). The way I have done this is to have a dedicated IO thread, which sends messages to actors (in much the same way it would delegate work to a thread pool in a Java system) using the reactor pattern.
Obviously, using the old blocking sockets you are restricted to one thread per connection. An actor could handle this, but of course it places a restriction on the number of connections that can be handled simultaneously.
In the case of a single IO thread, this is a bottleneck in theory but not much in practice (in our observations) as the IO thread is doing computationally non-intensive work. There are plenty of good discussions to be found on the NIO reactor pattern.
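In rough outline, that IO thread looks something like the sketch below (the `ReactorLoop` and `dispatch` names are mine, just for illustration; a real version also needs writes, framing and error handling):

```scala
import java.net.InetSocketAddress
import java.nio.ByteBuffer
import java.nio.channels.{SelectionKey, Selector, ServerSocketChannel, SocketChannel}

// One dedicated IO thread multiplexes all connections via a Selector and hands
// each completed read off to whoever processes it (e.g. an actor per connection).
object ReactorLoop {
  def run(port: Int, dispatch: (SocketChannel, Array[Byte]) => Unit): Unit = {
    val selector = Selector.open()
    val server   = ServerSocketChannel.open()
    server.configureBlocking(false)
    server.bind(new InetSocketAddress(port))
    server.register(selector, SelectionKey.OP_ACCEPT)

    val buffer = ByteBuffer.allocate(4096)
    while (true) {
      selector.select()                        // blocks only this single IO thread
      val keys = selector.selectedKeys().iterator()
      while (keys.hasNext) {
        val key = keys.next(); keys.remove()
        if (key.isAcceptable) {
          val client = server.accept()
          client.configureBlocking(false)
          client.register(selector, SelectionKey.OP_READ)
        } else if (key.isReadable) {
          val channel = key.channel().asInstanceOf[SocketChannel]
          buffer.clear()
          if (channel.read(buffer) == -1) { key.cancel(); channel.close() }
          else {
            buffer.flip()
            val bytes = new Array[Byte](buffer.remaining()); buffer.get(bytes)
            dispatch(channel, bytes)           // e.g. connectionActor(channel) ! bytes
          }
        }
      }
    }
  }
}
```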
Related
We know Akka is one implementation of the actor pattern. Without Akka, I usually implement a simple actor pattern using a ThreadPool + BlockingQueue. A message is offered to the queue, and the workers (actors) take messages from the queue and do what they should do. Of course, this kind of implementation lives in just ONE process.
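Something like this sketch (the names are mine, just to illustrate what I mean by ThreadPool + BlockingQueue):

```scala
import java.util.concurrent.{Executors, LinkedBlockingQueue}

// One shared mailbox (BlockingQueue) drained by a small pool of worker threads.
object QueueActor {
  private val mailbox = new LinkedBlockingQueue[String]()
  private val pool    = Executors.newFixedThreadPool(4)

  // "send" a message: just put it on the queue, the caller doesn't wait for processing
  def !(msg: String): Unit = mailbox.put(msg)

  // start the workers; each one blocks on take() until a message arrives
  def start(): Unit =
    (1 to 4).foreach { _ =>
      pool.execute { () =>
        while (true) {
          val msg = mailbox.take()
          println(s"processing '$msg' on ${Thread.currentThread().getName}")
        }
      }
    }
}

object QueueActorDemo extends App {
  QueueActor.start()
  QueueActor ! "hello"
}
```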
So, restricting the discussion to a single process:
What's the essential difference between the two (Akka vs. ThreadPool+BlockingQueue)?
Moreover, what's the difference between the actor pattern and the producer-consumer model?
The actor model is indeed quite similar to the producer-consumer model (P-C).
However, if you use a blocking queue with P-C, your application won't be completely non-blocking and asynchronous. The promise of the actor model and Akka is that all messages are sent asynchronously and don't block the sender.
Another aspect is that managing these queues gets quite cumbersome once you have many consumers and producers. With actors you simply send a message and don't have to think about these low-level details. Under the hood, Akka keeps a message queue (a.k.a. mailbox) per actor, with a dispatcher assigning actors to the thread pool to process those messages.
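For comparison, the equivalent in Akka classic collapses to something like this minimal sketch (the `Worker`/`Demo` names are placeholders):

```scala
import akka.actor.{Actor, ActorSystem, Props}

// Each actor has its own mailbox; the dispatcher schedules the actor on the
// configured thread pool whenever there are messages for it to process.
class Worker extends Actor {
  def receive: Receive = {
    case msg: String => println(s"got '$msg' on ${Thread.currentThread().getName}")
  }
}

object Demo extends App {
  val system = ActorSystem("demo")
  val worker = system.actorOf(Props[Worker](), "worker")
  worker ! "hello" // asynchronous: the sender never blocks on this call
}
```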
It's much easier to use Akka to achieve a highly performant and resilient application than to code it yourself. You get fault tolerance, resource management, location transparency, routing, distributed and async processing, and hierarchical supervision out of the box. Not to mention other frameworks and libraries leveraging these features to give you even more (Reactive Streams, Akka HTTP, etc.). There are lots of patterns already developed for you there, so why bother with your own?
I need to build a TCP server using C# .NET 4.5+. It must be capable of comfortably handling at least 3,000 connected clients that will each send messages every 10 seconds, with a message size of 250 to 500 bytes.
The data will be offloaded to another process or queue for batch processing and logging.
I also need to be able to select an existing client to send and receive messages (greater than 500 bytes) from within a Windows Forms application.
I have not built an application like this before so my knowledge is based on the various questions, examples and documentation that I have found online.
My conclusion is:
non-blocking async is the way to go. Stay away from creating multiple threads and blocking IO.
SocketAsyncEventArgs - Is complex and really only needed for very large systems, BTW what constitutes a very large system? :-)
The BeginXXX/EndXXX methods (APM) will suffice.
Using TAP I can simplify 3. by using Task.Factory.FromAsync, but it only produces the same outcome.
Use a global collection to keep track of the connected TCP clients.
What I am unsure about:
Should I use a ManualResetEvent when interacting with the TCP client collection? I presume the async events will need to lock access to this collection.
Best way to detect a disconnected client after I have called BeginReceive. I've found the call is stuck waiting for a response so this needs to be cleaned up.
Sending messages to a specific TCP client. I'm thinking of a method on a custom TCP session class to send a message. Again, in an async model, would I need to create a timer-based process that inspects a message queue, or would I create an event on a TCP session class that has access to the TcpClient and associated stream? Really interested in opinions here.
I'd like to use a single thread for the entire service and use non-blocking principles within; is there anything I should be mindful of, especially in the context of (1), ManualResetEvent, etc.?
Thank you for reading. I am keen to hear constructive thoughts and/or links to best practices/examples. It's been a while since I've coded in C#, so apologies if some of my questions are obvious. Tasks and async/await are new to me! :-)
I need to build a TCP server using C# .NET 4.5+
Well, the first thing to determine is whether it has to be bare-bones TCP/IP. If you possibly can, write one that uses a higher-level abstraction, like SignalR or WebAPI. If you can write one using WebSockets (SignalR), then do that and never look back.
Your conclusions sound pretty good. Just a few notes:
SocketAsyncEventArgs - Is complex and really only needed for very large systems, BTW what constitutes a very large system? :-)
It's not so much a "large" system in the terms of number of connections. It's more a question of how much traffic is in the system - the number of reads/writes per second.
The only thing that SocketAsyncEventArgs does is make your I/O structures reusable. The Begin*/End* (APM) APIs will create a new IAsyncResult for each I/O operation, and this can cause pressure on the garbage collector. SocketAsyncEventArgs is essentially the same as IAsyncResult, only it's reusable. Note that there are some examples on the 'net that use the SocketAsyncEventArgs APIs without reusing the SocketAsyncEventArgs structures, which is completely ridiculous.
And there are no hard guidelines here: heavier hardware will be able to use the APM APIs for much more traffic. As a general rule, you should build a bare-bones APM server and load test it first, and only move to SAEA if it doesn't work on your target server's hardware.
On to the questions:
Should I use a ManualResetEvent when interacting with the TCP client collection? I presume the async events will need to lock access to this collection.
If you're using TAP-based wrappers, then await will resume on a captured context by default. I explain this in my blog post on async/await.
There are a couple of approaches you can take here. I have successfully written a reliable and performant single-threaded TCP/IP server; the equivalent for modern code would be to use something like my AsyncContextThread class. It provides a context that will cause await to resume on that same thread by default.
The nice thing about single-threaded servers is that there's only one thread, so no synchronization or coordination is necessary. However, I'm not sure how well a single-threaded server would scale. You may want to give that a try and see how much load it can take.
If you do find you need multiple threads, then you can just use async methods on the thread pool; await will not have a captured context and so will resume on a thread pool thread. In this case, yes, you'd need to coordinate access to any shared data structures including your TCP client collection.
Note that SignalR will handle all of this for you. :)
Best way to detect a disconnected client after I have called BeginReceive. I've found the call is stuck waiting for a response so this needs to be cleaned up.
This is the half-open problem, which I discuss in detail on my blog. The best way (IMO) to solve this is to periodically send a "noop" keepalive message to each client.
If modifying the protocol isn't possible, then the next-best solution is to just close the connection after a no-communication timeout. This is how HTTP "persistent"/"keep-alive" connections decide to close. There's another possible solution (changing the keepalive packet settings on the socket), but it's not as easy (requires P/Invoke) and has other problems (not always respected by routers, not supported by all OS TCP/IP stacks, etc.).
Oh, and SignalR will handle this for you. :)
Sending messages to a specific TCP client. I'm thinking of a method on a custom TCP session class to send a message. Again, in an async model, would I need to create a timer-based process that inspects a message queue, or would I create an event on a TCP session class that has access to the TcpClient and associated stream? Really interested in opinions here.
If your server can send messages to any client (i.e., it's not just a request/response protocol; any part of the server can send messages to any client without the client requesting an update), then yes, you'll need a proper queue of outgoing requests because you can't (reliably) issue multiple concurrent writes on a socket. I wouldn't have the consumer be timer-based, though; there are async-compatible producer/consumer queues available (like BufferBlock<T> from TPL Dataflow, and it's not that hard to write one if you have async-compatible locks and condition variables).
Oh, and SignalR will handle this for you. :)
I'd like to use a single thread for the entire service and use non-blocking principles within; is there anything I should be mindful of, especially in the context of (1), ManualResetEvent, etc.?
If your entire service is single-threaded, then you shouldn't need any coordination primitives at all. However, if you do use the thread pool instead of syncing back to the main thread (for scalability reasons), then you will need to coordinate. I have a coordination primitives library that you may find useful because its types have both synchronous and asynchronous APIs. This allows, e.g., one method to block on a lock while another method wants to asynchronously block on a lock.
You may have noticed a recurring theme around SignalR. Use it if you possibly can! If you have to write a bare-bones TCP/IP server and can't use SignalR, then take your initial time estimate and triple it. Seriously. Then you can get started down the path of painful TCP with my TCP/IP FAQ blog series.
Reading the documentation about the Play Framework and ReactiveMongo leads me to believe that ReactiveMongo works in such a way that it uses few threads and never blocks.
However, it seems that the communication from the Play application to the Mongo server would have to happen on some thread somewhere. How is this implemented? Links to the source code for Play, ReactiveMongo, Akka, etc. would also be very appreciated.
The Play Framework includes some documentation about this on this page about thread pools. It starts off:
Play framework is, from the bottom up, an asynchronous web framework. Streams are handled asynchronously using iteratees. Thread pools in Play are tuned to use fewer threads than in traditional web frameworks, since IO in play-core never blocks.
It then talks a little bit about ReactiveMongo:
The most common place that a typical Play application will block is when it’s talking to a database. Unfortunately, none of the major databases provide asynchronous database drivers for the JVM, so for most databases, your only option is to use blocking IO. A notable exception to this is ReactiveMongo, a driver for MongoDB that uses Play’s Iteratee library to talk to MongoDB.
Following is a note about using Futures:
Note that you may be tempted to therefore wrap your blocking code in Futures. This does not make it non blocking, it just means the blocking will happen in a different thread. You still need to make sure that the thread pool that you are using there has enough threads to handle the blocking.
There is a similar note in the Play documentation on the page Handling Asynchronous Results:
You can’t magically turn synchronous IO into asynchronous by wrapping it in a Future. If you can’t change the application’s architecture to avoid blocking operations, at some point that operation will have to be executed, and that thread is going to block. So in addition to enclosing the operation in a Future, it’s necessary to configure it to run in a separate execution context that has been configured with enough threads to deal with the expected concurrency.
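If I understand that note correctly, it describes something like the following sketch (my own example; `blockingQuery` and the pool size are placeholders):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

object BlockingInAFuture {
  // A dedicated pool sized for the expected number of concurrent blocking calls.
  // The call still blocks a thread, it just blocks one of *these* threads
  // instead of one from Play's default pool.
  val blockingEc: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newFixedThreadPool(20))

  def blockingQuery(): String = { Thread.sleep(100); "row" } // stands in for a blocking JDBC call

  val result: Future[String] = Future(blockingQuery())(blockingEc)
}
```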
The documentation seems to be saying that ReactiveMongo is non-blocking, so you don't have to worry about it eating up a lot of the threads in your thread pool. But ReactiveMongo has to communicate with the Mongo server somewhere.
How is this communication implemented so that Mongo doesn't use up threads from Play's default thread pool?
Once again, links to the specific files in Play, ReactiveMongo, Akka, etc, would be very appreciated.
Yes, indeed, you still need to use threads to perform any kind of work, including communication with the database. What's important is how exactly this communication happens.
ReactiveMongo "does not use threads" in a sense that it does not use blocking I/O. Usual Java I/O facilities like java.io.InputStream are blocking; this means that reading from such an InputStream or writing to OutputStream blocks the thread until the "other side" provides the required data or is ready to accept it. For network communication this means that threads will be blocked.
However, Java provides the NIO API, which supports non-blocking and asynchronous I/O. I don't want to get into its details right now, but the basic idea, naturally, is that non-blocking I/O lets threads that need to exchange data with the outside world avoid blocking: for example, such a thread can poll the data source to check whether data is available, and if there is none, it returns to the thread pool and can be used for other tasks. Of course, down there these facilities are provided by the underlying OS.
The exact implementation details of non-blocking I/O are usually hidden inside high-level libraries like Netty, because the raw API is not at all nice to use. Netty (which is exactly the library ReactiveMongo uses), for example, provides a nice asynchronous, callback-like API which is really easy to use but is also powerful and expressive enough to allow building complex I/O-heavy applications with high throughput.
So, ReactiveMongo uses Netty to talk with Mongo database server, and because Netty is an implementation of asynchronous network I/O, ReactiveMongo really does not need to block threads for a long time.
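To make "asynchronous callback-like API" concrete, here is a rough sketch of what a non-blocking connect looks like with Netty 4 (placeholder host/port and names; this is not ReactiveMongo's actual code):

```scala
import io.netty.bootstrap.Bootstrap
import io.netty.channel.{ChannelFuture, ChannelFutureListener, ChannelInitializer}
import io.netty.channel.nio.NioEventLoopGroup
import io.netty.channel.socket.SocketChannel
import io.netty.channel.socket.nio.NioSocketChannel

object NettyConnectSketch extends App {
  val group = new NioEventLoopGroup() // a small, shared pool of event-loop threads

  val bootstrap = new Bootstrap()
    .group(group)
    .channel(classOf[NioSocketChannel])
    .handler(new ChannelInitializer[SocketChannel] {
      override def initChannel(ch: SocketChannel): Unit = () // real code adds protocol handlers here
    })

  // connect() returns immediately; no thread sits blocked waiting for the TCP handshake
  bootstrap.connect("localhost", 27017).addListener(new ChannelFutureListener {
    override def operationComplete(f: ChannelFuture): Unit =
      if (f.isSuccess) println("connected") // invoked later, on an event-loop thread
      else f.cause().printStackTrace()
  })
}
```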
I am trying to understand implementations/options for server-side Websocket endpoints - particularly in Perl using PSGI/Plack and I have a question: Why are all server-side websocket implementations based around event-driven PSGI servers (Twiggy, Tatsumaki, etc.)?
I get that WebSocket communication is asynchronous, but a non-event-driven PSGI server (say Starman) could spawn an asynchronous listener to handle the WebSocket side of things. I have seen (but not understood) PHP implementations of WebSocket servers, so why can't the same be done with PSGI without having to change the server to an event-driven one?
The underlying network logic for dealing with sockets depends on the platform, the OS, and the particular software implementation.
The three most common methods are:
polling - constantly asking, in a blocking manner, whether the socket has data. This method is bad, as it blocks execution of the main thread for as long as it waits for data.
thread per socket - each new connection gets its own thread, and the blocking reads on each socket happen within that thread, so the main thread's logic is not blocked. This method is bad because creating a thread for each connection is too expensive in memory; a thread can cost around 1 MB of RAM depending on the OS and other criteria.
async - uses system features to notify your process when there is something to handle, so you can react once your app is ready (in the case of a single-threaded app) or even react straight away in a separate thread. This method is efficient: it saves RAM and allows your app to work without waiting for or repeatedly asking about data. It utilises existing functionality that most OSes and platforms provide.
Taking this into account, you can indeed create a single-process, non-event-driven way of dealing with socket traffic, but as shown above it is not efficient at all. That is why fully async models dominate today, and why most languages and platforms support that paradigm.
I'm doing some Akka lately and wonder: can I do blocking I/O in Akka without getting into big trouble? Let's say we have an actor which does some blocking I/O because it uses a legacy library, or for any other reason. Couldn't I just use a special dispatcher with a reasonably sized thread pool for those actors, and do blocking I/O without blocking all other actors, because they run on a different dispatcher?
What are the downsides of this? And what would be the optimal way to call a 3rd party HTTP-API from an actor?
Doing blocking IO is a bad idea in general, and in a reactive multithreaded environment in particular, so your first step is to try to avoid it altogether; that means looking into using AsyncHttpClient or HttpAsyncClient.
If that does not work, you can at least mitigate the risks by giving the blocking actors their own threads. This will of course be costly and you still risk filling up their mailboxes, but such is the choice of using blocking IO.
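A minimal sketch of that approach (the dispatcher name, pool size and actor are placeholders; the dispatcher settings are standard Akka configuration keys):

```scala
import akka.actor.{Actor, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

// Placeholder for an actor wrapping a blocking legacy library / HTTP client.
class LegacyApiActor extends Actor {
  def receive: Receive = {
    case url: String => sender() ! callBlockingApi(url)
  }
  private def callBlockingApi(url: String): String = { Thread.sleep(500); s"response from $url" }
}

object BlockingIoDemo extends App {
  // A dedicated, bounded pool: blocking calls tie up these 16 threads only,
  // never the default dispatcher the rest of your actors run on.
  val config = ConfigFactory.parseString(
    """
    blocking-io-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor { fixed-pool-size = 16 }
      throughput = 1
    }
    """)

  val system  = ActorSystem("demo", config.withFallback(ConfigFactory.load()))
  val blocker = system.actorOf(
    Props[LegacyApiActor]().withDispatcher("blocking-io-dispatcher"), "blocker")
  blocker ! "http://example.com/legacy"
}
```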
You also might want to look at the IO Actor module for a more raw interface to network IO.
Hope any of this helps,
Cheers,
√