I'm writing a custom wrapper for native MSMQ in C#. Having looked at the System.Messaging code in ILSpy, I notice that this implementation does quite a lot to work around 'stale' queue handles. I don't handle this situation at present, and am wondering in what circumstances a handle can become 'stale'?
Thanks in advance.
A stale handle no longer points to a queue. For example, if the destination queue manager restarts, the queue handle will no longer be known to it and communication using that handle should fail.
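As a sketch of the kind of recovery a wrapper needs (shown here with System.Messaging types for brevity; a native wrapper would check for MQ_ERROR_STALE_HANDLE from its P/Invoke calls instead, and the queue path and retry policy below are purely illustrative):

```csharp
using System.Messaging;

class ResilientQueueSender
{
    private MessageQueue _queue = new MessageQueue(@".\private$\myQueue");

    public void Send(object body)
    {
        try
        {
            _queue.Send(body);
        }
        catch (MessageQueueException ex)
        {
            if (ex.MessageQueueErrorCode != MessageQueueErrorCode.StaleHandle)
                throw;

            // The handle no longer refers to a live queue (e.g. the queue manager
            // restarted), so drop it, open a fresh handle and retry once.
            _queue.Dispose();
            _queue = new MessageQueue(@".\private$\myQueue");
            _queue.Send(body);
        }
    }
}
```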
I need to build a TCP server using C# .NET 4.5+. It must be capable of comfortably handling at least 3,000 connected clients that will be sending messages every 10 seconds, with a message size of 250 to 500 bytes.
The data will be offloaded to another process or queue for batch processing and logging.
I also need to be able to select an existing client to send and receive messages (greater than 500 bytes) within a Windows Forms application.
I have not built an application like this before so my knowledge is based on the various questions, examples and documentation that I have found online.
My conclusions are:
1. Non-blocking async is the way to go. Stay away from creating multiple threads and blocking I/O.
2. SocketAsyncEventArgs is complex and really only needed for very large systems. BTW, what constitutes a very large system? :-)
3. BeginXXX methods will suffice (APM).
4. Using TAP I can simplify 3. by using Task.Factory.FromAsync, but it only produces the same outcome.
5. Use a global collection to keep track of the connected TCP clients.
What I am unsure about:
1. Should I use a ManualResetEvent when interacting with the TCP client collection? I presume the async events will need to lock access to this collection.
2. What's the best way to detect a disconnected client after I have called BeginReceive? I've found the call gets stuck waiting for a response, so this needs to be cleaned up.
3. Sending messages to a specific TCP client. I'm thinking of a function in a custom TCP session class to send a message. Again, in an async model, would I need to create a timer-based process that inspects a message queue, or would I create an event on a TCP session class that has access to the TcpClient and its associated stream? Really interested in opinions here.
4. I'd like to use a thread for the entire service and use non-blocking principles within; is there anything I should be mindful of, especially in the context of 1 (ManualResetEvent, etc.)?
Thank you for reading. I am keen to hear constructive thoughts and/or links to best practices/examples. It's been a while since I've coded in C#, so apologies if some of my questions are obvious. Tasks and async/await are new to me! :-)
I need to build a TCP server using C# .NET 4.5+
Well, the first thing to determine is whether it has to be bare-bones TCP/IP. If you possibly can, write one that uses a higher-level abstraction, like SignalR or WebAPI. If you can write one using WebSockets (SignalR), then do that and never look back.
Your conclusions sound pretty good. Just a few notes:
SocketAsyncEventArgs is complex and really only needed for very large systems. BTW, what constitutes a very large system? :-)
It's not so much a "large" system in the terms of number of connections. It's more a question of how much traffic is in the system - the number of reads/writes per second.
The only thing that SocketAsyncEventArgs does is make your I/O structures reusable. The Begin*/End* (APM) APIs will create a new IAsyncResult for each I/O operation, and this can cause pressure on the garbage collector. SocketAsyncEventArgs is essentially the same as IAsyncResult, only it's reusable. Note that there are some examples on the 'net that use the SocketAsyncEventArgs APIs without reusing the SocketAsyncEventArgs structures, which is completely ridiculous.
And there are no hard guidelines here: heavier hardware will be able to use the APM APIs for much more traffic. As a general rule, you should build a bare-bones APM server and load test it first, and only move to SAEA if it doesn't work on your target server's hardware.
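To illustrate the "reusable" point, here is a rough sketch of a SocketAsyncEventArgs pool (the pool size, buffer size and class name are arbitrary; this is not a production-ready pool):

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

class SaeaPool
{
    private readonly ConcurrentBag<SocketAsyncEventArgs> _pool =
        new ConcurrentBag<SocketAsyncEventArgs>();

    public SaeaPool(int count, int bufferSize, EventHandler<SocketAsyncEventArgs> completed)
    {
        for (int i = 0; i < count; i++)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[bufferSize], 0, bufferSize); // the buffer is reused too
            args.Completed += completed;                         // raised when an async op finishes
            _pool.Add(args);
        }
    }

    public bool TryRent(out SocketAsyncEventArgs args)
    {
        return _pool.TryTake(out args);
    }

    public void Return(SocketAsyncEventArgs args)
    {
        args.AcceptSocket = null;   // clear per-operation state before reuse
        args.UserToken = null;
        _pool.Add(args);
    }
}
```

A receive would then be started with socket.ReceiveAsync(args), and the args returned to the pool from the Completed handler, so nothing is allocated per I/O operation.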
On to the questions:
Should I use a ManualResetEvent when interacting with the TCP client collection? I presume the async events will need to lock access to this collection.
If you're using TAP-based wrappers, then await will resume on a captured context by default. I explain this in my blog post on async/await.
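For example, a TAP-style wrapper over the APM socket receive might look like the sketch below (the extension class name is mine; on .NET 4.5 NetworkStream already exposes ReadAsync, so you often don't need to write this yourself):

```csharp
using System.Net.Sockets;
using System.Threading.Tasks;

static class ApmSocketExtensions
{
    // Wraps BeginReceive/EndReceive so callers can simply 'await' the receive.
    public static Task<int> ReceiveTaskAsync(this Socket socket, byte[] buffer, int offset, int count)
    {
        return Task<int>.Factory.FromAsync(
            (callback, state) => socket.BeginReceive(buffer, offset, count, SocketFlags.None, callback, state),
            socket.EndReceive,
            null); // state object, unused here
    }
}
```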
There are a couple of approaches you can take here. I have successfully written a reliable and performant single-threaded TCP/IP server; the equivalent for modern code would be to use something like my AsyncContextThread class. It provides a context that will cause await to resume on that same thread by default.
The nice thing about single-threaded servers is that there's only one thread, so no synchronization or coordination is necessary. However, I'm not sure how well a single-threaded server would scale. You may want to give that a try and see how much load it can take.
If you do find you need multiple threads, then you can just use async methods on the thread pool; await will not have a captured context and so will resume on a thread pool thread. In this case, yes, you'd need to coordinate access to any shared data structures including your TCP client collection.
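For the shared client collection in that multi-threaded case, a ConcurrentDictionary is usually enough; here is a minimal sketch (the Guid key and class name are just placeholders for however you identify sessions):

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

class ClientRegistry
{
    private readonly ConcurrentDictionary<Guid, TcpClient> _clients =
        new ConcurrentDictionary<Guid, TcpClient>();

    public Guid Add(TcpClient client)
    {
        var id = Guid.NewGuid();
        _clients[id] = client;
        return id;
    }

    public bool TryGet(Guid id, out TcpClient client)
    {
        return _clients.TryGetValue(id, out client);
    }

    public void Remove(Guid id)
    {
        TcpClient removed;
        _clients.TryRemove(id, out removed);   // safe even if another thread already removed it
    }
}
```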
Note that SignalR will handle all of this for you. :)
What's the best way to detect a disconnected client after I have called BeginReceive? I've found the call gets stuck waiting for a response, so this needs to be cleaned up.
This is the half-open problem, which I discuss in detail on my blog. The best way (IMO) to solve this is to periodically send a "noop" keepalive message to each client.
If modifying the protocol isn't possible, then the next-best solution is to just close the connection after a no-communication timeout. This is how HTTP "persistent"/"keep-alive" connections decide to close. There's another possible solution (changing the keepalive packet settings on the socket), but it's not as easy (requires P/Invoke) and has other problems (not always respected by routers, not supported by all OS TCP/IP stacks, etc.).
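As a sketch of the "noop keepalive" idea (the 30-second interval and the zero-length-prefix frame are made-up details; use whatever fits your protocol), each session could run a loop like this and tear the connection down when a write faults:

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

static class KeepAlive
{
    public static async Task RunAsync(NetworkStream stream, CancellationToken token)
    {
        byte[] noop = BitConverter.GetBytes(0);   // 4-byte length prefix of 0 = "noop" in this sketch
        try
        {
            while (true)
            {
                await Task.Delay(TimeSpan.FromSeconds(30), token);
                await stream.WriteAsync(noop, 0, noop.Length, token);
            }
        }
        catch (OperationCanceledException)
        {
            // Session is shutting down normally.
        }
        catch (IOException)
        {
            // The write failed: the connection is dead or half-open,
            // so the caller should close the session.
        }
    }
}
```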
Oh, and SignalR will handle this for you. :)
Sending messages to a specific TCP client. I'm thinking of a function in a custom TCP session class to send a message. Again, in an async model, would I need to create a timer-based process that inspects a message queue, or would I create an event on a TCP session class that has access to the TcpClient and its associated stream? Really interested in opinions here.
If your server can send messages to any client (i.e., it's not just a request/response protocol; any part of the server can send messages to any client without the client requesting an update), then yes, you'll need a proper queue of outgoing requests because you can't (reliably) issue multiple concurrent writes on a socket. I wouldn't have the consumer be timer-based, though; there are async-compatible producer/consumer queues available (like BufferBlock<T> from TPL Dataflow, and it's not that hard to write one if you have async-compatible locks and condition variables).
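Here is a sketch of that per-session outgoing queue using BufferBlock<T> (the TcpSession shape, error handling and shutdown are all simplified/assumed), so that only one write is ever in flight per socket:

```csharp
using System.Net.Sockets;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;   // TPL Dataflow NuGet package

class TcpSession
{
    private readonly NetworkStream _stream;
    private readonly BufferBlock<byte[]> _outgoing = new BufferBlock<byte[]>();

    public TcpSession(TcpClient client)
    {
        _stream = client.GetStream();
        Task.Run(() => SendLoopAsync());   // single consumer per session
    }

    // Any part of the server can call this; producers never touch the socket directly.
    public void EnqueueSend(byte[] message)
    {
        _outgoing.Post(message);
    }

    private async Task SendLoopAsync()
    {
        while (true)
        {
            byte[] message = await _outgoing.ReceiveAsync();
            await _stream.WriteAsync(message, 0, message.Length);
        }
    }
}
```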
Oh, and SignalR will handle this for you. :)
I'd like to use a thread for the entire service and use non-blocking principles within; is there anything I should be mindful of, especially in the context of 1 (ManualResetEvent, etc.)?
If your entire service is single-threaded, then you shouldn't need any coordination primitives at all. However, if you do use the thread pool instead of syncing back to the main thread (for scalability reasons), then you will need to coordinate. I have a coordination primitives library that you may find useful because its types have both synchronous and asynchronous APIs. This allows, e.g., one method to block on a lock while another method wants to asynchronously block on a lock.
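The library isn't named here, but the BCL's SemaphoreSlim illustrates the same sync-plus-async idea in a minimal sketch: one caller can block on Wait while another awaits WaitAsync against the same lock.

```csharp
using System.Threading;
using System.Threading.Tasks;

static class SharedState
{
    private static readonly SemaphoreSlim Mutex = new SemaphoreSlim(1, 1);

    // Synchronous caller blocks the thread while holding the lock.
    public static void UpdateSync()
    {
        Mutex.Wait();
        try { /* touch shared data */ }
        finally { Mutex.Release(); }
    }

    // Asynchronous caller awaits instead of blocking a thread.
    public static async Task UpdateAsync()
    {
        await Mutex.WaitAsync();
        try { /* touch shared data */ }
        finally { Mutex.Release(); }
    }
}
```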
You may have noticed a recurring theme around SignalR. Use it if you possibly can! If you have to write a bare-bones TCP/IP server and can't use SignalR, then take your initial time estimate and triple it. Seriously. Then you can get started down the path of painful TCP with my TCP/IP FAQ blog series.
I've been working on a project of mine using Akka to create a real-time processing system which takes in the Twitter stream (for now) and uses actors to process said messages in various ways. I've been reading about similar architectures that others have built using Akka and this particular blog post caught my eye:
http://blog.goconspire.com/post/64901258135/akka-at-conspire-part-5-the-importance-of-pulling
Here they explain the different issues that arise when pushing work (i.e. messages) to actors vs. having the actors pull work. To paraphrase the article, by pushing messages there is no built-in way to know which units of work were received by which worker, nor can that be reliably tracked. In addition, if a worker suddenly receives a large number of messages where each message is quite large, it might end up overwhelmed and the machine could run out of memory. Or, if the processing is CPU-intensive, you could render your node unresponsive due to CPU thrashing. Furthermore, if the JVM crashes, you will lose all the messages that the actor(s) had in their mailboxes.
Pulling messages largely eliminates these problems. Since a specific actor must pull work from a coordinator, the coordinator always knows which unit of work each worker has; if a worker dies, the coordinator knows which unit of work to re-process. Messages also don’t sit in the workers’ mailboxes (since it's pulling a single message and processing it before pulling another one) so the loss of those mailboxes if the actor crashes isn't an issue. Furthermore, since each worker will only request more work once it completes its current task, there are no concerns about a worker receiving or starting more work than it can handle concurrently. Obviously there are also issues with this solution like what happens when the coordinator itself crashes but for now let's assume this is a non-issue. More about this pulling pattern can also be found at the "Let It Crash" website which the blog references:
http://letitcrash.com/post/29044669086/balancing-workload-across-nodes-with-akka-2
This got me thinking about a possible alternative to doing this pulling pattern which is to do pushing but with durable mailboxes. An example I was thinking of was implementing a mailbox that used RabbitMQ (other data stores like Redis, MongoDB, Kafka, etc would also work here) and then having each router of actors (all of which would be used for the same purpose) share the same message queue (or the same DB/collection/etc...depending on the data store used). In other words each router would have its own queue in RabbitMQ serving as a mailbox. This way, if one of the routees goes down, those that are still up can simply keep retrieving from RabbitMQ without too much worry that the queue will overflow since they are no longer using typical in-memory mailboxes. Also since their mailbox isn't implemented in-memory, if a routee crashes, the most messages that it could lose would just be the single one it was processing before the crash. If the whole router goes down then you could expect RabbitMQ (or whatever data store is being used) to handle an increased load until the router is able to recover and start processing messages again.
In terms of durable mailboxes, it seems that back in version 2.0, Akka was gravitating towards supporting these more actively since they had implemented a few that could work with MongoDB, ZooKeeper, etc. However, it seems that for whatever reason they abandoned the idea at some point since the latest version (2.3.2 as of the writing of this post) makes no mention of them. You're still able to implement your own mailbox by implementing the MessageQueue interface which gives you methods like enqueue(), dequeue(), etc... so making one that works with RabbitMQ, MongoDB, Redis, etc wouldn't seem to be a problem.
Anyways, just wanted to get your guys' and gals' thoughts on this. Does this seem like a viable alternative to doing pulling?
This question also spawned a rather long and informative thread on akka-user. In summary it is best to explicitly manage the work items to be processed by a (persistent) actor from which a variable number of worker actors pull new jobs, since that allows better resource management and explicit control over what gets processed and how retries are handled.
Let's say I have a server with many clients connected via TCP; I have a socket for every client, and a sending and a receiving thread for every client. Is it safe and possible to call the send function from these threads at the same time, given that no two threads call send on the same socket?
If it's safe and OK, can I stream data to clients simultaneously without blocking the send function for the other clients?
Thank you very much for answers.
Yes, it is possible and thread-safe. You could have tested it, or worked out for yourself that IIS, SQL Server, etc. wouldn't work very well if it wasn't.
Assuming this is Windows from the tag of "Winsock".
This design (having a send/receive thread for every single connected client), overall, is not going to scale. Hopefully you are aware of that and you know that you have an extremely limited number of clients (even then, I wouldn't write it this way).
You don't need to have a thread pair for every single client.
You can serve tons of clients with a single thread using non-blocking IO and read/write ready notifications (either with select() or one of the varieties of Overlapped IO such as completion routines or completion ports). If you use completion ports you can set a pool of threads to handle socket IO and queue the work for your own worker thread or threads/threadpool.
Yes, you can send and receive on many sockets at once from different threads; but you shouldn't need those extra threads, because you shouldn't be making blocking calls to send/recv at all. When you make a non-blocking call, the amount that can be written immediately is written and the function returns; you then note how much was sent and ask for notification when the socket is next writable.
I think you might want to consider a different approach, as this isn't simple stuff; if you're using .NET you might get by with building this with TcpListener or HttpListener (both of which use completion ports for you), though be aware that you can't easily disable Nagle's algorithm with those, so if you need interactivity (think of the auto-complete on Google's search page) then you probably won't get the performance you want.
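If you do go the .NET route, a rough sketch of serving many clients without a dedicated thread pair per client (the port number and echo behaviour are placeholders) looks like this:

```csharp
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

static class EchoServer
{
    public static async Task RunAsync()
    {
        var listener = new TcpListener(IPAddress.Any, 9000);
        listener.Start();
        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            var ignored = HandleClientAsync(client);   // fire-and-forget; a real server should observe faults
        }
    }

    private static async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        {
            NetworkStream stream = client.GetStream();
            var buffer = new byte[4096];
            int read;
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                await stream.WriteAsync(buffer, 0, read);   // echo back
            }
        }
    }
}
```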
Nginx uses epoll or other multiplexing techniques (such as select) to handle multiple clients, i.e. it does not spawn a new thread for every request, unlike Apache.
I tried to replicate the same in my own test program using select. I could accept connections from multiple clients by creating a non-blocking socket and using select to decide which client to serve. My program simply echoes their data back to them. It works fine for small data transfers (a few bytes per client).
The problem occurs when I need to send a large file over a connection to a client. Since I have only one thread to serve all clients, I cannot resume serving the other clients until I am finished reading the file and writing it out to the socket.
Is there a known solution to this problem, or is it best to create a thread for every such request?
When using select you should not send the whole file at once. If, for example, you use sendfile to do this, it will block until the whole file has been sent. Instead, use a small buffer and send a little data at a time to each client. Then use select to identify when the socket is again ready to be written to, and send some more, until all the data has been sent. This will allow you to handle multiple clients in parallel.
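The same idea sketched in C# with Socket.Select (the chunk size, timeout, and the in-memory byte[] standing in for the file are all illustrative; the pattern is identical in a C select() loop): keep per-client progress and write only one small chunk to each socket that reports writable on each pass.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Sockets;

class Transfer
{
    public Socket Socket;
    public byte[] Data;     // in a real server, read the file in chunks instead
    public int Offset;      // how much has been sent so far
}

static class Sender
{
    public static void PumpWrites(List<Transfer> transfers)
    {
        // Select trims this list in place to the sockets that can accept data now (1 ms timeout).
        var writable = transfers.Select(t => t.Socket).ToList();
        Socket.Select(null, writable, null, 1000);

        foreach (var t in transfers)
        {
            if (t.Offset >= t.Data.Length || !writable.Contains(t.Socket))
                continue;

            int chunk = Math.Min(8192, t.Data.Length - t.Offset);
            int sent = t.Socket.Send(t.Data, t.Offset, chunk, SocketFlags.None);
            t.Offset += sent;   // resume from this point on the next pass
        }
    }
}
```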
The simplest approach is to create a thread per request, but it's certainly not the most scalable approach. I think at this time basically all high-performance web servers use various asynchronous approaches built on things like epoll (Linux), kqueue (BSD), or IOCP (Windows).
Since you don't provide any information about your performance requirements, and since all the non-threaded approaches require restructuring your application to use these often-complex asynchronous techniques (as described in the C10K article and others found from there), for now your best bet is just to use the threaded approach.
Please update your question with concrete requirements for performance and other relevant data if you need more.
For background, this may be useful reading: http://www.kegel.com/c10k.html
I think you are using your callback to handle a single connection. That is not how it was designed. Your callback has to handle the however-many-thousand connections you are planning to serve; i.e. from the file descriptor you get as a parameter, you have to know (by reading your global state) what to do with that client: read(), send(), or whatever else.
We are using the EntLib Logging Application Block, and it turned out that we should use MSMQ for logging, for performance reasons.
Now we are trying to use the MsmqDistributor service to log the messages placed in the queue.
What are the pros and cons of the MsmqDistributor service in Enterprise Library?
Please share your experience.
The main drawback is going to be Microsoft Message Queue (MSMQ) itself. MSMQ has been around for a while and it is a pretty cool tool. It does, however, lack utilities. Because of the way that data is stored in the queue, most people end up needing to write some helper utilities for debugging and manually manipulating the queue. Some other things to consider:
Queue size - if too many items get put in the queue and aren't removed in a timely manner, the server can stall.
Purpose - MSMQ is designed for multi-step transactions (such as billing), whereas you mention you are going to use it for logging. If the log is just for debugging, then a DB table, a flat file, or sending errors to a bug tracker will serve you better. If you need complicated logging and are using MSMQ to send the information to a different computer, then you will find MSMQ more useful.