Atomicity guarantees on Winsock? - winsock

As Windows doesn't provide UNIX domain sockets, I am using a local TCP connection to simulate the behaviour. Now, POSIX guarantees that if several threads are writing to the UNIX domain socket in parallel, chunks up to PIPE_BUF will be handled atomically - i.e. no interleaving will happen. Is there a similar guarantee on a local TCP winsock, or do I have to synchronise the writers using a critical section?

If you have several threads writing to the same socket then each write call will be atomic, but the calls may be interleaved with respect to write calls that are occurring on other threads.
So, if you have thread 1 writing a string of A's in a single write, and thread 2 writing a series of B's with one write and a series of C's with another, then you might get ABC, or BAC, or BCA, but you won't get a broken run of A's with some B's in the middle.
If you require that the two writes issued by thread 2 are not interleaved with the write issued by thread 1 (that is, ABC and BCA are fine but BAC is not), then you should either use a single call to WSASend() in thread 2 with the two buffers in an array of WSABUF structs (scatter/gather writing), or you need to lock around the write calls so that thread 1 can't interrupt.
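A minimal sketch of the scatter/gather option, assuming a connected SOCKET (the function and buffer names are illustrative, not from the original post; error handling is omitted):

    #include <winsock2.h>

    int send_b_then_c(SOCKET sock,
                      char *bBuf, ULONG bLen,
                      char *cBuf, ULONG cLen)
    {
        WSABUF bufs[2];
        DWORD sent = 0;

        bufs[0].buf = bBuf;   // the run of B's
        bufs[0].len = bLen;
        bufs[1].buf = cBuf;   // the run of C's
        bufs[1].len = cLen;

        // One call, two buffers: the stack treats this as a single send,
        // so another thread's WSASend() cannot interleave between them.
        return WSASend(sock, bufs, 2, &sent, 0, NULL, NULL);
    }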

It's not explicitly guaranteed. Lock. Uncontended locks are cheap unless you are in an extremely tight loop. If you really don't want to do this, use overlapped I/O.
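A minimal sketch of the locking approach, assuming all writers share one CRITICAL_SECTION (the names here are illustrative):

    #include <winsock2.h>
    #include <windows.h>

    static CRITICAL_SECTION g_sendLock;   // InitializeCriticalSection() once at startup

    int locked_send(SOCKET sock, const char *buf, int len)
    {
        int rc;
        EnterCriticalSection(&g_sendLock);
        rc = send(sock, buf, len, 0);     // no other locked_send() can interleave here
        LeaveCriticalSection(&g_sendLock);
        return rc;
    }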

Related

Which kind of inter-process communication (IPC) mechanism should I use at which moment?

I know that there are several methods of inter-process communication (IPC), like:
File
Signal
Socket
Message Queue
Pipe
Named pipe
Semaphore
Shared memory
Message passing
Memory-mapped file
However, I was unable to find a list or a paper comparing these mechanisms to each other and pointing out their benefits in different environments.
E.g. I know that if I use a file which gets written by process A and read by process B, it will work on any OS and is pretty robust. On the other hand - why shouldn't I use a TCP socket? Does anyone have a kind of overview of which methods are most suitable in which cases?
Long story short:
Use lock files, mutexes, semaphores and barriers when processes compete for a scarce resource. They operate in a similar manner: several processes try to acquire a synchronisation primitive, some of them acquire it, and the others are put in a sleeping state until the primitive is available again. Use semaphores to limit the number of processes working with a resource; use a mutex to limit it to one.
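For example, a sketch of the semaphore case using a named POSIX semaphore (the name "/myresource" and the limit of 3 are made up for illustration; a mutex is the same idea with a limit of 1):

    #include <semaphore.h>
    #include <fcntl.h>

    void use_scarce_resource(void)
    {
        // All cooperating processes open the same named semaphore.
        sem_t *sem = sem_open("/myresource", O_CREAT, 0644, 3);

        sem_wait(sem);      // sleeps while 3 other processes hold it
        /* ... work with the scarce resource ... */
        sem_post(sem);      // release; wakes one sleeping waiter, if any

        sem_close(sem);
    }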
You can partially avoid using synchronisation primitives by using non-blocking thread-safe data structures.
Use signals, queues, pipes, events, messages, and unix sockets when processes need to exchange data. Signals and events are usually used for notifying a process of something (for instance, Ctrl+C in a unix terminal sends the SIGINT signal to a process). Pipes, shared memory and unix sockets are for transmitting data.
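As a small illustration of the data-transmission case, a pipe between a parent and a child process could look like this (a sketch with error handling omitted):

    #include <unistd.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fds[2];
        pipe(fds);                       // fds[0] = read end, fds[1] = write end

        if (fork() == 0) {               // child: writer
            close(fds[0]);
            const char *msg = "hello from child";
            write(fds[1], msg, strlen(msg) + 1);
            _exit(0);
        }

        close(fds[1]);                   // parent: reader
        char buf[64];
        if (read(fds[0], buf, sizeof buf) > 0)
            printf("got: %s\n", buf);
        wait(NULL);
        return 0;
    }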
Use sockets for networking (or, speaking formally, for exchanging data between processes located on different machines).
Long story long: take a look at the Modern Operating Systems book by Tanenbaum & Bos, namely the IPC chapter. The topic is vast and can't be completely covered within a list or a paper.

Can a server handle multiple sockets in a single thread?

I'm writing a test program that needs to emulate several connections between virtual machines, and it seems like the best way to do that is to use Unix domain sockets, for various reasons. It doesn't really matter whether I use SOCK_STREAM or SOCK_DGRAM, but it seems like SOCK_STREAM is easier/simpler for my usage.
My problem seems to be a little backwards from the typical scenario. I want to have a single client communicating with the server over 4 distinct sockets. (I could have 4 clients with one socket each, but that distinction shouldn't matter.) Now, the thing I'm emulating doesn't have multiple threads and gets an interrupt whenever a data packet is received over one of the "sockets". Is there some easy way to emulate this with Unix sockets?
I believe that I have to do the socket(), bind(), and listen() for all 4 sockets first, then do an accept() for all 4, and do fcntl( fd, F_SETFL, FNDELAY ) for each one to make them nonblocking, so that I can check each one for data with read() in a round-robin fashion. Is there any way to make it interrupt-driven or event-driven, so that my main loop only checks for data in the socket if there's data there? Or is it better to poll them all like this?
Yes. Handling multiple connections is almost synonymous with "server", and they are often single threaded -- but please not this way:
check each one for data with read() in a round-robin fashion
That would require, as you mention, non-blocking sockets and some kind of delay to prevent your "round-robin" from becoming a system-killing busy loop.
A major problem with that is the granularity of the delay. You can't make it too small, or the loop will still hog too much CPU time when nothing is happening. But what about when something is happening, and that something is data incoming simultaneously on multiple connections? Now your delay can produce a snowballing backlog of unhandled data, leading to refused connections, etc.
It just is not feasible, and no one writes a server that way, although I am sure anyone would give it serious thought if they were unaware of the library functions intended to tackle the problem. Note that networking is a platform specific issue, so these are not actually part of the C standard (which does not deal with sockets at all).
The functions are select(), poll(), and epoll(); the last one is Linux-specific and the other two are POSIX. The basic idea is that the call blocks, waiting until one or more of any number of active connections is ready to read or write. Waiting for a socket to be ready to write only meaningfully applies to non-blocking (O_NONBLOCK) sockets. You don't have to use O_NONBLOCK, however, and the select() call blocks regardless. Using O_NONBLOCK on the individual sockets makes the implementation more complex, but increases performance potential in a single-threaded server - this is the idea behind asynchronous servers (such as nginx), a paradigm which contrasts with the more traditional threaded synchronous model.
However, I would recommend that you not use O_NONBLOCK initially because of the added complexity. When/if it ends up being called for, you'll know. You still do not need threads.
There are many, many, many examples and tutorials around about how to use select() in particular.
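As a sketch of the event-driven loop the question asks for, assuming the four connections have already been accepted into an array conn[4] (an illustrative name):

    #include <sys/select.h>
    #include <unistd.h>

    void serve(int conn[4])
    {
        for (;;) {
            fd_set readable;
            int i, maxfd = -1;

            FD_ZERO(&readable);
            for (i = 0; i < 4; i++) {
                FD_SET(conn[i], &readable);
                if (conn[i] > maxfd) maxfd = conn[i];
            }

            // Sleeps in the kernel until one or more sockets become
            // readable: no busy loop, no artificial delay.
            if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0)
                break;

            for (i = 0; i < 4; i++) {
                if (FD_ISSET(conn[i], &readable)) {
                    char buf[512];
                    ssize_t n = read(conn[i], buf, sizeof buf);
                    if (n <= 0) { /* peer closed or error: handle it */ }
                    else        { /* process n bytes from "socket" i  */ }
                }
            }
        }
    }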

Should I use IOCPs or overlapped WSASend/Receive?

I am investigating the options for asynchronous socket I/O on Windows. There is obviously more than one option: I can use WSASend... with an overlapped structure providing either a completion callback or an event, or I could use IOCPs and the (new) thread pool. From what I usually read, the latter option is the recommended one.
However, it is not clear to me, why I should use IOCPs if the completion routine suffices for my goal: tell the socket to send this block of data and inform me if it is done.
I understand that the IOCP stuff in combination with CreateThreadpoolIo etc. uses the OS thread pool. However, doesn't the "normal" overlapped I/O also have to use separate threads? So what is the difference/disadvantage? Is my callback called by an I/O thread, and does it block other stuff?
Thanks in advance,
Christoph
You can use either but, for servers, IOCP with the 'completion queue' will have better performance, in general, because it can use multiple client<>server threads, either with CreateThreadpoolIo or some user-space thread pool. Obviously, in this case, dedicated handler threads are usual.
Overlapped completion-routine I/O is more useful for clients, IMHO. The completion-routine is fired by an Asynchronous Procedure Call that is queued to the thread that initiated the I/O request, (WSASend, WSARecv). This implies that that thread must be in a position to process the APC and typically this means a while(true) loop around some 'blahEx()' call. This can be useful because it's fairly easy to wait on a blocking queue, or other inter-thread signal, that allows the thread to be supplied with data to send and the completion routine is always handled by that thread. This I/O mechanism leaves the 'hEvent' OVL parameter free to use - ideal for passing a comms buffer object pointer into the completion routine.
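A sketch of that pattern - WSARecv() with a completion routine, and the issuing thread parked in an alertable wait so the APC can fire (buffer names are illustrative, and a real completion routine would reissue the read):

    #include <winsock2.h>

    static char   g_buf[4096];
    static WSABUF g_wsabuf = { sizeof(g_buf), g_buf };

    static void CALLBACK on_recv_done(DWORD err, DWORD bytes,
                                      LPWSAOVERLAPPED ovl, DWORD flags)
    {
        // Runs as an APC on the thread that issued the WSARecv();
        // ovl->hEvent is free here and can carry a buffer-object pointer.
        (void)err; (void)bytes; (void)ovl; (void)flags;
    }

    void recv_loop(SOCKET sock)
    {
        WSAOVERLAPPED ovl = {0};
        DWORD flags = 0;

        // Completion is reported via on_recv_done, not via hEvent.
        WSARecv(sock, &g_wsabuf, 1, NULL, &flags, &ovl, on_recv_done);

        for (;;)
            SleepEx(INFINITE, TRUE);  // alertable wait: APCs run only here
    }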
Overlapped I/O using an actual synchro event/Semaphore/whatever for the overlapped hEvent parameter should be avoided.
Windows IOCP documentation recommends no more than one thread per available core per completion port. Hyperthreading doubles the number of logical cores. Since use of IOCPs results in what is, for all practical purposes, an event-driven application, the use of thread pools adds unnecessary processing to the scheduler.
If you think about it you'll understand why: an event should be serviced in its entirety (or placed in some queue after initial processing) as quickly as possible. Suppose five events are queued to an IOCP on a 4-core computer. If there are eight threads associated with the IOCP you run the risk of the scheduler interrupting one event to begin servicing another by using another thread which is inefficient. It can be dangerous too if the interrupted thread was inside a critical section. With four threads you can process four events simultaneously and as soon as one event has been completed you can start on the last remaining event in the IOCP queue.
Of course, you may have thread pools for non-IOCP related processing.
EDIT:
The socket (file handles work fine too) is associated with an IOCP. The completion routine waits on the IOCP. As soon as a requested read from or write to the socket completes, the OS - via the IOCP - releases the completion routine waiting on the IOCP and returns with the additional information you provided when you called the read or write (I usually pass a pointer to a control block). So the completion routine immediately "knows" where to find the information pertinent to the completion.
If you passed information referring to a control block (or similar) then that control block (probably) needs to keep track of what operation has completed so it knows what to do next. The IOCP itself neither knows nor cares.
If you're writing a server attached to the internet, the server would issue a read to wait for client input. That input may arrive a millisecond or a week later, and when it does the IOCP will release the completion routine, which analyzes the input. Typically it responds with a write containing the data requested in the input and then waits on the IOCP. When the write completes, the IOCP again releases the completion routine, which sees that the write has completed, (typically) issues a new read, and a new cycle starts.
So an IOCP-based application typically consumes very little (or no) CPU until the moment a completion occurs at which time the completion routine goes full tilt until it has finished processing, sends a new I/O request and again waits on the completion port. Apart from the IOCP timeout (which can be used to signal house-keeping or such) all I/O-related stuff occurs in the OS.
To further complicate (or simplify) things it is not necessary that sockets be serviced using the WSA routines, the Win32 functions ReadFile and WriteFile work just fine.
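A minimal sketch of this cycle, assuming the socket has already been associated with the port via CreateIoCompletionPort() (the control-block layout is a made-up example):

    #include <winsock2.h>
    #include <windows.h>

    typedef struct {
        OVERLAPPED ovl;        // first member, so the OVERLAPPED* casts back
        SOCKET     sock;
        char       buf[4096];
        int        pending_op; // e.g. READ or WRITE, so we know what completed
    } CONTROL_BLOCK;

    DWORD WINAPI iocp_handler(LPVOID param)
    {
        HANDLE iocp = (HANDLE)param;
        DWORD bytes;
        ULONG_PTR key;
        OVERLAPPED *ovl;

        for (;;) {
            // Sleeps until the OS queues a completion: no CPU used meanwhile.
            BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ovl, INFINITE);
            if (ovl == NULL)
                continue;              // port-level error, nothing dequeued

            CONTROL_BLOCK *cb = (CONTROL_BLOCK *)ovl;
            if (!ok) {
                /* the I/O on cb->sock failed: clean up the connection */
                continue;
            }
            // cb->pending_op says whether a read or a write finished; the
            // IOCP itself neither knows nor cares. Process the completion,
            // issue the next ReadFile()/WriteFile()/WSARecv(), and loop.
            (void)bytes; (void)cb;
        }
        return 0;
    }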

Are socket operations performed in parallel at the system level?

If I have a quad core machine and 4 open and active sockets is it faster to have 4 threads to read from these sockets in parallel or just have one thread that reads the sockets iteratively? Can the socket read operation be performed in parallel or is it synchronized somewhere at the system level?
Analogously, if I have 4 sockets to which I need to write some data, is it faster to have 4 threads that write to these sockets at the same time, or is it better to have one thread that does all the writing?
It seems that multiple cores should enable higher throughput (of course if the network bandwidth is not the bottleneck), however I wonder if this is the case, since after all, in the end there is only a single ethernet cord (or other medium), so at some time, all these writes will have to be synchronized anyway, and the packets will be sent sequentially...
Specifically, if I have a machine running n threads which generate requests that need to be sent to another machine is it better to open n sockets and let each thread write to it? Or is it better to synchronize all the threads and send all the data through a single socket?
And on the receiving side is it better to have a single socket to read from? Or is it better to have n sockets read in parallel? What if all the readers need to be synchronized after reading anyway?
Socket operations are largely asynchronous to the application; buffered; and, if you're lucky, concurrent inside the kernel: the only significant state to synchronize on is per-connection. However the rate-determining step is really the bandwidth of the network, which you can easily saturate with even just one thread. Web browsers typically open between 4 and 8 threads per Web page to fetch the HTML itself and the images etc.

Using multiple sockets, is non-blocking or blocking with select better?

Let's say I have a server program that can accept connections from 10 (or more) different clients. The clients send data at random, which is received by the server, but it is certain that at least one client will be sending data every update. The server cannot wait for information to arrive because it has other processing to do. Aside from using asynchronous sockets, I see two options:
Make all sockets non-blocking. In a loop, call recv() on each socket and allow it to fail with WSAEWOULDBLOCK if there is no data available and if I happen to get some data, then keep it.
Leave the sockets as blocking. Add all sockets to a FD_SET and call select(). If the return value is non-zero (which it will be most of the time), loop through all the sockets to find the appropriate number of readable sockets with FD_ISSET() and only call recv() on the readable sockets.
The first option will create a lot more calls to the recv() function. The second method is a bigger pain from a programming perspective because of all the FD_SET and FD_ISSET looping.
Which method (or another method) is preferred? Is avoiding the overhead on letting recv() fail on a non-blocking socket worth the hassle of calling select()?
I think I understand both methods and I have tried both with success, but I don't know if one way is considered better or optimal.
I would recommend using overlapped IO instead. You can then kick off a WSARecv(), and provide a callback function to be invoked when the operation completes. What's more, since it'll only be invoked when your program is in an alertable wait state, you don't need to worry about locks like you would in a threaded application (assuming you run them on your main thread).
Note, however, that you do need to enter such an alertable wait state frequently. If this is your UI thread, make sure to use MsgWaitForMultipleObjectsEx() in your message loop, with the MWMO_ALERTABLE flag. This will give your callbacks a chance to run. On non-UI threads, call on a regular basis any of the wait functions that put you into an alertable wait state.
Note also that modal dialogs generally will not enter an alertable wait state, as they have their own message loop which doesn't call MsgWaitForMultipleObjectsEx(). If you need to process network IO when showing a dialog box, do all of your network IO on a dedicated thread, which does enter an alertable wait state regularly.
If, for whatever reason, you can't use overlapped IO - definitely use blocking select(). Using non-blocking recv() like that in an infinite loop is an inexcusable waste of CPU time. However, do put the sockets in non-blocking mode - as otherwise, if one byte arrives and you try to read two, you might end up blocking unexpectedly.
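A sketch of option 2 with that tweak, assuming an array of connected Winsock sockets (the names are illustrative):

    #include <winsock2.h>

    void poll_clients(SOCKET *socks, int nsocks)
    {
        u_long nonblocking = 1;
        fd_set readable;
        int i;

        for (i = 0; i < nsocks; i++)
            ioctlsocket(socks[i], FIONBIO, &nonblocking);  // non-blocking mode

        for (;;) {
            FD_ZERO(&readable);
            for (i = 0; i < nsocks; i++)
                FD_SET(socks[i], &readable);

            // Blocks until at least one socket has data. (Winsock ignores
            // select()'s first parameter, so 0 is fine here.)
            if (select(0, &readable, NULL, NULL, NULL) == SOCKET_ERROR)
                break;

            for (i = 0; i < nsocks; i++) {
                if (!FD_ISSET(socks[i], &readable))
                    continue;
                char buf[512];
                int n = recv(socks[i], buf, sizeof(buf), 0);
                if (n == SOCKET_ERROR && WSAGetLastError() == WSAEWOULDBLOCK)
                    continue;       // readiness was stale; try again next pass
                /* handle n bytes, or closure when n == 0 */
            }
        }
    }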
You might also want to consider using a library to abstract away the finicky details. For example, libevent or boost::asio.
The IO should be either completely blocking, with one thread per connection - in which case the event loop is essentially the OS scheduler - or completely non-blocking, in which case a select()/WaitForMultipleObjects()-based event loop will be in your application.
All intermediate variants are not very maintainable and are error-prone.
The completely non-blocking approach scales much better as the number of concurrent connections grows, and does not have the thread context-switch overhead, so it is preferable where the number of concurrent connections is not fixed. This approach has higher implementation complexity compared to the completely blocking one.
For completely non-blocking IO, the core of the application is a select()/WaitForMultipleObjects()-based event loop; all sockets are in non-blocking mode, and all reads/writes are generally done from within the event-loop thread (for top performance, writes can first be attempted directly from the thread requesting the write).