IOCP With WSASend and OVERLAPPED pool - sockets

This is a server with sockets using IOCP.
I initialize a pool of OVERLAPPED structures which I use for WSASend() calls.
Every WSASend() call takes a single OVERLAPPED pointer out of the pool, and the IOCP worker thread puts it back when the completion notification arrives.
However, when a client disconnects, SOME of the pending WSASend() calls get dropped, and therefore I have no chance to recycle the OVERLAPPED pointers that were taken out of the pool.
How can I cancel 100% of the pending WSASend() calls, while making sure that they won't reach the IOCP worker, so I can manually recycle the OVERLAPPED pointers on disconnection?
Thanks.

That's not how IOCPs work.
If you have pending operations that you want to cancel then close the corresponding socket and the operations will either complete or fail and all of the completions (including the failures) will come out of the IOCP eventually.
You need to wait for that to occur, and once it has, you are good to shut down.
What I tend to do is have a 'per connection' structure which contains the socket and which is used as the completion key. I then have 'per operation' structures which include the OVERLAPPED and which also include details of the operation type, the I/O buffer used, and other stuff. Both of these structures are reference counted.
When an operation is initiated you increment the reference count on both the connection object and the operation object. When you get a completion you process it and then decrement the counts. When the counts reach 0 you're not doing any work with the objects and they can be recycled to the pool for reuse.
To aid in clean shutdown, I have a counter I can wait on that tracks the number of 'active' 'per connection' objects (sockets).
To shut down you abort all connections and then wait for the connection counter to hit zero. At that point all of your objects are either destroyed or in your pools and you can clean up.
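The scheme above can be sketched roughly as follows - a minimal, hypothetical C++ outline (the structure and function names are mine, not taken from any particular codebase):

```cpp
#include <winsock2.h>
#include <atomic>

struct Connection;                       // 'per connection' - used as the completion key

struct Operation {                       // 'per operation' - one per outstanding WSASend/WSARecv
    OVERLAPPED        ov{};
    Connection*       conn = nullptr;
    int               type = 0;          // which operation this is (send, recv, ...)
    std::atomic<long> refs{0};
};

struct Connection {
    SOCKET            sock = INVALID_SOCKET;
    std::atomic<long> refs{0};
};

// Called just before issuing WSASend()/WSARecv():
inline void AddRefs(Connection& c, Operation& op) {
    c.refs.fetch_add(1);
    op.refs.fetch_add(1);
}

// Called from the IOCP worker after the completion (success OR failure) is processed;
// aborted sends caused by closesocket() still arrive here, so nothing leaks.
inline void ReleaseRefs(Connection& c, Operation& op) {
    if (op.refs.fetch_sub(1) == 1) { /* return 'op' to the OVERLAPPED pool */ }
    if (c.refs.fetch_sub(1) == 1)  { /* recycle or destroy the connection  */ }
}
```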
I have some example code, here, which is a set of full-featured IOCP server examples that may help - it's working code that you can step through and get ideas from, if nothing else.

Related

What happens to a process and/or thread while it waits on a mutex?

When a process and/or thread waits on a mutex, which state is the process and/or thread in? Is it WAIT or READY or some other state? I tried to search for the answer over the web but could not find a clear, definitive answer; maybe there isn't one, or maybe there is - to find out, I am posting this question here.
tl;dr: Nothing happens when it is waiting; it is simply a kernel data structure.
Without loss of generality, all operating systems have some sort of model where a unit of execution (task) moves between the states Ready, Running, and Waiting. This task has a data structure associated with it, where its state and registers (among other things) are recorded.
When a task moves from Ready to Running, its saved registers are loaded on a cpu, and it continues execution from its last saved state. Initially, its saved registers are set to reasonable values for the program to start.
From Running to Waiting or Ready, its registers are stored in its task data structure, and this structure is placed on either a list of Ready or Waiting tasks.
From Waiting to Ready, the task data structure is removed from the Waiting list and appended to the Ready list.
When a task tries to acquire a mutex that is unavailable, it moves from Running (how else could it try to get the mutex) to Waiting. If the mutex was available, it remains Running, and the mutex becomes unavailable.
When a task releases a mutex, and another task is Waiting for that mutex, the Waiting task becomes Ready, and acquires the mutex. If many tasks are Waiting, one is chosen to acquire the mutex and become Ready, the rest remain Waiting.
This is a very abstract description; real systems are complicated by a plurality of synchronization mechanisms (mailbox, queue, semaphore, pipe, ...), a desire to optimise the various paths, and the utilization of multiple CPUs.
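As a concrete illustration of those transitions, here is a minimal C++ sketch (the kernel-level state changes happen inside the lock()/unlock() calls; the comments describe what a typical OS does):

```cpp
#include <mutex>
#include <thread>
#include <chrono>
#include <cstdio>

std::mutex m;

void worker() {
    // main() still holds the mutex, so this thread goes Running -> Waiting here:
    // the kernel parks it on the mutex's wait list and it consumes no CPU.
    m.lock();
    std::puts("worker acquired the mutex");    // Waiting -> Ready -> Running
    m.unlock();
}

int main() {
    m.lock();                                  // main owns the mutex
    std::thread t(worker);                     // worker will block inside lock()
    std::this_thread::sleep_for(std::chrono::seconds(1));
    m.unlock();                                // the waiter becomes Ready and acquires the mutex
    t.join();
}
```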

how to terminate an infinitely running process

This question was asked to me in an NVIDIA interview.
Q) If a process has been running for an infinite amount of time and the OS has completely lost control of it, how would you deallocate resources from that process?
This is a very open question, for which the answer depends on a lot of factors: the signal mechanism of the OS, how fork, wait, and exit are implemented, the states a process can be in, etc. Your answer should have been along the following lines.
A process can be in one of these states: ready, running, waiting; apart from these there are the born state and the terminated/zombie state. During its lifetime, i.e. from fork() until exit(), a process has system resources under its control, e.g. entries in the process table, open files to I/O devices, allocated virtual memory, mappings in the page table, etc.
'The OS has lost complete control of the process': I interpret this as follows --
----- A process can execute a system call, e.g. read, and the OS will put the process into a sleep/waiting state. There are 2 kinds of sleep: INTERRUPTIBLE and UNINTERRUPTIBLE.
After the read is done, the hardware sends a signal to the OS, which gets redirected to the sleeping process, and the process springs into action again. There may also be a case where the read never completes, so the hardware never sends a signal.
Assume the process initiated a read (which will fail) and went into:
a) INTERRUPTIBLE SLEEP: Now the OS/parent of this process can send a SIGSTOP, SIGKILL, etc. to this process; it will be interrupted from its sleep, and then the OS can kill the process and reclaim all the resources. This is fine and solves your problem of infinite resource control.
b) UNINTERRUPTIBLE SLEEP: Even though the read has not sent any signal and is taking an infinite amount of time, and the parent/OS knows that the read has likely failed, sending SIGSTOP or SIGKILL has no effect, since the process is sleeping UNINTERRUPTIBLY, waiting for the read to finish. So now the process has control over system resources and the OS can't claim them back. See this for a clear understanding of UNINTERRUPTIBLE SLEEP. So, if a process is in this state for an infinite time, the OS or parent CANNOT kill/stop it and reclaim the resources; those resources are tied up indefinitely and can be reclaimed only when the system is powered off, or you have to hack the driver where the service is stuck and get the operation terminated by the driver, which will then send the read-failed signal to the UNINTERRUPTIBLE process.
----- After the process has completed its execution and is ready to die, the parent which created this process may not yet have called wait on this child, so the process is in the ZOMBIE state, in which case it is still hanging on to some system resources. Usually it is the job of the parent to ensure that the child it created terminates normally. But it may be the case that the parent which was supposed to call wait on this child was itself killed; then the onus of terminating this child falls to the OS, which is exactly what the init process does on UNIX-like OSes. Apart from its other jobs, init acts like a foster parent for processes whose parents are dead. It runs its check_ZOMBIE procedures every now and then (when system resources are low) to ensure that ZOMBIEs don't hang on to system resources.
From the Wikipedia article: when a process loses its parent, init becomes its new parent. Init periodically executes the wait system call to reap any zombies with init as parent. One of init's responsibilities is reaping orphans and parentless zombies.
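As a small illustration of the zombie/reaping mechanics described above, here is a minimal POSIX sketch (illustrative only, not part of the interview question):

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main() {
    pid_t child = fork();
    if (child == 0) {
        _exit(0);                // the child terminates immediately...
    }
    sleep(1);                    // ...and is now a zombie: dead, but still holding
                                 // a process-table entry until someone reaps it.
    int status = 0;
    waitpid(child, &status, 0);  // the parent reaps the zombie, freeing that entry
    std::printf("child %d reaped\n", (int)child);
    // Had the parent died without calling wait, init (PID 1) would have adopted
    // the child and reaped it instead.
    return 0;
}
```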
So your answer should point at the way process states are defined, the way signal handling mechanisms are implemented, and also the way the init process reaps ZOMBIEs, etc.
Hope it cleared a thing or two..
Cheers....

Should I use IOCPs or overlapped WSASend/Receive?

I am investigating the options for asynchronous socket I/O on Windows. There is obviously more than one option: I can use WSASend... with an overlapped structure providing either a completion callback or an event, or I could use IOCPs and the (new) thread pool. From what I usually read, the latter option is the recommended one.
However, it is not clear to me why I should use IOCPs if the completion routine suffices for my goal: tell the socket to send this block of data and inform me when it is done.
I understand that the IOCP stuff in combination with CreateThreadpoolIo etc. uses the OS thread pool. However, doesn't the "normal" overlapped I/O also have to use separate threads? So what is the difference/disadvantage? Is my callback called by an I/O thread, blocking other stuff?
Thanks in advance,
Christoph
You can use either but, for servers, IOCP with the 'completion queue' will have better performance, in general, because it can use multiple client<>server threads, either with CreateThreadpoolIo or some user-space thread pool. Obviously, in this case, dedicated handler threads are usual.
Overlapped completion-routine I/O is more useful for clients, IMHO. The completion-routine is fired by an Asynchronous Procedure Call that is queued to the thread that initiated the I/O request, (WSASend, WSARecv). This implies that that thread must be in a position to process the APC and typically this means a while(true) loop around some 'blahEx()' call. This can be useful because it's fairly easy to wait on a blocking queue, or other inter-thread signal, that allows the thread to be supplied with data to send and the completion routine is always handled by that thread. This I/O mechanism leaves the 'hEvent' OVL parameter free to use - ideal for passing a comms buffer object pointer into the completion routine.
Overlapped I/O using an actual synchro event/Semaphore/whatever for the overlapped hEvent parameter should be avoided.
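To make the completion-routine mechanism concrete, here is a minimal sketch (my own illustrative names; 'sock' is assumed to be a connected SOCKET and error handling is omitted). Note how hEvent is reused to carry a context pointer, as suggested above:

```cpp
#include <winsock2.h>

struct IoBuffer {
    OVERLAPPED ov;                 // hEvent is free for our own use with this mechanism
    char       data[4096];
};

void CALLBACK OnRecvComplete(DWORD err, DWORD bytes, LPWSAOVERLAPPED ov, DWORD /*flags*/) {
    IoBuffer* buf = reinterpret_cast<IoBuffer*>(ov->hEvent);   // recover our context pointer
    // ... handle 'err', process buf->data[0..bytes), post the next WSARecv ...
}

void PostRecv(SOCKET sock, IoBuffer* buf) {
    buf->ov = OVERLAPPED{};
    buf->ov.hEvent = reinterpret_cast<HANDLE>(buf);            // smuggle the buffer pointer through
    WSABUF wb;
    wb.len = sizeof(buf->data);
    wb.buf = buf->data;
    DWORD flags = 0;
    WSARecv(sock, &wb, 1, nullptr, &flags, &buf->ov, OnRecvComplete);
}

// The thread that issued the I/O must regularly enter an alertable wait so the
// APC (and therefore OnRecvComplete) gets a chance to run:
void PumpCompletions() { SleepEx(INFINITE, TRUE); }
```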
The Windows IOCP documentation recommends no more than one thread per available core per completion port. Hyperthreading doubles the number of cores. Since the use of IOCPs results in an application that is, for all practical purposes, event-driven, the use of thread pools adds unnecessary processing to the scheduler.
If you think about it you'll understand why: an event should be serviced in its entirety (or placed in some queue after initial processing) as quickly as possible. Suppose five events are queued to an IOCP on a 4-core computer. If there are eight threads associated with the IOCP you run the risk of the scheduler interrupting one event to begin servicing another by using another thread which is inefficient. It can be dangerous too if the interrupted thread was inside a critical section. With four threads you can process four events simultaneously and as soon as one event has been completed you can start on the last remaining event in the IOCP queue.
Of course, you may have thread pools for non-IOCP related processing.
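For illustration, a minimal sketch of creating a completion port whose concurrency value matches the core count (an assumption here: using GetSystemInfo's logical-processor count as "available cores"):

```cpp
#include <windows.h>

HANDLE CreateIocpForCores() {
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    // NumberOfConcurrentThreads caps how many threads the port lets run at once;
    // one per core is the usual recommendation.
    return CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0,
                                  si.dwNumberOfProcessors);
}
```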
EDIT________________
The socket (file handles work fine too) is associated with an IOCP. The completion routine waits on the IOCP. As soon as a requested read from or write to the socket completes, the OS - via the IOCP - releases the completion routine waiting on the IOCP and returns the additional information you provided when you called the read or write (I usually pass a pointer to a control block). So the completion routine immediately "knows" where to find the information pertinent to the completion.
If you passed information referring to a control block (or similar) then that control block (probably) needs to keep track of what operation has completed, so it knows what to do next. The IOCP itself neither knows nor cares.
If you're writing a server attached to the internet, the server would issue a read to wait for client input. That input may arrive a millisecond or a week later, and when it does the IOCP will release the completion routine, which analyzes the input. Typically it responds with a write containing the data requested in the input and then waits on the IOCP. When the write completes, the IOCP again releases the completion routine, which sees that the write has completed, (typically) issues a new read, and a new cycle starts.
So an IOCP-based application typically consumes very little (or no) CPU until the moment a completion occurs at which time the completion routine goes full tilt until it has finished processing, sends a new I/O request and again waits on the completion port. Apart from the IOCP timeout (which can be used to signal house-keeping or such) all I/O-related stuff occurs in the OS.
To further complicate (or simplify) things, it is not necessary that sockets be serviced using the WSA routines; the Win32 functions ReadFile and WriteFile work just fine.
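The read-completion-write cycle described above boils down to a worker loop along these lines - a rough sketch with illustrative names, not a complete server:

```cpp
#include <winsock2.h>
#include <windows.h>

enum class Op { Recv, Send };

struct ControlBlock {                 // one per outstanding operation
    OVERLAPPED ov{};
    Op         pending = Op::Recv;    // which operation this completion belongs to
    // socket, buffers, ...
};

void WorkerLoop(HANDLE iocp) {
    for (;;) {
        DWORD        bytes = 0;
        ULONG_PTR    key   = 0;       // the per-connection value registered with the port
        LPOVERLAPPED ov    = nullptr;
        BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
        if (!ov) break;               // the port was closed (or a bare failure)

        ControlBlock* cb = CONTAINING_RECORD(ov, ControlBlock, ov);
        if (!ok || bytes == 0) {
            // Failed or aborted completion (e.g. the peer/socket was closed);
            // it still arrives here, so the control block can be recycled.
            continue;
        }
        switch (cb->pending) {
        case Op::Recv:
            // analyze the input, issue a WSASend(), set cb->pending = Op::Send
            break;
        case Op::Send:
            // the response went out; issue the next WSARecv(), set cb->pending = Op::Recv
            break;
        }
    }
}
```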

Why is epoll faster than select?

I have seen a lot of comparisons which say select has to walk through the fd list, and this is slow. But why doesn't epoll have to do this?
There's a lot of misinformation about this, but the real reason is this:
A typical server might be dealing with, say, 200 connections. It will service every connection that needs to have data written or read and then it will need to wait until there's more work to do. While it's waiting, it needs to be interrupted if data is received on any of those 200 connections.
With select, the kernel has to add the process to 200 wait lists, one for each connection. To do this, it needs a "thunk" to attach the process to the wait list. When the process finally does wake up, it needs to be removed from all 200 wait lists and all those thunks need to be freed.
By contrast, with epoll, the epoll socket itself has a wait list. The process needs to be put on only that one wait list using only one thunk. When the process wakes up, it needs to be removed from only one wait list and only one thunk needs to be freed.
To be clear, with epoll, the epoll socket itself has to be attached to each of those 200 connections. But this is done once, for each connection, when it is accepted in the first place. And this is torn down once, for each connection, when it is removed. By contrast, each call to select that blocks must add the process to every wait queue for every socket being monitored.
Ironically, with select, the largest cost comes from checking if sockets that have had no activity have had any activity. With epoll, there is no need to check sockets that have had no activity because if they did have activity, they would have informed the epoll socket when that activity happened. In a sense, select polls each socket each time you call select to see if there's any activity while epoll rigs it so that the socket activity itself notifies the process.
The main difference between epoll and select is that in select() the list of file descriptors to wait on only exists for the duration of a single select() call, and the calling task only stays on the sockets' wait queues for the duration of a single call. In epoll, on the other hand, you create a single file descriptor that aggregates events from multiple other file descriptors you want to wait on, and so the list of monitored fd's is long-lasting, and tasks stay on socket wait queues across multiple system calls. Furthermore, since an epoll fd can be shared across multiple tasks, it is no longer a single task on the wait queue, but a structure that itself contains another wait queue, containing all processes currently waiting on the epoll fd. (In terms of implementation, this is abstracted over by the sockets' wait queues holding a function pointer and a void* data pointer to pass to that function).
So, to explain the mechanics a little more:
An epoll file descriptor has a private struct eventpoll that keeps track of which fd's are attached to this fd. struct eventpoll also has a wait queue that keeps track of all processes that are currently epoll_waiting on this fd. struct eventpoll also has a list of all file descriptors that are currently available for reading or writing.
When you add a file descriptor to an epoll fd using epoll_ctl(), epoll adds the struct eventpoll to that fd's wait queue. It also checks if the fd is currently ready for processing and adds it to the ready list, if so.
When you wait on an epoll fd using epoll_wait, the kernel first checks the ready list, and returns immediately if any file descriptors are already ready. If not, it adds itself to the single wait queue inside struct eventpoll, and goes to sleep.
When an event occurs on a socket that is being epoll()ed, it calls the epoll callback, which adds the file descriptor to the ready list, and also wakes up any waiters that are currently waiting on that struct eventpoll.
Obviously, a lot of careful locking is needed on struct eventpoll and the various lists and wait queues, but that's an implementation detail.
The important thing to note is that at no point above did I describe a step that loops over all file descriptors of interest. By being entirely event-based and by using a long-lasting set of fd's and a ready list, epoll can avoid ever taking O(n) time for an operation, where n is the number of file descriptors being monitored.
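From user space, the "register once, wait many times" pattern looks roughly like this (a minimal Linux sketch; error handling omitted and the loop body is only indicative):

```cpp
#include <sys/epoll.h>

void EventLoop(int listen_fd) {
    int ep = epoll_create1(0);

    epoll_event ev{};
    ev.events  = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);   // registered once, not per wait

    epoll_event ready[64];
    for (;;) {
        // Only fds that actually became ready are returned; nothing is scanned
        // in proportion to the total number of monitored descriptors.
        int n = epoll_wait(ep, ready, 64, -1);
        for (int i = 0; i < n; ++i) {
            int fd = ready[i].data.fd;
            // ... accept()/read()/write() on fd; a newly accepted socket gets
            // its own EPOLL_CTL_ADD exactly once ...
            (void)fd;
        }
    }
}
```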

Using multiple sockets, is non-blocking or blocking with select better?

Let's say I have a server program that can accept connections from 10 (or more) different clients. The clients send data at random, which is received by the server, but it is certain that at least one client will be sending data every update. The server cannot wait for information to arrive because it has other processing to do. Aside from using asynchronous sockets, I see two options:
Make all sockets non-blocking. In a loop, call recv() on each socket and allow it to fail with WSAEWOULDBLOCK if there is no data available; if I happen to get some data, keep it.
Leave the sockets as blocking. Add all sockets to a FD_SET and call select(). If the return value is non-zero (which it will be most of the time), loop through all the sockets to find the appropriate number of readable sockets with FD_ISSET() and only call recv() on the readable sockets.
The first option will create a lot more calls to the recv() function. The second method is a bigger pain from a programming perspective because of all the FD_SET and FD_ISSET looping.
Which method (or another method) is preferred? Is avoiding the overhead on letting recv() fail on a non-blocking socket worth the hassle of calling select()?
I think I understand both methods and I have tried both with success, but I don't know if one way is considered better or optimal.
I would recommend using overlapped IO instead. You can then kick off a WSARecv(), and provide a callback function to be invoked when the operation completes. What's more, since it'll only be invoked when your program is in an alertable wait state, you don't need to worry about locks like you would in a threaded application (assuming you run them on your main thread).
Note, however, that you do need to enter such an alertable wait state frequently. If this is your UI thread, make sure to use MsgWaitForMultipleObjectsEx() in your message loop, with the MWMO_ALERTABLE flag. This will give your callbacks a chance to run. On non-UI threads, call on a regular basis any of the wait functions that put you into an alertable wait state.
Note also that modal dialogs generally will not enter an alertable wait state, as they have their own message loop which doesn't call MsgWaitForMultipleObjectsEx(). If you need to process network IO when showing a dialog box, do all of your network IO on a dedicated thread, which does enter an alertable wait state regularly.
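A rough sketch of the alertable message-loop idea from above (illustrative only; the flags shown are the relevant ones, not a complete UI loop):

```cpp
#include <windows.h>

void MessageLoop() {
    for (;;) {
        // An alertable wait: overlapped completion callbacks (APCs) queued to
        // this thread get a chance to run here.
        DWORD r = MsgWaitForMultipleObjectsEx(0, nullptr, INFINITE, QS_ALLINPUT,
                                              MWMO_ALERTABLE | MWMO_INPUTAVAILABLE);
        if (r == WAIT_IO_COMPLETION) continue;   // an APC ran; go back to waiting

        MSG msg;
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
            if (msg.message == WM_QUIT) return;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}
```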
If, for whatever reason, you can't use overlapped IO - definitely use blocking select(). Using non-blocking recv() like that in an infinite loop is an inexcusable waste of CPU time. However, do put the sockets in non-blocking mode - as otherwise, if one byte arrives and you try to read two, you might end up blocking unexpectedly.
You might also want to consider using a library to abstract away the finicky details. For example, libevent or boost::asio.
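For the select()-based route recommended above, a rough sketch with the sockets already in non-blocking mode (Winsock flavour; error handling omitted, and 'socks' is an assumed container of connected client sockets):

```cpp
#include <winsock2.h>
#include <vector>

void PollSockets(const std::vector<SOCKET>& socks) {
    fd_set readable;
    FD_ZERO(&readable);
    for (SOCKET s : socks) FD_SET(s, &readable);        // the set is rebuilt before every call

    timeval timeout{ 1, 0 };                             // or pass nullptr to block indefinitely
    int n = select(0, &readable, nullptr, nullptr, &timeout);  // first argument is ignored on Windows
    if (n <= 0) return;                                  // timed out, or an error occurred

    char buf[4096];
    for (SOCKET s : socks) {
        if (!FD_ISSET(s, &readable)) continue;
        int got = recv(s, buf, sizeof(buf), 0);          // won't block: the socket is non-blocking
        if (got > 0) {
            // ... consume buf[0..got) ...
        } else if (got == SOCKET_ERROR && WSAGetLastError() == WSAEWOULDBLOCK) {
            // readability was a false alarm; just try again on the next pass
        }
    }
}
```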
The IO should be either completely blocking, with one thread per connection, in which case the event loop is essentially the OS scheduler; or the IO should be completely non-blocking, in which case a select/WaitForMultipleObjects-based event loop will be in your application.
All intermediate variants are not very maintainable and are error prone.
The completely non-blocking approach scales much better as the number of concurrent connections grows, and does not have thread context-switch overhead, so it is preferable where the number of concurrent connections is not fixed. This approach has higher implementation complexity compared to the completely blocking one.
For completely non-blocking IO, the core of the application is a select/WaitForMultipleObjects-based event loop: all sockets are in non-blocking mode, and all reads/writes are generally done from within the event loop thread (for top performance, writes can first be attempted directly from the thread requesting the write).