How to block until all file descriptors are ready? Use select()/poll()/epoll()?

I am in the situation where I would like a C program to block on a set of file descriptors until all files are ready. This differs from the traditional select(), poll(), and epoll() system calls that only block until any file descriptor is ready. Is there a standard function that will block until all files are ready? Or perhaps there are some other clever tricks?
Obviously, I could call select() in a loop until all file descriptors are ready, but I don't want to incur the overhead of context switches, preemptions, migrations, etc. I'd rather the select()'ing task just sleep until all files are ready.

This isn't thread-safe if other threads are operating on some of the same file descriptors at the same time (but you probably shouldn't be doing that anyway), but you can try this:
1. Initialize the poll set to all of the file descriptors you're interested in.
2. poll() the current set of file descriptors.
3. When poll() returns, scan the revents, find all of the file descriptors that are ready, and remove them from the poll set.
4. If there are any file descriptors still in the set, go back to step 2.
5. poll() one last time with the full set of file descriptors to make sure they are all still ready.
6. If some are not ready anymore, go back to step 1.
7. Success.
It still may involve many poll() calls, but at least it doesn't busy-wait. I don't think there exists a more efficient way.
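For illustration, here is a rough sketch of that algorithm using poll(); poll_all() is my own name rather than a standard call, the fixed-size set is a simplification, and error handling is kept minimal:

#include <poll.h>
#include <stddef.h>

/* Wait until every fd in fds[0..nfds-1] has reported one of `events`,
 * then re-check the whole set once. Returns 0 on success, -1 on error. */
int poll_all(const int *fds, size_t nfds, short events)
{
    struct pollfd pfds[64];             /* assumes nfds <= 64 in this sketch */
    for (;;) {
        /* Step 1: initialize the poll set with every fd of interest. */
        for (size_t i = 0; i < nfds; i++) {
            pfds[i].fd = fds[i];
            pfds[i].events = events;
        }
        /* Steps 2-4: poll the remaining fds, dropping each one as it becomes ready. */
        size_t remaining = nfds;
        while (remaining > 0) {
            if (poll(pfds, remaining, -1) < 0)
                return -1;
            size_t kept = 0;
            for (size_t i = 0; i < remaining; i++) {
                if (pfds[i].revents == 0)       /* not ready yet: keep it */
                    pfds[kept++] = pfds[i];
            }
            remaining = kept;
        }
        /* Steps 5-6: one last non-blocking poll over the full set. */
        for (size_t i = 0; i < nfds; i++) {
            pfds[i].fd = fds[i];
            pfds[i].events = events;
        }
        if (poll(pfds, nfds, 0) < 0)
            return -1;
        size_t ready = 0;
        for (size_t i = 0; i < nfds; i++)
            if (pfds[i].revents != 0)
                ready++;
        if (ready == nfds)
            return 0;                           /* Step 7: success. */
        /* Some fd is no longer ready: go back to step 1. */
    }
}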

Related

What's the read logic when I call recvfrom() function in C/C++

I wrote a C++ program that creates a socket and binds to it to receive ICMP/UDP packets. The code I wrote is as follows:
while (true) {
    recvfrom(sockId, rePack, sizeof(rePack), 0, (struct sockaddr *)&raddr, (socklen_t *)&len);
    processPacket(recv_size);
}
So I used an endless while loop to receive messages continually, but I'm worried about the following two questions:
1. How long will a message be kept in the receive queue (or, say, in the NIC queue)?
I'm worried that if it takes too long to process the first message, I might miss the second one. So how quickly do I need to read after each read?
2. How do I prevent reading duplicated messages?
That is, does the receive queue know about me? When my thread has finished reading the first message, will the queue automatically give me the second one? Or, put another way, once I read the first message, is it deleted from the queue so that no one can receive it again?
Additionally, I think the while(true) loop is not good; could anyone give me a better suggestion? (I've heard of something like a polling model.)
First, you should always check the return value from recvfrom. It's unlikely the recvfrom will fail, but if it does (for example, if you later implement signal handling, it might fail with EINTR) you will be processing undefined data. Also, of course, the return value tells you the size of the packet you received.
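Purely as a sketch (it reuses the identifiers from the question, requires <errno.h>, and processPacket stands in for the real handler), the loop with the return value checked might look like this:

for (;;) {
    len = sizeof(raddr);                    /* recvfrom updates this value-result argument */
    ssize_t recv_size = recvfrom(sockId, rePack, sizeof(rePack), 0,
                                 (struct sockaddr *)&raddr, (socklen_t *)&len);
    if (recv_size < 0) {
        if (errno == EINTR)
            continue;                       /* interrupted by a signal: just retry */
        /* a real error: log it, then stop or otherwise handle it */
        break;
    }
    processPacket(recv_size);               /* recv_size is the size of this packet */
}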
For question 1, the actual answer is operating system-dependent. However, most operating systems will buffer some number of packets for you. The OS interrupt handler that handles the incoming packet will never be copying it directly into your application level buffer, so it will always go into an OS buffer first. The OS has previously noted your interest in it (by virtue of creating the socket and binding it you expressed interest), so it will then place a pointer to the buffer onto a queue associated with your socket.
A different part of the OS code will then (after the interrupt handler has completed) copy the data from the OS buffer into your application memory, free the OS buffer, and return to your program from the recvfrom system call. If additional packets come in, either before or after you have started processing the first one, they'll be placed on the queue too.
That queue is not infinite of course. It's likely that you can configure how many packets (or how much buffer space) can be reserved, either at a system-wide level (think sysctl-type settings in Linux), or at the individual socket level (setsockopt / ioctl).
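For example, on Linux you can ask for a larger per-socket receive buffer with setsockopt; the kernel may clamp the request to a system-wide maximum such as net.core.rmem_max, and sockId here is the socket from the question:

int rcvbuf = 1 << 20;   /* request roughly 1 MiB of socket receive buffering */
if (setsockopt(sockId, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) != 0) {
    /* setsockopt failed: check errno */
}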
If, when you call recvfrom, there are already queued packets on the socket, the system call handler will not block your process, instead it will simply copy from the OS buffer of the next queued packet into your buffer, release the OS buffer, and return immediately. As long as you can process incoming packets roughly as fast as they arrive or faster, you should not lose any. (However, note that if another system is generating packets at a very high rate, it's likely that the OS memory reserved will be exhausted at some point, after which the OS will simply discard packets that exceed its resource reservation.)
For question 2, you will receive no duplicate messages (unless something upstream of your machine is actually duplicating them). Once a queued message is copied into your buffer, it's released before returning to you. That message is gone forever.
(Note that it's possible that some other process has also created a socket expressing interest in the same packets. That process would also get a copy of the packet data, which is typically handled internal to the operating system by reference counting rather than by actually duplicating the OS buffers, although that detail is invisible to applications. In any case, once all interested processes have received the packet, it will be discarded.)
There's really nothing at all wrong with a while (true) loop; it's a very common control structure for long-running server-type programs. If your program has nothing else it needs to be doing in the meantime, a while (true) loop that lets it block in recvfrom is the simplest and hence clearest way to implement it.
(You could use a select(2) or poll(2) call to wait. That lets you wait on any one of multiple file descriptors at the same time, or periodically "time out" and go do something else; but again, if you have nothing else you need to be doing in the meantime, that introduces needless complication.)
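Purely to illustrate that alternative (do_periodic_work is a hypothetical placeholder), a poll()-based version of the receive loop might look like this:

struct pollfd pfd;
pfd.fd = sockId;                            /* the socket from the question */
pfd.events = POLLIN;
for (;;) {
    int n = poll(&pfd, 1, 1000);            /* wake up at least once per second */
    if (n < 0) {
        /* error: check errno (e.g. EINTR) */
    } else if (n == 0) {
        do_periodic_work();                 /* hypothetical: the timeout expired */
    } else if (pfd.revents & POLLIN) {
        /* a datagram is ready, so recvfrom() will not block now */
    }
}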

Update file descriptor pointing to /proc/self after fork() from python multiprocessing.Process

I'm working on a C++ program that uses boost::python to provide a python wrapper/API for the user. The program tracks and limits its own memory usage by opening /proc/self/statm using a file descriptor. Every timestep it seeks to the beginning of that file and reads the vmsize from it.
proc_self_statm_fd = open( "/proc/self/statm", O_RDONLY );
However, this causes a problem when calling fork(). In particular, when a user writes a python script that does something like this:
proc = multiprocessing.Process(name="bkg_process",target=bkg_process,daemon=True)
The problem is that the forked process gets the file descriptor pointing to /proc/self/statm from the parent process, not its own, and this reports the wrong memory usage. Even worse, if the parent process exits, the child process will fail when trying to read from the file descriptor.
What's the correct solution for this? It needs to be handled at the C++ level because we don't have control over the user's python scripts. Is there a way to have the class auto detect that a fork has happened and grab a new file descriptor? In the worst case I can have it re-open the file for every update. I'm worried that would add runtime overhead though.
You could store the PID in the class, and check it against the value of getpid() on each call, and then reopen the file if the PID has changed. getpid() is typically much cheaper than open - on some systems it doesn't even need a context switch (it just fetches the PID from a magic location in the process's own memory).
That said, you may also want to actually measure the cost of reopening the file each time - it may not actually be significant.
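As a rough sketch of the PID-check idea (the class and member names are illustrative, not from the actual program):

#include <cstdlib>
#include <fcntl.h>
#include <unistd.h>

class StatmReader {
public:
    StatmReader() : fd_(-1), pid_(-1) { reopen(); }
    ~StatmReader() { if (fd_ != -1) close(fd_); }

    // Returns the first field of /proc/self/statm (total program size, in pages),
    // reopening the file if we are running in a freshly forked child.
    long vmsize() {
        if (getpid() != pid_)
            reopen();                       // the cached fd belonged to the parent
        char buf[128];
        if (lseek(fd_, 0, SEEK_SET) == -1)
            return -1;
        ssize_t n = read(fd_, buf, sizeof(buf) - 1);
        if (n <= 0)
            return -1;
        buf[n] = '\0';
        return std::atol(buf);
    }

private:
    void reopen() {
        if (fd_ != -1) close(fd_);
        fd_ = open("/proc/self/statm", O_RDONLY);
        pid_ = getpid();
    }
    int   fd_;
    pid_t pid_;
};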

Files read immediately after watchman notify are empty

I'm integrating watchman via the socket/bser interface in a JVM program.
I'm seeing odd timing where:
A file is written to by the build system (a small text file)
I get a watchman notification on the bser interface
Thread A listening for bser subscription notifications puts the update onto a queue for a separate thread
Thread B reads the queue, reads the changed file, and then puts the file's data on the wire
However, somehow, Thread B is reading an empty file.
Which, I assume, is validly empty at some point; e.g. the I/O syscalls might be:
1. Clear the file contents
2. Write chunk 1
3. Write chunk 2
4. Close the file
And I assume my Thread B is reading the file between steps 1 and 2. Or maybe between 1 and 4, if 4 is when the result is flushed.
My confusion is two fold:
1) I thought watchman's default 20ms settle would account for things like this, and that I'd only see an update on my thread A (let alone a read by my thread B) after step 4, when the data is done being written to the file.
2) Even if watchman did tell me "too soon" about the first syscall (say step 1), and I read the results while the file was empty, there should be another syscall/watchman notification saying "by the way, the file has some content now".
FWIW/oddly enough, I was seeing this very same behavior when using the Java WatchService API, where I would get an inotify event, but read a file "too soon", and so get either empty or partial results, and then no follow up inotify event when the rest of the data was available.
I assumed this was a fluke/nuance of the WatchService, so I solved it at the time by checking the file mod time before reading it, and just waiting to ensure mod time >2 seconds old before assuming the file is "done" being written.
(Note that this also handled ~100mb+ files being written, where the build process might write a chunk of data every 100ms+, but with WatchService I was seeing 100s of inotify notifications for what was essentially a single continuous write.)
When I ported my WatchService code to watchman, I dropped this "ensureSettled" hack, because I assumed watchman's 20ms settle period (which is way lower than the 2s I was using, but hey, it's the default) plus its general robustness compared to the somewhat-beta WatchService would mean it wouldn't be a problem.
But within ~a day of using the watchman-ported code, I'm seeing empty file reads, just like I was with the WatchService.
Any ideas about what I'm missing?
I can add back the ensureSettled hack, but at this point I'm curious about what is going on.
The docs aren't very clear on this, sorry!
Dispatching of subscription notifications is subject to the settle timeout, but since file updates are non-atomic it's likely that the default 20ms kicks in before the file contents are visible to you; under the covers, the kernel generates a series of notifications for the various mutations that you're doing, so if more than 20ms elapses between the truncate and the point where you write (or perhaps flush) the data out, you'll likely get a notification "in the middle".
This stuff is also operating system dependent. Here's an example of a recently discovered and resolved issue: https://github.com/facebook/watchman/commit/bac383c751b248ae742a2a20df3e8272238c0ae2
It doesn't sound like quite the same thing as you're experiencing; it just adds some color to this discussion.
If you already have code to manage the settling in your client, then it may be easier for you to add that back; we do this in watchman-make for example.
You may also wish to try setting https://facebook.github.io/watchman/docs/config.html#settle in a .watchmanconfig file in the root of the directory tree that you're watching and leave that to the watchman server. If/when you change this setting, you will need to delete and restart the watch.
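For illustration only, such a .watchmanconfig might look like the following; the value is in milliseconds and 500 is just an arbitrary example:

{
  "settle": 500
}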
Which you choose depends on how you want to trade ease of configuration against volume of code you want to maintain and (perhaps) volume of support questions from your user base if the .watchmanconfig isn't correctly configured for them.
Note that you can use the command invocation from https://facebook.github.io/watchman/docs/cmd/log-level.html to see the debug logging for the kernel notifications as they come in in real time; this may be helpful for you in understanding exactly which notifications are coming in and when.
Just curious, are you using https://github.com/facebook/watchman/tree/master/java to talk to the watchman server?

If a close is interrupted or fails, what is the state of the fd?

Reading the man page for close, if it's interrupted by a signal then the fd's state is unspecified. Is there a best practice for handling this case, or is it assumed to become the OS's problem afterwards.
I assume that failure after EIO closes the fd appropriately.
If you want your program to run for a long time, a possible file descriptor leak is never only the operating system's problem. Short-lived programs which don't use many file descriptors do, of course, have the option of terminating with the descriptors unclosed and relying on the kernel to close them when the program terminates. So, for the rest of my answer I'll assume your program is long-running.
If your program is not multi-threaded, you have a very easy situation:
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int close_some_fd(int fd, int *was_interrupted) {
    *was_interrupted = 0;
    /* this is point X to which I will draw your attention later. */
    for (;;) {
        if (0 == close(fd))
            return 0;   /* Success. */
        if (EINTR != errno)
            return -1;  /* Some failure, not interrupted by a signal. */
        /* Our close attempt was interrupted. */
        *was_interrupted = 1;
        /* Just use the fd to find out if it is still open. */
        int fdflags = fcntl(fd, F_GETFD);
        if (-1 == fdflags)
            return 0;   /* Interrupted, but the fd is also closed, so we are done. */
        /* Still open: loop round and try the close again. */
    }
}
On the other hand, if your code is multi-threaded, some other thread (and perhaps one you don't control, such as a name service cache) may have called dup, dup2, socket, open, accept or some other similar function that makes the kernel allocate a file descriptor.
To make a similar approach work in such an environment you will need to be able to tell the difference between the file descriptor you started with and a file descriptor newly opened by another thread. Knowing that there is already a lower-numbered fd which is still not open would be enough to rule out many of those cases, but in a multi-threaded environment you have no simple way of establishing that this is still true.
One option is to rely on some property common to all the file descriptors your program works with. For example, if it never uses close-on-exec, you can use fcntl(fd, F_SETFD, FD_CLOEXEC) to set the FD_CLOEXEC flag at the point marked X in the code, and then you just extend the existing fcntl check like this:
        int fdflags = fcntl(fd, F_GETFD);
        if (-1 == fdflags)
            return 0;   /* Interrupted, but the fd is also closed, so we are done. */
        if (fdflags & FD_CLOEXEC) {
            /* Open, and still marked with FD_CLOEXEC: it is still our fd, so retry the close. */
            continue;
        } else {
            /* Open, but no longer marked: another thread has reused the number; ours is closed. */
            return 0;
        }
You can adjust this approach to use something other than FD_CLOEXEC (perhaps fstat, for example, recording st_dev and st_ino), but if you can't be sure what the rest of your multithreaded program is doing, this general idea is likely to be unsatisfying.
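For what it's worth, a rough sketch of that fstat variant (it needs <sys/stat.h>; record the file's identity at point X and compare it after an interrupted close):

/* At point X, before any close attempt: remember which file the fd refers to. */
struct stat before;
int have_before = (0 == fstat(fd, &before));

/* ...later, after close() has failed with EINTR... */
struct stat after;
if (0 != fstat(fd, &after)) {
    /* fstat failed (EBADF): the fd is closed, we are done. */
} else if (have_before &&
           after.st_dev == before.st_dev && after.st_ino == before.st_ino) {
    /* Still the same file: our descriptor survived, so try close() again. */
} else {
    /* A different file: another thread has reused the number; treat ours as closed. */
}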
There's another approach which is also a bit problematic, but which might serve. This is that, instead of closing your file descriptor, you use sendmsg to pass the file descriptor across a Unix domain socket to a separate, single-threaded special-purpose server whose only job is to close the file descriptor. Yes, this is a bit icky. Some entertaining properties of this approach though are:
In order to avoid uncertainty over whether your fd was really passed to the server and closed successfully, you should probably read from a return channel of fd-closed-OK messages coming back from the server. This also avoids needing to block signal delivery while you are executing sendmsg. However, it means a user-space context switch for every file descriptor close (unless you batch them up to amortise the cost).
You need to avoid a situation where thread A may be reading an fd-closed-OK report corresponding to a request made from thread B. You can avoid that problem by serialising close operations (which will limit performance) or by demultiplexing the responses (which is complex). Alternatively, you could use some other IPC mechanism to avoid the need to serialise or demultiplex (SysV semaphores, for example).
For a high-volume process this will place an entertainingly high load on the kernel's file descriptor garbage collector apparatus, which is normally not highly stressed and may therefore give you some interesting symptoms.
As for what I'd do personally in the applications I work with most often, I'd figure out to what extent I could make assumptions about what kind of file descriptor I was closing. If, for example, they were normally sockets, I'd just try this out with a test program and figure out whether EIO normally leaves the file descriptor closed or not. That is, determine whether the theoretically unspecified state of the file descriptor on EIO is, in practice, predictable. This will not work well if the fd could be anything (e.g. disk file, socket, tty, ...). Of course, if you're using some open-source operating system you may just be able to read the kernel source and determine what the possible outcomes are.
Certainly I'd try the above experiment-based system before worrying about sharding the fd-close servers to scale out on fd-closing.

An IOCP documentation interpretation question - buffer ownership ambiguity

Since I'm not a native English speaker I might be missing something so maybe someone here knows better than me.
Taken from WSASend's documentation on MSDN:
lpBuffers [in]
A pointer to an array of WSABUF structures. Each WSABUF structure contains a pointer to a buffer and the length, in bytes, of the buffer. For a Winsock application, once the WSASend function is called, the system owns these buffers and the application may not access them. This array must remain valid for the duration of the send operation.
Ok, can you see the bold text ("the system owns these buffers and the application may not access them")? That's the unclear spot!
I can think of two translations for this line (might be something else, you name it):
Translation 1 - "buffers" refers to the OVERLAPPED structure that I pass to this function when calling it. I may reuse the object only after getting a completion notification for it.
Translation 2 - "buffers" refer to the actual buffers, those with the data I'm sending. If the WSABUF object points to one buffer, then I cannot touch this buffer until the operation is complete.
Can anyone tell what's the right interpretation to that line?
And..... If the answer is the second one - how would you resolve it?
Because to me it implies that for each and every piece of data/buffer I'm sending I must retain a copy of it on the sender side - thus having MANY "pending" buffers (of different sizes) in a high-traffic application, which is really going to hurt scalability.
Statement 1:
In addition to the above paragraph (the "And...."), I thought that IOCP copies the data to be sent into its own buffer and sends from there, unless you set SO_SNDBUF to zero.
Statement 2:
I use stack-allocated buffers (you know, something like char cBuff[1024]; in the function body). If the answer to the main question is the second interpretation (i.e. the buffers must stay as they are until the send is complete), then... that really screws things up big-time! Can you think of a way to resolve it? (I know, I asked it in other words above.)
The answer is that the overlapped structure and the data buffer itself cannot be reused or released until the completion for the operation occurs.
This is because the operation is completed asynchronously, so even if the data is eventually copied into operating-system-owned buffers in the TCP/IP stack, that may not occur until some time in the future, and the write completion is what tells you when it has. Note that write completions may be delayed for a surprising amount of time if you're sending without explicit flow control and relying on the TCP stack to do flow control for you (see here: some OVERLAPS using WSASend not returning in a timely manner using GetQueuedCompletionStatus?).
You can't use stack-allocated buffers unless you place an event in the OVERLAPPED structure and block on it until the async operation completes; there's not a lot of point in doing that, as you add complexity over a normal blocking call and you don't gain a great deal by issuing the call asynchronously and then waiting on it.
In my IOCP server framework (which you can get for free from here) I use dynamically allocated buffers which include the OVERLAPPED structure and which are reference counted. This means that the cleanup (in my case they're returned to a pool for reuse) happens when the completion occurs and the reference is released. It also means that you can choose to continue to use the buffer after the operation and the cleanup is still simple.
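To make that idea concrete, here is a small illustrative sketch (my own names, not the framework's actual API) of a send buffer that embeds the OVERLAPPED and is reference counted so that it outlives the WSASend call:

#include <winsock2.h>
#include <atomic>
#include <cstring>

struct IoBuffer {
    OVERLAPPED        ov;       // first member: the LPOVERLAPPED you dequeue from the IOCP
                                // can be cast straight back to an IoBuffer*
    WSABUF            wsabuf;
    char              data[4096];
    std::atomic<long> refs;

    IoBuffer() : refs(1) {
        std::memset(&ov, 0, sizeof(ov));
        wsabuf.buf = data;
        wsabuf.len = 0;
    }
    void add_ref() { refs.fetch_add(1); }
    void release() { if (refs.fetch_sub(1) == 1) delete this; }   // or return it to a pool
};

// Issue an overlapped send; neither the data nor the OVERLAPPED is touched
// again until the completion for &buf->ov is dequeued from the IOCP.
bool send_async(SOCKET s, IoBuffer *buf) {
    buf->add_ref();                               // this reference is owned by the pending send
    DWORD sent = 0;
    int rc = WSASend(s, &buf->wsabuf, 1, &sent, 0, &buf->ov, NULL);
    if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING) {
        buf->release();                           // the operation never started
        return false;
    }
    return true;                                  // release() the pending reference after the completion
}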
See also here: I/O Completion Port, How to free Per Socket Context and Per I/O Context?