Ubuntu Linux, 2.6.32-45 kernel, 64-bit, Perl 5.10.1
I connect many new IO::Socket::UNIX stream sockets to a server, and mostly they work fine. But sometimes, in a heavily threaded environment on a faster processor, they return "Resource temporarily unavailable" (EAGAIN/EWOULDBLOCK). I use a timeout on the connect, which causes the socket to be put into non-blocking mode for the duration of the connect. But my timeout period never elapses: the call doesn't wait any noticeable time; it returns quickly.
I see that inside IO::Socket, it tries the connect, and if that fails with EINPROGRESS or EAGAIN/EWOULDBLOCK, it does a select to wait for the write bit to be set. This seems normal so far. In my case the select quickly succeeds, implying that the write bit is set, and the code then tries a re-connect. (I guess this is an attempt to surface any pending error from the first attempt?) Anyway, the re-connect fails again with EAGAIN/EWOULDBLOCK.
In my code this is easy to fix with a retry loop. But I don't understand why, when the socket becomes writable, it is not re-connectable. I thought the select guard was always sufficient for a non-blocking connect. Apparently not; so my questions are:
What conditions cause the connect to fail when the select works (the write bit gets set)?
Is there a better way than spinning and retrying to wait for the connect to succeed? Spinning wastes cycles; I'd rather block on something like select/poll, but I still need a timeout.
Thanx,
-- Steve
But I don't understand why, when the socket becomes writable, it is not re-connectable.
I imagine it's because whatever resource became free was snatched up again before you were able to connect. For a UNIX-domain stream socket, that resource is most likely a slot in the server's listen() backlog: on Linux, a non-blocking connect() on an AF_UNIX socket fails with EAGAIN while the backlog is full. Replacing the select with a spin loop would not help that.
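Since there is nothing to select() or poll() on while the backlog is full (the failed connect never created a pending connection to wait on), a bounded retry with a small back-off is about the best you can do. A minimal C sketch of the idea - the function name, the 10 ms back-off and the deadline handling are illustrative, not a fixed recipe:

#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <time.h>
#include <unistd.h>

/* Connect to a UNIX stream socket, retrying on EAGAIN (full listen
 * backlog) with a small back-off until deadline_ms expires.
 * Returns a connected (still non-blocking) fd, or -1 with errno set. */
int connect_unix_deadline(const char *path, int deadline_ms)
{
    struct sockaddr_un sa;
    memset(&sa, 0, sizeof sa);
    sa.sun_family = AF_UNIX;
    strncpy(sa.sun_path, path, sizeof sa.sun_path - 1);

    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (;;) {
        int fd = socket(AF_UNIX, SOCK_STREAM | SOCK_NONBLOCK, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&sa, sizeof sa) == 0)
            return fd;                 /* AF_UNIX connects complete at once */

        int err = errno;
        close(fd);
        if (err != EAGAIN && err != EWOULDBLOCK) {
            errno = err;
            return -1;                 /* a real error */
        }
        clock_gettime(CLOCK_MONOTONIC, &now);
        long elapsed_ms = (now.tv_sec - start.tv_sec) * 1000
                        + (now.tv_nsec - start.tv_nsec) / 1000000;
        if (elapsed_ms >= deadline_ms) {
            errno = ETIMEDOUT;
            return -1;                 /* deadline exceeded */
        }
        usleep(10 * 1000);             /* back off 10 ms, then retry */
    }
}

The same structure carries over to Perl: wrap the IO::Socket::UNIX constructor in a loop, sleep briefly on EAGAIN, and give up once your own deadline passes.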
I am currently in the process of converting some of my code from blocking to non-blocking using the socket2 crate; however, I am running into issues with connecting the socket. The socket never manages to connect before the timeout is exceeded. Despite my attempts to search for examples, I have yet to find any Rust code showing how a non-blocking TCP stream is created.
To give you an idea what I am attempting to do, the code I am currently converting looks roughly like this. This gives me no issues and works fine, but it is getting too costly to create a new thread for every socket.
use std::net::{SocketAddr, TcpStream};

let address = SocketAddr::from(([x, y, z, v], port));
let mut socket = TcpStream::connect_timeout(&address, timeout)?;
At the moment, my code to connect the socket looks like this. Since connect_timeout can only be executed in blocking mode, I use connect instead and regularly poll the socket to check if it is connected. At the moment, I keep getting WouldBlock errors when calling connect, but I do not know what this means. At first I assumed that the connect was proceeding, but that returning the result immediately would require blocking, so a WouldBlock error was given instead. However, due to the issues getting the socket to connect, I am second-guessing those assumptions.
use socket2::{Domain, Protocol, Socket, Type};
use std::io::ErrorKind;
use std::net::SocketAddr;

let address = SocketAddr::from(([x, y, z, v], port));
// Create socket equivalent to TcpStream
let socket = Socket::new(Domain::IPV4, Type::STREAM, Some(Protocol::TCP))?;
// Enable non-blocking mode on the socket
socket.set_nonblocking(true)?;
// What response should I expect? Do I need to bind an address first?
match socket.connect(&address.into()) {
    Ok(_) => {}
    Err(e) if e.kind() == ErrorKind::WouldBlock => {
        // I keep getting this error, but I don't know what it means.
        // Is non-blocking connect unavailable?
        // Do I need to keep trying to connect until it succeeds?
    }
    // Are there any other kinds of errors I should be looking for before failing the connection?
    Err(e) => return Err(e),
}
I am also unsure what the correct approach is to determine whether the socket is connected. At the moment, I attempt to read into a zero-length buffer and check whether I get a NotConnected error. However, I am unsure what WouldBlock means in this context, and I have never gotten a positive response from this approach.
use std::io::{ErrorKind, Read};

let mut buffer = [0u8; 0];
// I also tried self.socket.peer_addr(), but ran into issues where it returned a positive
// response despite not being connected.
match self.socket.read(&mut buffer) {
    Ok(_) => Ok(true),
    // What does WouldBlock mean in this context?
    Err(e) if e.kind() == ErrorKind::WouldBlock => Ok(false),
    Err(e) if e.kind() == ErrorKind::NotConnected => Ok(false),
    Err(e) => Err(e),
}
Each socket is periodically checked until an arbitrary timeout is reached to determine whether it has connected. So far, no socket has passed the connectedness check before reaching its timeout (20 seconds). These tests are all performed in a single-threaded application on Windows, against a known-good server that has been verified to work with the blocking version of my program.
Edit: Here is a minimum reproducible example for this issue. However, it likely won't work if you run it on Rust playground due to network restrictions. https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=a08c22574a971c0032fd9dd37e10fd94
WouldBlock is the expected error when a non-blocking connect() (or other operation) is successfully started in the background. You can then wait up to your desired timeout interval for the operation to finish (use select() or epoll() or other platform-specific notification to detect this). If the timeout elapses, close the socket and handle the timeout accordingly. Otherwise, check the socket's SO_ERROR option to see if the operation was successful or failed, and act accordingly.
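In BSD-socket terms, the pattern looks roughly like this C sketch; socket2's set_nonblocking/connect map onto these calls almost one-to-one (the function name and timeout handling here are illustrative):

#include <errno.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Finish a non-blocking connect() that returned EINPROGRESS/EWOULDBLOCK:
 * wait up to tv for writability, then read the deferred result from
 * SO_ERROR. Returns 0 on success, -1 with errno set on failure/timeout. */
int finish_connect(int fd, struct timeval tv)
{
    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);

    int n = select(fd + 1, NULL, &wfds, NULL, &tv);
    if (n < 0)
        return -1;            /* select() itself failed */
    if (n == 0) {
        errno = ETIMEDOUT;    /* timeout: caller should close the socket */
        return -1;
    }

    int err = 0;
    socklen_t len = sizeof err;
    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0)
        return -1;
    if (err != 0) {
        errno = err;          /* the deferred connect() error */
        return -1;
    }
    return 0;                 /* connected */
}

One Windows-specific wrinkle, relevant here since the tests run on Windows: Winsock reports a failed non-blocking connect through the except set rather than the write set, so also pass the fd as the exceptfds argument to select() and treat that as failure.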
To give you an idea what I am attempting to do, the code I am currently converting looks roughly like this. This gives me no issues and works fine, but it is getting too costly to create a new thread for every socket.
This sounds strongly like an XY problem to me.
I think you misunderstand what 'non-blocking' means. It does not mean that you can simply run multiple sockets in parallel without further effort. It means that every operation that would block returns an error instead, and you have to retry it at a later time.
Raw non-blocking sockets usually don't get used at the application level. They are meant for libraries that build on them and provide some higher-level interface for asynchronous I/O. Non-blocking sockets are hard to get right: they need to be paired with readiness events, because otherwise you can only drive them with CPU-hungry busy loops, which is most likely not what you want.
There's good news, though! Remember the high-level libraries I mentioned that use non-blocking sockets internally? The most famous one right now is called tokio, and it does exactly what you want. It will require you to learn asynchronous programming, but you will grasp it, I'm sure :)
I recommend this read: https://tokio.rs/tokio/tutorial
I'm writing a TCP client using asynchronous calls. If the server is active when the app starts, it connects and talks OK. However, if the first connect fails, every subsequent call to connect() fails with WSAENOTCONN (10057) without producing any network traffic (checked with Wireshark).
Currently the code does not close the socket when it gets the error. The TCP state diagram does not seem to require it. It simply waits for 30 seconds and tries the connect() again.
That subsequent connect() and the following read() both return the WSAENOTCONN error.
Do I need to close the socket and open a new one? If so, which errors require me to close the socket, since there are a lot of different errors, some of which I will probably never see on the test bench.
You can assume this is MS Winsock2, although it is actually Interval Zero RTX 2009, which is subtly different in some places.
Do I need to close the socket and open a new one?
Yes.
If so, which errors require me to close the socket, since there are a lot of different errors, some of which I will probably never see on the test bench.
Almost all errors are fatal to the connection and should result in you closing the socket. EAGAIN/EWOULDBLOCK is a prominent exception, as is EINTR, but I can't think of any others offhand.
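Expressed as code, that rule of thumb is a small classifier. A sketch using the POSIX names from the answer (Winsock's WSAEWOULDBLOCK/WSAEINTR equivalents map onto them directly; EINPROGRESS is added here as the other obviously transient case):

#include <errno.h>
#include <stdbool.h>

/* Returns true if the error is transient and the socket may be kept;
 * treat anything else as fatal: close() and make a fresh socket. */
bool socket_error_is_transient(int err)
{
    switch (err) {
    case EAGAIN:          /* operation would block; retry later */
#if EWOULDBLOCK != EAGAIN
    case EWOULDBLOCK:
#endif
    case EINTR:           /* interrupted by a signal; just retry */
    case EINPROGRESS:     /* non-blocking connect still under way */
        return true;
    default:
        return false;     /* ECONNRESET, ENOTCONN, ...: close the socket */
    }
}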
Do I need to close the socket and open a new one?
Yes.
You should close the socket under any error condition that means the connection is gone for good (say, the peer has closed the connection).
I have some automated tests that I run in order to test a MongoDB-related library. In order to do that, I start a Mongo server with a temporary data directory and on an ephemeral port, connect to it, and run some tests.
This leads to a race condition, obviously. So in my first version of these tests, I paused for a fixed amount of time and waited to make sure mongod had time to start before the tests began.
This was frustrating (and inefficient), so I decided to monitor mongod's standard output stream and wait for a line matching the regular expression:
/\[initandlisten\] waiting for connections/
This got it working. So far so good, but then I planned to circle back and try to find a more robust way to do it. I recalled that a Java library called "embedmongo" runs MongoDB-based tests, and figured it must solve the problem. And it does this (GitHub):
protected String successMessage() {
    return "waiting for connections on port";
}
... and uses that to figure out whether the process has started correctly.
So, are we right? Is examining the mongod process output log (is it ever internationalized? could the wording of the message ever change?) the very best way to do this? Or is there something more robust that we're both missing?
What we do in a similar scenario is:
Try to connect to the configured port (simply new Socket(host, port)) in a loop until it works (10 ms delay); see the sketch after this list. This ensures that the mongo client, which starts an internal monitoring thread, does not throw exceptions due to "connection refused"
Connect to the mongodb and query something. This is important, as all mongo client objects are lazily initialized. (A simple listDatabaseNames() on the client is enough, but make sure to actually read the result.)
All the while, check that the process has not terminated.
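The probe in step 1 is only a few lines. Here is a rough C equivalent of the new Socket(host, port) loop (the port argument, retry count and 10 ms delay are illustrative):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Poll until something accepts connections on 127.0.0.1:port, sleeping
 * 10 ms between attempts, for at most max_tries attempts. */
int wait_for_listener(unsigned short port, int max_tries)
{
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&sa, sizeof sa) == 0) {
            close(fd);
            return 0;          /* the server is accepting connections */
        }
        close(fd);             /* typically ECONNREFUSED while starting */
        usleep(10 * 1000);
    }
    errno = ETIMEDOUT;
    return -1;
}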
I just wrote a small untilMongod command which does just that, which you can use in bash scripting: https://github.com/FGM/untilMongod
Includes a bash + Node.JS example use case.
I'm writing a Unix domain socket server for Linux.
A peculiarity of Unix domain sockets I quickly found out is that, while creating a listening Unix socket creates the matching filesystem entry, closing the socket doesn't remove it. Moreover, until the filesystem entry is removed manually, it's not possible to bind() a socket to the same path again: bind() fails with EADDRINUSE if the path it is given already exists in the filesystem.
As a consequence, the socket's filesystem entry needs to be unlink()'ed on server shutdown to avoid getting EADDRINUSE on server restart. However, this cannot always be done (e.g., after a server crash). Most FAQs, forum posts, and Q&A websites I found only advise, as a workaround, to unlink() the socket prior to calling bind(). In this case, however, it becomes desirable to know whether a process is bound to this socket before unlink()'ing it.
Indeed, unlink()'ing a Unix socket while a process is still bound to it and then re-creating the listening socket doesn't raise any error. As a result, however, the old server process is still running but unreachable: the old listening socket is "masked" by the new one. This behavior has to be avoided.
Ideally, for Unix domain sockets, the socket API should have exposed the same "mutual exclusion" behavior that is exposed when binding TCP or UDP sockets: "I want to bind socket S to address A; if a process is already bound to this address, just complain!" Unfortunately this is not the case...
Is there a way to enforce this "mutual exclusion" behavior? Or, given a filesystem path, is there a way to know, via the socket API, whether any process on the system has a Unix domain socket bound to this path? Should I use a synchronization primitive external to the socket API (flock(), ...)? Or am I missing something?
Thanks for your suggestions.
Note: Linux's abstract-namespace Unix sockets seem to solve this issue, as there is no filesystem entry to unlink(). However, the server I'm writing aims to be generic: it must be robust against both types of Unix domain sockets, as I am not responsible for choosing listening addresses.
I know I am very late to the party and that this was answered a long time ago, but I encountered this while searching for something else and I have an alternative proposal.
When you encounter the EADDRINUSE return from bind(), you can enter an error-checking routine that connects to the socket. If the connection succeeds, there is a running process that is at least alive enough to be accepting connections. This strikes me as the simplest and most portable way of achieving what you want. It has drawbacks in that the server that created the UDS in the first place may actually still be running but "stuck" somehow and unable to do an accept(), so this solution certainly isn't fool-proof, but it is a step in the right direction I think.
If the connect() fails, go ahead and unlink() the endpoint and try the bind() again.
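Put together, a C sketch of that bind-or-probe sequence might look like this (the function name is illustrative; error handling trimmed):

#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Bind a UNIX stream socket to path. On EADDRINUSE, probe the old
 * endpoint with connect(): if someone accepts, a live server owns it;
 * if the connect is refused, the entry is stale, so unlink and rebind. */
int bind_unix_or_reclaim(int fd, const char *path)
{
    struct sockaddr_un sa;
    memset(&sa, 0, sizeof sa);
    sa.sun_family = AF_UNIX;
    strncpy(sa.sun_path, path, sizeof sa.sun_path - 1);

    if (bind(fd, (struct sockaddr *)&sa, sizeof sa) == 0)
        return 0;
    if (errno != EADDRINUSE)
        return -1;

    int probe = socket(AF_UNIX, SOCK_STREAM, 0);
    if (probe < 0)
        return -1;
    if (connect(probe, (struct sockaddr *)&sa, sizeof sa) == 0) {
        close(probe);
        errno = EADDRINUSE;    /* a live server is bound to this path */
        return -1;
    }
    close(probe);              /* ECONNREFUSED: nobody is listening */

    if (unlink(path) < 0 && errno != ENOENT)
        return -1;
    return bind(fd, (struct sockaddr *)&sa, sizeof sa);
}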
I don't think there is much to be done beyond things you have already considered. You seem to have researched it well.
There are ways to determine whether a process is bound to a given unix socket path (obviously lsof and netstat do it), but they are complicated and system-dependent enough that I question whether they are worth the effort to deal with the problems you raise.
You are really raising two problems - dealing with name collisions with other applications and dealing with previous instances of your own app.
By definition, multiple instances of your program should not be trying to bind to the same path, so that probably means you only want one instance to run at a time. If that's the case, you can just use the standard pid-file lock technique so two instances don't run simultaneously (see the sketch below). You shouldn't be unlinking the existing socket, or even running, if you can't get the lock. This takes care of the server-crash scenario as well. If you can get the lock, then you know you can unlink the existing socket path before binding.
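A minimal sketch of that pid-file lock, using flock() so the lock disappears with the process even after a crash (the function name, path handling and permissions are illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

/* Take an exclusive, non-blocking lock on a pid file. The kernel drops
 * the lock when the process dies (even on a crash), so a stale file
 * never blocks a restart. Returns the held fd, kept open for the life
 * of the process, or -1 if another instance already holds the lock. */
int acquire_pid_lock(const char *lockpath)
{
    int fd = open(lockpath, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
        close(fd);
        return -1;             /* another instance is running */
    }
    /* We own the lock: record our pid for diagnostics (best effort). */
    char buf[32];
    int n = snprintf(buf, sizeof buf, "%ld\n", (long)getpid());
    if (ftruncate(fd, 0) == 0)
        (void)write(fd, buf, n);
    return fd;
}

Once the lock is held, it is safe to unlink() any stale socket path and bind().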
There is not much you can do, AFAIK, to control other programs creating collisions. File permissions aren't perfect, but if the option is available to you, you could put your app in its own user/group. If there is an existing socket path and you don't own it, then don't unlink it; put out an error message and let the user or sysadmin sort it out. Using a config file to make the path easily changeable - and available to clients - might work. Beyond that you would almost have to go to some kind of discovery service, which seems like massive overkill unless this is a really critical application.
On the whole you can take some comfort that this doesn't actually happen often.
Assuming you only have one server program that opens that socket, what about this:
Exclusively create a file that contains the PID of the server process (maybe also the path of the socket)
If you succeed, then write your PID (and socket path) there and continue creating the socket.
If you fail, the socket was created before (most likely), but the server may be dead. Therefore read the PID from the file that exists, and then check whether such a process still exists (e.g. using kill() with signal 0):
If a process exists, it may be the server process, or it may be an unrelated process
(More steps may be needed here)
If no such process exists, remove the file and begin trying to create it exclusively.
Whenever the process terminates, remove the file after having closed (and removed) the socket.
If you place both the socket and the lock file in a volatile filesystem (/tmp in older ages, /run in modern times), then a reboot will most likely clear old sockets and lock files automatically.
Unless administrators like to play with kill -9, you could also establish a signal handler that tries to remove the lock file upon receiving fatal signals.
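A condensed C sketch of the create-exclusively-then-probe loop described above (the pid-reuse ambiguity from the middle step is deliberately left as a comment, since more steps may be needed there):

#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Claim ownership via a pid file: create it exclusively, or decide
 * whether the recorded owner is still alive using kill(pid, 0). */
int claim_pid_file(const char *pidpath)
{
    for (;;) {
        int fd = open(pidpath, O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (fd >= 0) {
            dprintf(fd, "%ld\n", (long)getpid()); /* we are the server */
            close(fd);
            return 0;
        }
        if (errno != EEXIST)
            return -1;                 /* unexpected filesystem error */

        long pid = 0;
        FILE *f = fopen(pidpath, "r"); /* file exists: read the old pid */
        if (f) {
            if (fscanf(f, "%ld", &pid) != 1)
                pid = 0;
            fclose(f);
        }
        if (pid > 0 && (kill((pid_t)pid, 0) == 0 || errno == EPERM))
            return -1;   /* a process with that pid is alive; assume it is
                          * the server (more checks would be needed here to
                          * rule out an unrelated process reusing the pid) */
        unlink(pidpath); /* stale: remove it and retry the exclusive create */
    }
}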
I've been a C coder for a while now - neither a newbie nor an expert. I have a daemonized application in C on PPC Linux. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll for multiplexing connections via a Unix socket. A user-submitted string is parsed for certain characters/words using strstr(), and if they are found, the server spawns 4 joinable threads that contact different websites simultaneously. In each thread I use socket, connect, write and read to interact with those web servers via TCP on port 80. All connections and writes seem successful. Reads from the web server sockets fail, however, in one of two ways: (A) 3 threads seem to hang, and only one thread returns -1 with errno set to 104; the responding thread takes about 10 minutes - an eternity :-(. (I read somewhere that 104 - is it EINTR? - in the network context suggests 'connection reset by peer'.) Or (B) I get 0 bytes from 3 threads, and only 1 of the 4 threads actually returns some data. Aren't socket read/write thread-safe? I use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc.
I doubt that the web hosts are actually resetting the connection, because when I run a single-threaded standalone version (everything else equal), it all works perfectly - but of course in series, not in parallel.
There's a second problem too (oops): I can't write back to the client that connects to my epoll-ed Unix socket. My daemon hangs and hogs the CPU at over 100% forever, yet nothing is written to the client's end. I am sure the client (a very typical PHP socket application) hasn't closed the connection when this happens - no errors are detected either. Any ideas?
I cannot figure out what is wrong, even with Valgrind, GDB, or heavy logging. Kindly help where you can.
Yes, read/write are thread-safe. But beware of gethostbyname() and getservbyname() if you're using them - they return pointers to static data, and may not be thread-safe.
errno 104 is ECONNRESET, 'Connection reset by peer' (not EINTR). Use strerror() or perror() to get the textual error message for a particular errno code.
The best way to figure out what's going wrong is often to do very detailed logging - log the result of every operation, plus details like the IP address/port you connect to, the number of bytes read/written, the thread id, and so forth. And, of course, make sure your logging code is thread-safe :-)
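For instance, a tiny helper along these lines keeps records from different threads from interleaving and captures the details listed above (a sketch, not a logging framework; the names are illustrative):

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* One mutex serializes whole lines, so records from different threads
 * never interleave; every record carries the thread id and errno text. */
static pthread_mutex_t log_mu = PTHREAD_MUTEX_INITIALIZER;

void log_op(const char *op, int fd, ssize_t result, int err)
{
    pthread_mutex_lock(&log_mu);
    fprintf(stderr, "[thread %lu] %s(fd=%d) -> %zd%s%s\n",
            (unsigned long)pthread_self(), op, fd, result,
            err ? " errno: " : "", err ? strerror(err) : "");
    pthread_mutex_unlock(&log_mu);
}

/* Usage after each socket call, e.g.:
 *     ssize_t n = read(fd, buf, sizeof buf);
 *     log_op("read", fd, n, n < 0 ? errno : 0);
 */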
Getting an ECONNRESET after 10 minutes sounds like the result of your connection timing out. Either the web server isn't sending the data or your app isn't receiving it.
To test the former, hook up a program like Wireshark to the local loopback device and look for traffic to and from the port you are using.
For the latter, take a look at the epoll man page. It mentions a scenario where using edge-triggered events can result in a lockup: there is still data in the buffer, but no new data comes in, so no new event is triggered.
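The usual way to avoid that lockup is to drain the socket until read() reports EAGAIN every time an edge-triggered event fires. Roughly, as a C sketch (assuming the fd is non-blocking, as EPOLLET requires):

#include <errno.h>
#include <unistd.h>

/* With EPOLLET, one event may announce many bytes: keep reading until
 * EAGAIN, otherwise data left in the buffer will never trigger a new
 * event and the connection stalls. */
void drain_fd(int fd)
{
    char buf[4096];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            continue;           /* process the n bytes in buf here */
        if (n == 0)
            break;              /* peer closed the connection */
        if (errno == EINTR)
            continue;           /* interrupted; retry */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            break;              /* fully drained; wait for the next event */
        break;                  /* real error; caller should close fd */
    }
}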