Can recv return EHOSTUNREACH? - sockets

According to Unix Network Programming by Stevens, EHOSTUNREACH can be returned when readline/recv is used.
However, the Linux man pages do not list EHOSTUNREACH as a possible error for recv.
Who is right?

If an error occurs in the communication, the error is set on the socket and delivered with the next system call related to that socket. The EHOSTUNREACH error can be triggered (among other things) by sending a UDP packet to a target and getting an ICMP unreachable back. Since this ICMP message comes back only after the send call has completed, it will not be returned for the send but only on the next system call on the socket, which might well be a recv.
Thus I would suggest that this error can be returned on Linux too, but I might be wrong. In general, Linux is not UNIX; systems evolve and documentation is often flawed. If you look at the documentation for recv on various platforms, you will see that OpenBSD documents EHOSTUNREACH as a possible error while FreeBSD, NetBSD, Linux, and others do not. I would suggest you expect the unexpected :)
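A small sketch of that deferred-error pattern, assuming a Linux-like stack: connect a UDP socket (so the kernel can associate incoming ICMP errors with it), send a datagram toward an unreachable target, and the error may surface on a later recv. The address 192.0.2.1 is just a TEST-NET placeholder, and whether you actually see EHOSTUNREACH (rather than, say, ECONNREFUSED or no error at all) depends on which ICMP message, if any, comes back:

    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9);                /* discard port */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);

        /* Connecting the UDP socket lets the kernel deliver ICMP
           errors for it back to us as socket errors. */
        connect(fd, (struct sockaddr *)&dst, sizeof(dst));
        send(fd, "ping", 4, 0);                 /* usually succeeds */
        sleep(1);                               /* give the ICMP time to arrive */

        char buf[16];
        if (recv(fd, buf, sizeof(buf), MSG_DONTWAIT) < 0)
            perror("recv");  /* EHOSTUNREACH if the ICMP arrived, EAGAIN otherwise */
        close(fd);
        return 0;
    }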

Related

Network packet loss causes client code to act strange

I am facing an issue and need some help coming up with the best way to resolve it.
Here is the problem:
I have server code running which has a socket that is listening to accept new incoming connections.
I then attempt to start a client, which also has a socket that is listening to accept new incoming connections.
The client code begins with accepting a new connection on the listening socket file descriptor and gets a new socket file descriptor for I/O.
The server does the same thing and gets a new socket file descriptor for I/O.
Note: The client is not completely up, yet. It needs to receive some bytes from the server and send some before it can start.
I then introduce some packet loss over the TCP/IP network connection. This causes certain errors; for example, the recv() system call in the client process sees no received bytes, so the client closes the connection and the associated socket file descriptor. However, this leaves the client process hanging, since there are other descriptors in the FD_SET but none of them will ever be ready for I/O, so pselect() keeps returning 0 file descriptors ready for I/O. The client needs to send and receive certain bytes over the connection before it can start up.
My question is: what should I do here?
I researched the SO_KEEPALIVE option, set on the new socket obtained from the accept() system call. But I do not think that would resolve my problem, especially if the network packet loss is ongoing.
Should I kill the client process if I realize there are no file descriptors ready for I/O and never will be? Is there a better way to approach this?
If I'm reading the question correctly, the core of the question is: "what should your client program do when a TCP connection that is central to its functionality has been broken?"
The answer to that question is really a matter of preference -- what would you like your client program to do in that case? Or to put it another way, what behavior would your users find most useful?
In many of my own client programs, I have logic included such that if the TCP connection to the server is ever broken, the client will automatically try to create a new TCP connection to the server and thereby recover its connectivity and useful functionality as soon as possible.
The other obvious option would be to just have the client quit when the connection is broken, perhaps with some sort of error indication so that the user will know why the client went away. (Perhaps an error dialog that asks whether the user would like to try to reconnect?)
SO_KEEPALIVE is probably not going to help you much in this scenario, by the way. Despite its name, its purpose is to help a program discover in a more timely manner that TCP connectivity has been lost, not to try harder to keep a TCP connection from being lost. (And it doesn't even serve that purpose particularly well, since many TCP stacks send only one keepalive packet per hour or so, which means that even with SO_KEEPALIVE enabled it can be a very long time before your program starts receiving errors reflecting the loss of network connectivity.)
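If you do experiment with SO_KEEPALIVE anyway, the hour-scale default can be tightened per socket on Linux. A minimal sketch, to be applied to the descriptor returned by accept(); the 30/5/3 values are illustrative, not recommendations:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int enable_keepalive(int fd)
    {
        int on = 1;
        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
            return -1;
    #ifdef TCP_KEEPIDLE
        /* Linux-specific knobs: start probing after 30 s of idle time,
           probe every 5 s, give up after 3 unanswered probes. */
        int idle = 30, intvl = 5, cnt = 3;
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
    #endif
        return 0;
    }

Even with these settings, keepalive only tells you sooner that the connection is dead; the reconnect-or-quit decision above still applies.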

Can bind() ever return EINPROGRESS in real systems?

According to POSIX, if you call bind on a non-blocking socket, it's allowed to return EINPROGRESS and complete asynchronously. (See the POSIX specification of bind().)
I checked the source of libuv and Twisted, and as far as I can tell they both call bind on non-blocking sockets without doing anything to handle this error. Neither the Linux nor the FreeBSD bind(2) man page mentions this as a possible outcome.
Does this actually happen on any real systems? And if so, how do you get notified when the bind has completed?
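For what it's worth, guarding against the POSIX-permitted case is cheap. A hedged sketch; since POSIX offers no portable completion notification for an asynchronous bind, about all a caller can do is treat EINPROGRESS as a failure (or retry after a delay):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int bind_checked(int fd, const struct sockaddr *addr, socklen_t len)
    {
        if (bind(fd, addr, len) == 0)
            return 0;
        if (errno == EINPROGRESS) {
            /* POSIX says the assignment is performed asynchronously;
               there is no portable way to learn when it finishes. */
            fprintf(stderr, "bind: completing asynchronously?\n");
        }
        return -1;
    }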

What should I do if closesocket fails with WSAENETDOWN?

Apparently, whenever the closesocket function fails with WSAENETDOWN, “The network subsystem has failed”. But what exactly does that mean? When does it happen? If it does happen, is the socket descriptor still closed? How should I handle it?
Regarding my first question, the Windows Sockets Error Codes page says that WSAENETDOWN means
Network is down.
A socket operation encountered a dead network. This could indicate a serious failure of the network system (that is, the protocol stack that the Windows Sockets DLL runs over), the network interface, or the local network itself.
But that does not really help me either.
Note that POSIX specifies ENETDOWN only for connect, send, sendto, sendmsg, and write, not for close. The Winsock counterpart is documented for far more functions, closesocket among them.
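In the absence of a clear answer, one defensible pattern is to log the failure and not reuse the handle either way; whether the handle is actually released on WSAENETDOWN is exactly the ambiguity raised above, so this sketch deliberately does not retry (link with ws2_32):

    #include <winsock2.h>
    #include <stdio.h>

    void close_logged(SOCKET s)
    {
        if (closesocket(s) == SOCKET_ERROR) {
            int err = WSAGetLastError();
            if (err == WSAENETDOWN)
                fprintf(stderr, "closesocket: network subsystem down (%d)\n", err);
            else
                fprintf(stderr, "closesocket: error %d\n", err);
            /* No retry: treat the handle as unusable from here on. */
        }
    }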

should I be using sockets or packet capture? perl

I'm trying to spec out the foundations for a server application whose purpose will be to:
1. 'receive' TCP and/or UDP packets
2. interpret the contents (i.e. header values)
To add more detail, this server will receive SIP INVITEs and respond with a '302 redirect'.
I have experience with Net::Pcap and Perl, and know I could achieve this by looping over filtered packets, decoding them, and then using something like Net::SIP to respond.
However, there's a lot of bloat in both of these modules/applications that I don't need. The server will be under heavy load, and when I run tcpdump on its own it already loses packets in the kernel because of that load, so I worry this approach won't be appropriate :(
Should I be able to achieve the same thing by 'listening' on a socket (using IO::Socket, for example) and decoding the packet?
Unfortunately, it's hard to tell from debugging whether IO::Socket will ever give me the opportunity to see a raw packet; instead it automatically decodes the message into a readable format!
tl;dr: I want to capture lots of SIP INVITEs, analyse the header values, and respond with a SIP 302 redirect. Is there a better way than using tcpdump (via Net::Pcap) to achieve this?
Thanks,
Moose
Is there a better way than using tcpdump (via Net::Pcap) to achieve this?
Yes. Using libpcap (that's what you meant instead of tcpdump in that question) is a bad way to implement a TCP-based service, as you will have to reimplement much of TCP yourself (libpcap gives you raw network-layer packets), and the packets your program gets will also get delivered to the Internet protocol stack on your machine, so:
if there's nothing on your machine listening on the TCP port to which the other machines are trying to connect, the connection requests will get an RST from the TCP code, and the other machines will think the connection attempt failed;
if there is something on your machine listening on that port, it'll probably accept the connection, and it and your program will both try to communicate with the other machine, which will probably confuse its TCP stack and cause various bad and random things to happen.
It's not much better for UDP:
if there's nothing on your machine listening on the UDP port to which the other machines are sending, the incoming datagrams will probably trigger an ICMP Port Unreachable message from the UDP code, which may make the senders think the attempt failed;
if there is something on your machine listening on that port, it'll probably consume the datagrams, and it and your program will both try to communicate with the other machine, which will probably confuse its SIP stack and cause various bad and random things to happen.
IO::Socket will probably not give you raw packets, and that's a good thing; you won't have to implement your own IP and TCP/UDP stack. If your goal is to implement a redirect server on your machine, you have no need to receive raw packets; you want to receive SIP INVITEs with all the lower-level processing done for you by your machine's IP/TCP/UDP stack.
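To make that concrete, here is a sketch in C of what 'listening on a socket' buys you (the same shape applies to IO::Socket::INET with Proto => 'udp'): bind a UDP socket to the conventional SIP port and the kernel hands you the SIP message itself, IP/UDP headers already stripped. Error handling is omitted for brevity:

    #include <stdio.h>
    #include <sys/types.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5060);          /* conventional SIP port */
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        char buf[4096];
        struct sockaddr_in peer;
        socklen_t plen = sizeof(peer);
        ssize_t n = recvfrom(fd, buf, sizeof(buf) - 1, 0,
                             (struct sockaddr *)&peer, &plen);
        if (n > 0) {
            buf[n] = '\0';
            /* buf starts directly with the SIP request line,
               e.g. "INVITE sip:... SIP/2.0"; no headers to strip. */
            printf("%s\n", buf);
        }
        return 0;
    }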
If you already have a SIP implementation on your machine, and you want to act as a "firewall" for it, so that, for some INVITEs, you send back a 302 redirect and prevent the SIP implementation on your machine from ever seeing the INVITEs in question, you will need to use the same mechanism that your particular OS uses to implement firewalls. There is no libpcap-like wrapper for those mechanisms, as far as I know.

Will a TCP RST cause a host to drop the receive buffer?

Upon receiving a TCP RST packet, will the host drop all the remaining data in the receive buffer that has already been ACKed by the remote host but not read by the application process using the socket?
I'm wondering whether it's dangerous to close a socket as soon as I'm no longer interested in what the other host has to say (e.g. to conserve resources); that is, whether it could cause the other party to lose data I've already sent but that they have not yet read.
Should RSTs generally be avoided and indicate a complete, bidirectional failure of communication, or are they a relatively safe way to unidirectionally force a connection teardown as in the example above?
I've found some nice explanations of the topic; they indicate that data loss is quite possible in that case:
http://blog.olivierlanglois.net/index.php/2010/02/06/tcp_rst_flag_subtleties
http://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable also gives some more information on the topic, and offers a solution that I've used in my code. So far, I've not seen any RSTs sent by my server application.
Application-level close(2) on a socket does not produce an RST but a FIN packet sent to the other side, which results in the normal four-way connection tear-down. RSTs are generated by the network stack in response to packets targeting a non-existing TCP connection. (One caveat, which the articles linked above discuss: if there is unread data in the receive buffer when you close, many stacks send an RST instead of a FIN, and that RST can destroy data in flight.)
On the other hand, if you close the socket while the other side still has data to write, one of its subsequent send(2) calls will fail with EPIPE (and raise SIGPIPE, unless that signal is suppressed).
With all of the above in mind, you are much better off designing your own protocol on top of TCP that includes an explicit "logout" or "disconnect" message.
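The SO_LINGER article linked above also describes a close sequence that avoids generating a data-destroying RST in the first place: shut down your sending side, then drain the socket until the peer closes. A minimal sketch; a real implementation would add a timeout so a misbehaving peer cannot keep you in the drain loop forever:

    #include <unistd.h>
    #include <sys/socket.h>

    void graceful_close(int fd)
    {
        char buf[4096];

        shutdown(fd, SHUT_WR);      /* send our FIN; we will write no more */
        for (;;) {                  /* drain whatever the peer still sends */
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n <= 0)             /* 0 = peer's FIN, <0 = error */
                break;
        }
        close(fd);
    }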