While trying to do some socket programming, I came across an interesting detail: the size of the sockaddr_in struct, whose data area is 14 bytes.
From my understanding, that extra padding is there so the different kinds of address structures can be typecast to one another. But in theory, couldn't you sneak malicious code into any padding left over after the type cast?
Extra padding to me seems like unused memory.
The sockaddr and sockaddr_in structures are typically only used for communication between an application and the kernel/runtime. They are constructed by functions like inet_aton(), and passed to functions like bind(). They are generally not sent over the network, or stored in files. As such, there's no way to "sneak in" a "malicious" (or otherwise faulty) value. Even if this did occur, the extra data would simply be ignored by the recipient.
The padding was probably added in an attempt to support future network address formats which might be larger than IPv4 addresses. Ironically, this was insufficient to support IPv6 addresses, which are 16 bytes long.
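For reference, the classic definitions look roughly like this (exact field types vary slightly between platforms):

struct sockaddr {
    sa_family_t sa_family;    /* address family, e.g. AF_INET */
    char        sa_data[14];  /* protocol-specific address data */
};

struct sockaddr_in {
    sa_family_t    sin_family;   /* AF_INET */
    in_port_t      sin_port;     /* port number, network byte order */
    struct in_addr sin_addr;     /* 32-bit IPv4 address */
    unsigned char  sin_zero[8];  /* padding so sizeof matches struct sockaddr */
};

Only sin_zero is padding; it is conventionally zeroed by the application and, as noted above, simply ignored by the kernel.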
I have worked with network programming before, but this is my first foray into netlink sockets.
I have chosen to study the 'connector' type of netlink socket. As with any kernel component, it has a userspace counterpart as well. The Linux kernel ships a sample program called ucon.c which can be used to build userspace programs based on the aforementioned connector netlink sockets.
Here I wish to pinpoint the parts of the program whose workings I want to confirm, and the parts whose logic I do not follow. Enough talking; here we go. Please correct me wherever I go astray.
As far as I have understood, netlink sockets are an IPC method used to connect processes on the same machine, and hence a process ID is used as an identifier. And since netlink messages can be multicast, another identifier needed by the netlink socket is the message group: all components connected to the same message group are in fact related. So while in the case of IPv4 we use a sockaddr_in in place of the generic sockaddr, here we use a sockaddr_nl, which contains the above-mentioned identifiers.
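For reference, that address structure is declared in linux/netlink.h roughly as follows (modulo kernel-internal typedefs):

struct sockaddr_nl {
    sa_family_t    nl_family;  /* AF_NETLINK */
    unsigned short nl_pad;     /* always zero */
    __u32          nl_pid;     /* port ID, often the process ID */
    __u32          nl_groups;  /* bitmask of multicast groups */
};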
Now, since we are not going to use the kernel's TCP/IP stack for netlink messages, netlink packets can be considered raw (please correct me here if I am wrong). Hence the only encapsulation a netlink packet goes through is the netlink message header, defined as nlmsghdr.
Now coming to our program ucon: main() first creates a NETLINK-family socket with the connector protocol. Then it fills the aforementioned netlink socket address structure with the relevant information. (To be a little experimental here, I have added an entry in the connector.h file.) Now here comes my first question.
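The setup looks roughly like this (a simplified sketch from memory, not verbatim ucon.c; the group bitmask is whatever connector.h defines for your connector entry):

int s = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);

struct sockaddr_nl l_local;
l_local.nl_family = AF_NETLINK;
l_local.nl_groups = 0x123;  /* bitmask of multicast groups to join */
l_local.nl_pid = 0;         /* 0 lets the kernel assign the port ID */

bind(s, (struct sockaddr *)&l_local, sizeof(l_local));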
A connector message has a certain type, defined in connector.h. Now, this connector message structure is something that is completely opaque to netlink, right? As in, as far as netlink is concerned, this is all just payload. Right?
Moving on, what exactly does the nl-group field mean within the netlink message header structure? The definition does not actually contain an element of this name. So are we using overlay techniques to fill certain fields of the netlink message header? And if so, what exactly is the correspondence? I cannot seem to find it anywhere.
So after binding the socket address to the socket, it sends 10,000 unique pieces of connector-based data, which, as far as netlink is concerned, is pure payload. But what is strange about these messages is that all of them seem to have the same sequence number.
Moving on, we find ourselves in the netlink_send subroutine, which sends these packets via the socket we bound above. This subroutine uses a variety of netlink helper macros to manipulate the data to send. As we said above, the main() function sends 10,000 pieces of data, each of which is zero-length and requires no acknowledgement, since the ack field is 0 (please correct me if I am wrong here). So each 'packet' is nothing but a connector message header without anything in it. Right?
Now what is surprising is that the netlink_send function uses the same sequence number as main(), since it is a global variable. However, after the post-increment in main(), it is now '1'. So basically our netlink conversation starts with a sequence number of '1'. Is that fine?
Looking into some of the helper macros defined in linux/netlink.h, I will try to summarize my understanding of the ones that are directly or indirectly being used in this program.
#define NLMSG_LENGTH(len) ((len)+NLMSG_ALIGN(NLMSG_HDRLEN))
So this macro will first align the netlink message header length and then add the payload length to it. For our case the netlink payload is a connector header without any payload of its own.
In our case, this macro is used like so:
nlh->nlmsg_len = NLMSG_LENGTH(size - sizeof(*nlh));
Here, what I do not understand is the actual payload of the netlink message. In the above case, it is the size of the connector message header (since the connector message itself contains no payload of its own) minus the pointer (which points to the first byte of the netlink message and thereby the netlink message header). And this pointer is (like any other pointer variable) equal to the machine word size, which in my case is 4 bytes. Why are we subtracting this from the connector message header?
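For context, the surrounding code in netlink_send() looks roughly like this (a reconstructed sketch, not verbatim ucon.c). Note that sizeof(*nlh) is the size of the pointed-to struct nlmsghdr (16 bytes), not the size of the pointer variable itself:

char buf[128];
unsigned int size = NLMSG_SPACE(sizeof(struct cn_msg) + msg->len);

struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
/* NLMSG_SPACE() already counted the (aligned) header, so it is
   subtracted back out before NLMSG_LENGTH() adds it in again */
nlh->nlmsg_len = NLMSG_LENGTH(size - sizeof(*nlh));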
After that, we send the message over this netlink socket just like over any other IPv4 socket. I hope to hear from you fellows out there with regard to the above-mentioned questions. I have included a few sentences of context before each actual question, as my post is rather long. I hope it will be useful to more people than just myself.
Regards.
bind() and accept() make you specify the size of the struct you pass as the 2nd parameter. But I've only ever seen the size of the whole struct being passed. Why do they make you specify the size? Are there any instances where you would pass a different number?
Different socket protocol families use different types of structures. For example, TCP and UDP sockets using IPv4 addresses utilize a sockaddr_in structure, which is 16 bytes in size, whereas IPv6 addresses utilize a sockaddr_in6 structure instead, which is 28 bytes.
The size of the sockaddr struct can vary, for instance depending on whether you use IPv4 or IPv6.
The size is specified because these are system calls, which execute in kernel mode and the kernel's address space, and the kernel doesn't otherwise know how much data to copy between kernel address space and user address space. It can't see for example whether you are using an IPv4 or IPv6 address structure.
The size could depend on the implementation, the type of socket, and/or the platform. So if you pass the size with the call, the same code will work on different platforms, no matter what extra fields or padding are used.
This is due to historical reasons: it's a poor man's function overloading, letting the same functions accept different types of socket addresses (IPv4, UNIX, IPv6). See page 68 of UNIX Network Programming: The Sockets Networking API for more details.
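A minimal sketch of the pattern (the port is made up; the zeroed address means "any"):

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

struct sockaddr_in addr4;
memset(&addr4, 0, sizeof(addr4));
addr4.sin_family = AF_INET;
addr4.sin_port = htons(8080);
int fd4 = socket(AF_INET, SOCK_STREAM, 0);
bind(fd4, (struct sockaddr *)&addr4, sizeof(addr4));   /* 16 bytes */

struct sockaddr_in6 addr6;
memset(&addr6, 0, sizeof(addr6));
addr6.sin6_family = AF_INET6;
addr6.sin6_port = htons(8080);
int fd6 = socket(AF_INET6, SOCK_STREAM, 0);
bind(fd6, (struct sockaddr *)&addr6, sizeof(addr6));   /* 28 bytes */

Same function, same pointer type, different actual structure; the size argument is the only way the kernel can tell them apart.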
I'm trying to do some IP-agnostic coding, and as suggested by various sources I tried to use sockaddr_storage. However, all the API calls (getaddrinfo, getnameinfo) still depend on struct sockaddr, and casting between them isn't exactly a good option; it gives rise to a lot of other problems.
And casting to sockaddr_in and sockaddr_in6 separately sort of defeats the purpose of me trying to use sockaddr_storage.
Has anybody effectively used sockaddr_storage in developing a simple client/server socket application?
The problem with doing IPv4 and IPv6 programming jointly is that a plain sockaddr struct is not big enough to hold a sockaddr_in6. So if you need to blindly pass around an address that could be either a sockaddr_in or a sockaddr_in6, sockaddr_storage is a bit easier to use.
At the end of the day, whether you are using sockaddr_in, sockaddr_in6, or sockaddr_storage, you'll have to cast those pointers to make a call to sendto, recvfrom, connect, accept, and many other socket functions. It's just a known nuance of socket programming. Just let go of the feeling of doing something unsafe. Your code will be ok.
Now, when writing network code that is meant to work for both IPv4 and IPv6, you can easily fall into the trap of having an abundance of switch statements to handle the different address families. The code then gets messy, like the following:
if (addr.ss_family == AF_INET)
    sendto(sock, buffer, len, 0, (sockaddr*)&addr, sizeof(sockaddr_in));
else if (addr.ss_family == AF_INET6)
    sendto(sock, buffer, len, 0, (sockaddr*)&addr, sizeof(sockaddr_in6));
And then that type of "if family == AF_INET" expression can easily start to repeat itself over and over. That's what you want to avoid.
Assuming you are using C++, you'll find that an abstraction class for a socket address object is incredibly useful. I have an example on github here and here. The CSocketAddress class is backed by a union of {sockaddr, sockaddr_in, sockaddr_in6} and can be constructed with a sockaddr_storage. If I had known about sockaddr_storage before I had started this class, I would have used that instead of the union thing. In any case, it allows me to write code as follows:
CSocketAddress addr;
...
sendto(sock, buffer, len, 0, addr.GetSockAddr(), addr.GetSockAddrLength());
Likewise, an "accept" statement looks like this:
sockaddr_storage addrstorage = {};
socklen_t len = sizeof(sockaddr_storage);
accept(sock, (sockaddr*)&addrstorage, &len);
CSocketAddress addr(addrstorage); // construct an address object to pass around everywhere else
This was incredibly helpful for the code paths that call bind, send, and recv. Now my STUN server and client code paths no longer have to know anything about the family type of the socket address. They just work with "CSocketAddress" objects. The only IPV4 and IPV6 specific code is during the client and server initialization - when the address objects are actually constructed. Fortunately, that was partially abstracted out as well.
You might also want to peruse the helper functions here. There's some more useful stuff for resolving hostnames, enumerating adapters, etc... in an IP agnostic way. It's Linux code, but some of it should map ok to Windows and winsock.
I am almost done adding TCP support to this code base. In the process of adding support for SOCK_STREAM, I have not had to make a single change nor add any new code to deal with the differences between the IPv4 and IPv6 address structures.
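For what it's worth, the union behind a class like that can be sketched in a few lines (a hypothetical plain-C sketch, not the actual github code):

#include <netinet/in.h>
#include <sys/socket.h>

union socket_address {
    struct sockaddr     sa;   /* generic view: provides sa_family */
    struct sockaddr_in  sa4;  /* IPv4 view */
    struct sockaddr_in6 sa6;  /* IPv6 view */
};

socklen_t socket_address_length(const union socket_address *u) {
    switch (u->sa.sa_family) {
    case AF_INET:  return sizeof(u->sa4);
    case AF_INET6: return sizeof(u->sa6);
    default:       return sizeof(*u);
    }
}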
I don't generally see the need for struct sockaddr_storage. Its purpose is to allocate enough space for the sockaddr structure of any given protocol, but how often do you need to do that in IP-version-agnostic code? Generally you call getaddrinfo() and it gives you a bunch of struct sockaddr *s; you don't care whether they're sockaddr_in or sockaddr_in6, you just pass them along as-is to bind() and connect() (no casting required).
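A minimal sketch of that pattern (hostname and port are placeholders):

#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

struct addrinfo hints, *res, *p;
memset(&hints, 0, sizeof(hints));
hints.ai_family = AF_UNSPEC;     /* IPv4 or IPv6, whichever works */
hints.ai_socktype = SOCK_STREAM;

int fd = -1;
if (getaddrinfo("example.com", "80", &hints, &res) == 0) {
    for (p = res; p != NULL; p = p->ai_next) {
        fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
            break;               /* connected, no casts anywhere */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
}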
In typical client/server code, the main place I can think of where struct sockaddr_storage is useful is to reserve space for the second parameter to accept(). In this case I agree it's ugly to have to cast it to struct sockaddr * once for accept() and again for getnameinfo(). But I can't see a way around those casts. This is C. Structure inheritance always involves lots of casts.
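For example, a sketch of that accept()/getnameinfo() pair (assuming listenfd is an established listening socket):

#include <netdb.h>
#include <sys/socket.h>

struct sockaddr_storage ss;
socklen_t slen = sizeof(ss);
int conn = accept(listenfd, (struct sockaddr *)&ss, &slen);

char host[NI_MAXHOST], serv[NI_MAXSERV];
getnameinfo((struct sockaddr *)&ss, slen,
            host, sizeof(host), serv, sizeof(serv),
            NI_NUMERICHOST | NI_NUMERICSERV);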
I am studying a simple web server in C, and came up with some questions. How is IPv6 used with TCP? To use IPv6, do we have to use some modified version of TCP? If so, what do we have to change? I have read about little-endian and big-endian byte order, but I am not sure whether there are special cases for IPv6.
If you want the more gory details of the API changes, they're here: http://www.faqs.org/rfcs/rfc2553.html
Mostly it's a couple of longer address structures to pass in, which can hold the longer addresses, plus a new family and protocol name, so the API can distinguish which struct you are using. Byte ordering is the same.
The actual TCP SYN, SYN/ACK, ACK handshake and all that is identical; it is literally the same TCP carried in a different IP-layer frame with longer addresses and other changes.
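A minimal sketch of what changes on the API side (the port is a placeholder):

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

struct sockaddr_in6 sa6;
memset(&sa6, 0, sizeof(sa6));
sa6.sin6_family = AF_INET6;       /* new family constant */
sa6.sin6_port = htons(8080);      /* byte ordering unchanged */
sa6.sin6_addr = in6addr_any;      /* 16-byte address instead of 4 */

int fd = socket(AF_INET6, SOCK_STREAM, 0);   /* still plain TCP */
bind(fd, (struct sockaddr *)&sa6, sizeof(sa6));
listen(fd, 16);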
Recently, while reading a Socket Programming HOWTO the following section jumped out at me:
But if you plan to reuse your socket for further transfers, you need to realize that there is no "EOT" (End of Transfer) on a socket. I repeat: if a socket send or recv returns after handling 0 bytes, the connection has been broken. If the connection has not been broken, you may wait on a recv forever, because the socket will not tell you that there's nothing more to read (for now). Now if you think about that a bit, you'll come to realize a fundamental truth of sockets: messages must either be fixed length (yuck), or be delimited (shrug), or indicate how long they are (much better), or end by shutting down the connection. The choice is entirely yours, (but some ways are righter than others).
This section highlights four possibilities for how a socket "protocol" may be written to pass messages. My question is: what is the preferred method to use in real applications?
Is it generally best to include message size with each message (presumably in a header), as the article more or less asserts? Are there any situations where another method would be preferable?
The common protocols either specify length in the header, or are delimited (like HTTP, for instance).
Keep in mind that this also depends on whether you use TCP or UDP sockets. Since TCP sockets are reliable you can be sure that you get everything you shoved into them. With UDP the story is different and more complex.
These are indeed our choices with TCP. HTTP, for example, uses a mix of the second, third, and fourth options (a double newline ends the request/response headers, which might contain a Content-Length header or indicate chunked encoding, or might say Connection: close and give you no content length, expecting you to rely on reading until EOF).
I prefer the third option, i.e. self-describing messages, though fixed-length is plain easy when suitable.
If you're designing your own protocol, look at other people's work first; there might already be something similar out there that you could either use as-is or repurpose and adjust. For example, ISO 8583 for financial transactions, HTTP, and POP3 all do things differently, but in ways that are proven to work... In fact it's worth looking at these anyway, as you'll learn a lot about how real-world protocols are put together.
If you need to write your own protocol then, IMHO, prefer length-prefixed messages where possible. They're easy and efficient for the receiver to parse, but possibly harder to generate if it is costly to determine the length of the data before you begin sending it.
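A minimal sketch of length-prefixed framing over TCP (the 4-byte big-endian prefix is one common convention, not the only one):

#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* send() may write fewer bytes than requested, so loop until done */
static int send_all(int fd, const void *buf, size_t len) {
    const char *p = buf;
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* frame a message with a 4-byte big-endian length prefix */
static int send_msg(int fd, const void *buf, uint32_t len) {
    uint32_t nlen = htonl(len);
    if (send_all(fd, &nlen, sizeof(nlen)) < 0)
        return -1;
    return send_all(fd, buf, len);
}

The receiver mirrors this: read exactly 4 bytes, ntohl() them, then read exactly that many bytes of payload.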
The decision should depend on the data you want to send (what it is, and how it is gathered). If the data is fixed-length, then fixed-length packets will probably be best. If the data can easily be split into delimited entities (no escaping needed), then delimiting may be good. If you know the data size when you start sending each piece, then length-prefixing may be even better. If the data sent is always single characters, or even single bits (e.g. "on"/"off"), then anything other than fixed-size one-character messages would be overkill.
Also think about how the protocol may evolve. EOL-delimited strings are good as long as they never contain EOL characters themselves; fixed length may be good until the data is extended with some optional parts, etc.
I do not know if there is a preferred option. In our real-world situation (client-server application), we use the option of sending the total message length as one of the first pieces of data. It is simple and works for both our TCP and UDP implementations. It makes the logic reasonably "simple" when reading data in both situations. With TCP, the amount of code is fairly small (by comparison). The UDP version is a bit (understatement) more complex but still relies on the size that is passed in the initial packet to know when all data has been sent.