Netlink connector sockets

I have worked with network programming before, but this is my first foray into netlink sockets.
I have chosen to study the 'connector' type of netlink socket. As with any kernel component, it has a userspace counterpart as well. The Linux kernel ships a sample program called ucon.c which can be used as a basis for userspace programs built on the aforementioned connector netlink sockets.
Here I want to pinpoint the parts of the program whose workings I would like to confirm, and the parts whose logic I do not follow. Enough talking; here we go. Please correct me wherever I go astray.
As far as I understand, netlink sockets are an IPC mechanism used to connect processes on the same machine, and hence a process ID is used as an identifier. And since netlink messages can be multicast, another identifier the netlink socket needs is the message group: all endpoints bound to the same message group are in effect related. So whereas for IPv4 we use a sockaddr_in in place of the generic sockaddr, here we use a sockaddr_nl, which contains the two identifiers mentioned above.
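To make that concrete, a minimal sketch of creating and binding such a socket for the connector protocol might look like the following (my own function name, error handling trimmed; the group argument is whichever group index you want to listen on):

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/connector.h>

int open_connector_socket(unsigned int group)
{
    struct sockaddr_nl l_local;
    int s;

    s = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
    if (s < 0)
        return -1;

    memset(&l_local, 0, sizeof(l_local));
    l_local.nl_family = AF_NETLINK;
    l_local.nl_groups = group;    /* multicast group(s) to listen on */
    l_local.nl_pid    = getpid(); /* our identifier: the process ID  */

    if (bind(s, (struct sockaddr *)&l_local, sizeof(l_local)) < 0) {
        close(s);
        return -1;
    }
    return s;
}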
Now, since we are not going to use the kernel's TCP/IP stack, netlink packets can be considered raw (please correct me here if I am wrong). Hence the only encapsulation a netlink packet goes through is the netlink message header, defined as struct nlmsghdr.
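For reference, that header is declared in <linux/netlink.h> roughly as follows (the field comments are mine):

struct nlmsghdr {
    __u32 nlmsg_len;   /* length of the message, including this header */
    __u16 nlmsg_type;  /* message content/type                         */
    __u16 nlmsg_flags; /* additional flags                             */
    __u32 nlmsg_seq;   /* sequence number                              */
    __u32 nlmsg_pid;   /* sending process port, usually its PID        */
};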
Now, coming to our program ucon: main() first creates a NETLINK-family socket with the connector protocol. Then it fills in the aforementioned netlink socket address structure with the relevant information. To be a little experimental here, I have added an entry of my own to the connector.h file. Now here comes my first question.
A connector message has a certain type defined in connector.h. Now, this connector message structure is something netlink itself knows nothing about, right? As in, as far as netlink is concerned, this is nothing but payload. Right?
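For context, the connector message is declared in connector.h roughly like this (comments mine):

struct cb_id {
    __u32 idx;          /* connector 'group'/subsystem index      */
    __u32 val;          /* unique id within that index            */
};

struct cn_msg {
    struct cb_id id;    /* who this message is addressed to       */
    __u32 seq;          /* connector-level sequence number        */
    __u32 ack;          /* connector-level acknowledgement number */
    __u16 len;          /* length of the data that follows        */
    __u16 flags;
    __u8  data[0];      /* payload, if any                        */
};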
Moving on, what exactly does the nl-group field mean within the netlink message header structure? The definition does not really contain an element of this name. So are we using overlay techniques to fill certain fields of the netlink message header? And if so, what exactly is the correspondence? I cannot seem to find it anywhere.
So after binding the socket address to the socket, it sends 10,000 unique pieces of connector-based data, which, as far as netlink is concerned, are pure payload. But what is strange about these messages is that all of them seem to carry the same sequence number.
Moving on, we find ourselves in the netlink_send subroutine, which sends these packets via the socket we bound above. This subroutine uses a variety of netlink helper macros to assemble the data to send. As we saw above, the main() function sends 10,000 pieces of data, each of which is zero-length and requires no acknowledgement, since the ack field is 0 (please correct me if I am wrong here). So each 'packet' is nothing but a connector message header with nothing in it. Right?
Now, what is surprising is that the netlink_send function uses the same sequence number as main(), since it is a global variable. However, after the post-increment in main(), it is now 1. So basically our netlink conversation starts with a sequence number of 1. Is that fine?
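For reference, the send path is roughly the following (paraphrased from ucon.c as I read it, not quoted verbatim; seq is the global sequence counter mentioned above, and the includes are the same as in the setup sketch earlier):

static int netlink_send(int s, struct cn_msg *msg)
{
    char buf[128];
    struct nlmsghdr *nlh;
    struct cn_msg *m;
    unsigned int size;

    /* total space needed: netlink header plus the connector message */
    size = NLMSG_SPACE(sizeof(struct cn_msg) + msg->len);

    nlh = (struct nlmsghdr *)buf;
    nlh->nlmsg_seq   = seq++;     /* the shared, global sequence number */
    nlh->nlmsg_pid   = getpid();
    nlh->nlmsg_type  = NLMSG_DONE;
    nlh->nlmsg_len   = NLMSG_LENGTH(size - sizeof(*nlh));
    nlh->nlmsg_flags = 0;

    /* the connector message is copied in as plain netlink payload */
    m = NLMSG_DATA(nlh);
    memcpy(m, msg, sizeof(*m) + msg->len);

    return send(s, nlh, size, 0);
}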
Looking into some of the helper macros defined in linux/netlink.h, I will try to summarize my understanding of the ones that are directly or indirectly being used in this program.
#define NLMSG_LENGTH(len) ((len)+NLMSG_ALIGN(NLMSG_HDRLEN))
So this macro first aligns the netlink message header length and then adds the payload length to it. In our case the netlink payload is a connector header without any payload of its own.
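The neighbouring macros it relies on are, for reference, roughly as follows in my copy of linux/netlink.h:

#define NLMSG_ALIGNTO    4
#define NLMSG_ALIGN(len) (((len) + NLMSG_ALIGNTO - 1) & ~(NLMSG_ALIGNTO - 1))
#define NLMSG_HDRLEN     ((int) NLMSG_ALIGN(sizeof(struct nlmsghdr)))
#define NLMSG_SPACE(len) NLMSG_ALIGN(NLMSG_LENGTH(len))
#define NLMSG_DATA(nlh)  ((void *)(((char *)nlh) + NLMSG_LENGTH(0)))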
In our case, this macro is used like so:
nlh->nlmsg_len = NLMSG_LENGTH(size - sizeof(*nlh));
Here, what I do not understand is the actual payload of the netlink message. In the above case, it is the size of the connector message header (since the connector message itself contains no payload of its own) minus the pointer (which points to the first byte of the netlink message and thereby the netlink message header). And this pointer is (like any other pointer variable) equal to the machine word size, which in my case is 4 bytes. Why are we subtracting this from the connector message header?
After that, we send the message over this netlink socket just like any other IPv4 socket. I hope to hear from you folks out there with regard to the above questions. Including a few of the sentences that precede the actual question in your answer would help, as my post is rather long. But I hope it will be useful to more people than just myself.
Regards.

Related

What guarantees does UDP give?

UDP packets can obviously arrive multiple times, not at all, or out of order.
But if packets do arrive, is it guaranteed that any call to recvfrom and similar functions will return exactly one complete packet the sender sent via sendto (or similar)? In other words, is it possible to receive incomplete packets, or multiple packets at once? Is it dependent on the OS, or does the standard mandate a certain behavior?
As I mentioned in a comment, the UDP specification (RFC 768) does not specify the behavior of the "interface" between an application program and the OS infrastructure that handles UDP messages.
However, the POSIX specification does address this. The key section of the recvfrom() spec says this:
The recvfrom() function shall return the length of the message written to the buffer pointed to by the buffer argument. For message-based sockets, such as SOCK_RAW, SOCK_DGRAM, and SOCK_SEQPACKET, the entire message shall be read in a single operation. If a message is too long to fit in the supplied buffer, and MSG_PEEK is not set in the flags argument, the excess bytes shall be discarded.
Note the use of the word "shall". Any OS <-> application API that claims to conform to the POSIX spec would be bound by that language.
In simple terms, any POSIX compliant recvfrom will return one complete UDP message in the buffer provided that the buffer space provided is large enough. If it is not large enough, "excess" bytes will be discarded.
(Some recvfrom implementations support a non-standard MSG_TRUNC flag that allows the application to find out the actual message length. Check the OS-specific manual page for details.)
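As a small illustration of that guarantee (my own helper names; on Linux the non-standard MSG_TRUNC flag mentioned above makes the return value the full datagram length even when the buffer was too small):

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* One call returns exactly one datagram (or its truncated front part). */
ssize_t recv_one_datagram(int sock, char *buf, size_t buflen)
{
    struct sockaddr_in src;
    socklen_t srclen = sizeof(src);

    return recvfrom(sock, buf, buflen, 0,
                    (struct sockaddr *)&src, &srclen);
}

/* Linux-specific: report the real length of the next datagram,
 * even if it did not fit into buf. */
ssize_t real_datagram_length(int sock, char *buf, size_t buflen)
{
    return recv(sock, buf, buflen, MSG_TRUNC);
}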
The recv family of system calls doesn't behave like that for stream sockets. They do not return frames or packets; they transfer the payload bytes held in the socket's kernel receive buffer into the user application's buffer. In other words, what ultimately determines how many bytes are passed up is the size of the user's buffer. The behaviour is to try to fill that buffer; if that's not possible, to hand over whatever data has arrived; and if nothing has arrived, to block or return no data, depending on how the recv is configured (see the sketch after the quote below).
From the recv man page (my emphasis)
If a message is too long to fit in the supplied buffer, excess bytes may be discarded depending on the type of socket the message is received from.

If no messages are available at the socket, the receive calls wait for a message to arrive, unless the socket is nonblocking (see fcntl(2)), in which case the value -1 is returned and the external variable errno is set to EAGAIN or EWOULDBLOCK. The receive calls normally return any data available, up to the requested amount, rather than waiting for receipt of the full amount requested.
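In other words, if you need a fixed number of bytes from a stream socket you have to loop; a minimal sketch (my own helper name) of what that looks like:

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling recv() until exactly len bytes have arrived, the peer
 * closes the connection, or an error occurs. */
ssize_t recv_exact(int sock, char *buf, size_t len)
{
    size_t got = 0;

    while (got < len) {
        ssize_t n = recv(sock, buf + got, len - got, 0);
        if (n == 0)
            return got;        /* connection closed by the peer */
        if (n < 0) {
            if (errno == EINTR)
                continue;      /* interrupted, just retry */
            return -1;
        }
        got += (size_t)n;
    }
    return got;
}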
The one other factor that needs to be taken into account is the size of the socket's internal receive buffer. This is a fixed size, and attempting to add more data to an already full buffer can result in loss of data. The value can be set with the SO_RCVBUF option; from the socket man page:
SO_RCVBUF: Sets or gets the maximum socket receive buffer in bytes. The kernel doubles this value (to allow space for bookkeeping overhead) when it is set using setsockopt(2), and this doubled value is returned by getsockopt(2). The default value is set by the /proc/sys/net/core/rmem_default file, and the maximum allowed value is set by the /proc/sys/net/core/rmem_max file. The minimum (doubled) value for this option is 256.
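A short sketch of inspecting and raising that limit from user space (function name and the requested size are my own choices):

#include <stdio.h>
#include <sys/socket.h>

void show_and_grow_rcvbuf(int sock)
{
    int size = 0;
    socklen_t len = sizeof(size);

    /* Read back the (already doubled) value the kernel is using. */
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, &len) == 0)
        printf("receive buffer: %d bytes\n", size);

    /* Request a larger buffer; the kernel doubles this and caps it
     * at /proc/sys/net/core/rmem_max. */
    size = 1 << 20; /* ask for roughly 1 MiB */
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
}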

How can I get the remote address from an incoming message on UDP listener socket?

Although it's possible to read from a Gio.Socket by wrapping its file descriptor in a Gio.DataInputStream, using Gio.Socket.receive_from() in GJS to receive is not possible because, as commented here:
GJS will clone array arguments before passing them to the C-code which will make the call to Socket.receive_from work and return the number of bytes received as well as the source of the packet. The buffer content will be unchanged as buffer actually read into is a freed clone.
Thus, input arguments are cloned and data will be written to the cloned buffer, not the instance of buffer actually passed in.
Although reading from a data stream is not a problem, Gio.Socket.receive_from() is the only way I can find to get the remote address from a UDP listener, since Gio.Socket.remote_address will be undefined. Unfortunately as the docs say for Gio.Socket.receive():
For G_SOCKET_TYPE_DATAGRAM [...] If the received message is too large to fit in buffer, then the data beyond size bytes will be discarded, without any explicit indication that this has occurred.
So if I try something like Gio.Socket.receive_from(new Uint8Array(0), null); just to get the address, the packet is swallowed, but if I read via the file-descriptor I can't tell where the message came from. Is there another non-destructive way to get the incoming address for a packet?
Since you’re using a datagram socket, it should be possible to use Gio.Socket.receive_message() and pass the Gio.SocketMsgFlags.PEEK flag to it. This isn’t possible for a stream-based socket, but you are not going to want the sender address for each read you do in that case.
If you want improved performance, you may be able to use Gio.Socket.receive_messages(), although I am not sure whether that’s completely introspectable at the moment.

How to determine who wrote what to a socket, given 2 writers?

Say one part of a program writes some stuff to a socket and another part of the same program reads stuff from that same socket. If an external tool writes to that very same socket, how would I differentiate who wrote what to the socket (using the part that reads it)? Would using a named pipe work?
If you are talking about TCP, the situation you describe is impossible, because connections are 1-to-1. If you mean UDP, you can get the sender's address from the recvmsg() (or recvfrom()) call, which hands it back alongside each datagram.
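A rough sketch of what that looks like with recvfrom(), the simpler of the two (helper name and the printing are mine):

#include <stdio.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Receive one datagram and report which address and port it came from. */
ssize_t recv_and_identify(int sock, char *buf, size_t buflen)
{
    struct sockaddr_in src;
    socklen_t srclen = sizeof(src);
    char addr[INET_ADDRSTRLEN];

    ssize_t n = recvfrom(sock, buf, buflen, 0,
                         (struct sockaddr *)&src, &srclen);
    if (n >= 0) {
        inet_ntop(AF_INET, &src.sin_addr, addr, sizeof(addr));
        printf("%zd bytes from %s:%u\n", n, addr, ntohs(src.sin_port));
    }
    return n;
}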

Erlang gen_tcp:recv data length

I use gen_tcp:recv(Socket, 0) to receive data, but I can only receive 1418 bytes at a time. How can I receive all the data that was sent?
In gen_tcp:recv(Socket, 0) you are asking the kernel: "Give me all the data that is available right now in the receive buffer." The kernel is also free to give you less, however. Even on a rather fast link you will probably hit slow start on the TCP connection, so in the beginning you will not get much data.
The solution is to do your own buffering. You will have to eat data from the underlying socket until you have enough to construct a message. It is quite common for binary protocols to implement their own kind of messaging on top of the stream due to this.
For the record, a common message format is to encode a message as:
decode(Bin) when is_binary(Bin) ->
    <<Len:32/integer, Rest/binary>> = Bin,
    <<Payload:Len/binary, Remaining/binary>> = Rest,
    {msg, {Len, Payload}, Remaining}.
That is, each message is 4 bytes holding a 32-bit big-endian integer, followed by a payload whose length is given by that integer. This format, and others like it, are so common that Erlang includes optimized parsers for them directly in the C layer. To get access to these, you set options on the socket through inet:setopts/2; in our case we set {packet, 4}. Then we can get messages by setting {active, once} on the socket and waiting for the next message. When it arrives, we set {active, once} again on the socket to get the next message, and so on. There is an example in the documentation of gen_tcp (erl -man gen_tcp if you have the Erlang man pages installed appropriately).
Other common formats are ASN.1 or even HTTP headers(!).
Tricks
It is often beneficial to create a separate process that encodes and decodes your message format and then passes the data on to the rest of the system. Usually a good solution in Erlang is to demultiplex incoming data as fast as possible and get it to a process that can then handle the rest of the problem.

Socket Protocol Fundamentals

Recently, while reading a Socket Programming HOWTO the following section jumped out at me:
But if you plan to reuse your socket for further transfers, you need to realize that there is no "EOT" (End of Transfer) on a socket. I repeat: if a socket send or recv returns after handling 0 bytes, the connection has been broken. If the connection has not been broken, you may wait on a recv forever, because the socket will not tell you that there's nothing more to read (for now). Now if you think about that a bit, you'll come to realize a fundamental truth of sockets: messages must either be fixed length (yuck), or be delimited (shrug), or indicate how long they are (much better), or end by shutting down the connection. The choice is entirely yours, (but some ways are righter than others).
This section highlights 4 possibilities for how a socket "protocol" may be written to pass messages. My question is, what is the preferred method to use for real applications?
Is it generally best to include message size with each message (presumably in a header), as the article more or less asserts? Are there any situations where another method would be preferable?
The common protocols either specify length in the header, or are delimited (like HTTP, for instance).
Keep in mind that this also depends on whether you use TCP or UDP sockets. Since TCP sockets are reliable you can be sure that you get everything you shoved into them. With UDP the story is different and more complex.
These are indeed our choices with TCP. HTTP, for example, uses a mix of the second, third, and fourth options (a double newline ends the request/response headers, which might contain a Content-Length header or indicate chunked encoding, or might say Connection: close and give no content length at all, expecting you to rely on reading until EOF).
I prefer the third option, i.e. self-describing messages, though fixed-length is plain easy when suitable.
If you're designing your own protocol then look at other people's work first; there might already be something similar out there that you could either use as-is or repurpose and adjust. For example, ISO 8583 for financial transactions, HTTP, and POP3 all do things differently, but in ways that are proven to work. In fact it's worth looking at these protocols anyway, as you'll learn a lot about how real-world protocols are put together.
If you need to write your own protocol then, IMHO, prefer length prefixed messages where possible. They're easy and efficient to parse for the receiver but possibly harder to generate if it is costly to determine the length of the data before you begin sending it.
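As a concrete sketch of length-prefixed framing over a TCP socket (my own helper names; a 4-byte big-endian length followed by the payload; error handling kept minimal):

#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* Read exactly len bytes from a stream socket; returns 0 on success. */
static int read_exact(int sock, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(sock, p, len, 0);
        if (n <= 0)
            return -1;  /* error or connection closed mid-message */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Send one message: 4-byte big-endian length prefix, then the payload. */
int send_msg(int sock, const void *payload, uint32_t len)
{
    uint32_t prefix = htonl(len);
    if (send(sock, &prefix, sizeof(prefix), 0) != (ssize_t)sizeof(prefix))
        return -1;
    return send(sock, payload, len, 0) == (ssize_t)len ? 0 : -1;
}

/* Receive one message into buf; returns its length, or -1 on error
 * or if the message would not fit into maxlen bytes. */
ssize_t recv_msg(int sock, void *buf, uint32_t maxlen)
{
    uint32_t len;
    if (read_exact(sock, &len, sizeof(len)) != 0)
        return -1;
    len = ntohl(len);
    if (len > maxlen || read_exact(sock, buf, len) != 0)
        return -1;
    return len;
}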
The decision should depend on the data you want to send (what it is and how it is gathered). If the data is fixed length, then fixed-length packets will probably be best. If the data can easily be split into delimited entities (no escaping needed), then delimiting may be good. If you know the data size when you start sending the piece of data, then length-prefixing may be even better. If the data sent is always single characters, or even single bits (e.g. "on"/"off"), then anything other than fixed-size one-character messages would be overkill.
Also think how the protocol may evolve. EOL-delimited strings are good as long as they do not contain EOL characters themselves. Fixed length may be good until the data may be extended with some optional parts, etc.
I do not know if there is a preferred option. In our real-world situation (client-server application), we use the option of sending the total message length as one of the first pieces of data. It is simple and works for both our TCP and UDP implementations. It makes the logic reasonably "simple" when reading data in both situations. With TCP, the amount of code is fairly small (by comparison). The UDP version is a bit (understatement) more complex but still relies on the size that is passed in the initial packet to know when all data has been sent.