C socket programming recv size

I am a newbie at socket programming (in C), so maybe this question is a little bit stupid. In C socket programming, how should I determine the size of the buffer for recv()/read()? In many cases we don't know the size of the data sent with send()/write(). Thanks a lot!

how should I determine the size of the buffer for recv()/read()
Ideally you shouldn't look at those buffer sizes at all and should keep to the time-honored TCP model: keep reading bytes while bytes are available.
If you are asking "how big should the buffer I receive into be?", the simple answer is to pick a reasonable size and just pass that. If there's more data, you can read again.
Back to your original question: different stacks give you different APIs. On some Unixes, for example, you have ioctls like SIOCINQ and FIONREAD, which give you the amount of data the kernel has in its receive buffer, waiting for you to copy it out.
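For example, here is a minimal sketch using FIONREAD on Linux (the exact ioctl name and header vary by platform, and the value is only a snapshot; more data may arrive right after the call):

#include <stdio.h>
#include <sys/ioctl.h>  /* FIONREAD */

/* Returns the number of bytes currently queued in the socket's
 * receive buffer, or -1 on error. */
int queued_bytes(int sockfd)
{
    int pending = 0;
    if (ioctl(sockfd, FIONREAD, &pending) == -1) {
        perror("ioctl(FIONREAD)");
        return -1;
    }
    return pending;
}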

If you don't really know how many bytes to expect, use a reasonably large buffer and pass its size to recv/read. These functions return how many bytes were actually put into the buffer, and you can then process that many bytes, printing them, for example.
But keep in mind that data is often either sent in chunks of known size or prefixed with a message size in its first bytes, so the receiving side can tell how many bytes it should read.
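As a concrete illustration of the first approach, here is a hedged sketch of a read loop over a fixed-size buffer (handle_bytes is a placeholder for whatever processing you need, and the buffer size is an arbitrary choice):

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

#define BUF_SIZE 4096           /* arbitrary, reasonable choice */

void handle_bytes(const char *data, size_t len);  /* placeholder */

void read_all(int sockfd)
{
    char buf[BUF_SIZE];
    ssize_t n;

    /* recv() returns up to BUF_SIZE bytes per call -- often less,
     * since TCP is a byte stream with no message boundaries */
    while ((n = recv(sockfd, buf, sizeof(buf), 0)) > 0)
        handle_bytes(buf, (size_t)n);

    if (n == -1)
        perror("recv");         /* error; n == 0 means the peer closed */
}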

Incorrect len of msg in netlink socket

I tried to use netlink socket to send binary data from kernel space to user space. I followed the example from How to use netlink socket to communicate with a kernel module?
However, at the receiving end in userspace, the received data length is greater than what was sent from kernel space. The data itself is the same, but it has some garbage values appended.
Is there no guarantee with netlink sockets that the received data length will be the same as what was sent from kernel space?
You might want to check the documentation to make sure that you are using macros like NLMSG_SPACE, NLMSG_PAYLOAD, and NLMSG_DATA correctly.
The extra data is probably the unused portion of the data frame, combined with your program not reading the message length correctly (in effect, not using the macros correctly). For example, if you send 1 byte, I believe 4 bytes will actually be sent, because NLMSG_SPACE rounds up to a multiple of 4 to align the data in the packet.
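To make the padding concrete, here is a hedged userspace-style sketch of how such a message buffer is typically sized (MAX_PAYLOAD is a placeholder for the largest payload you expect):

#include <stdlib.h>
#include <string.h>
#include <linux/netlink.h>

#define MAX_PAYLOAD 1024  /* placeholder */

struct nlmsghdr *alloc_nlmsg(void)
{
    /* NLMSG_SPACE() = aligned header size + aligned payload size;
     * the alignment padding is where the "extra" bytes come from */
    struct nlmsghdr *nlh = malloc(NLMSG_SPACE(MAX_PAYLOAD));
    if (nlh == NULL)
        return NULL;
    memset(nlh, 0, NLMSG_SPACE(MAX_PAYLOAD));
    nlh->nlmsg_len = NLMSG_SPACE(MAX_PAYLOAD);  /* total length, padding included */
    return nlh;
}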
Reading it should be no problem, though: just use the macros to get the real length of the data and only read that much.
Here's an example of getting a pointer to the payload and the payload length:
// Get a pointer to the start of the payload and the payload length
unsigned char *buf = (unsigned char *) NLMSG_DATA(nlh);
int len = NLMSG_PAYLOAD(nlh, 0);  // 0: no extra header beyond struct nlmsghdr
The macros themselves are defined in <linux/netlink.h>, if you want to look at the definitions.
The code you linked to sends characters and gets away with the padding by memset-ing the buffer to 0 first, so printing the resulting char array just works.
Hope this helps. Post some code if you can't get it working.

How do I retrieve file data over a socket in Go?

I've got two small programs communicating nicely over a socket, where the receiving side is in Go. Everything works peachy when my messages are tiny enough to fit in the 1024-byte buffer and can be received in a single Read from the connection, but now I want to transfer data from an image that is 100k or more. I'm assuming the correct solution is not to grow the buffer until any image can fit inside it.
Pseudo-go:
var buf = make([]byte, 1024)
conn, err := net.Dial("tcp", ":1234")
for {
    r, err := conn.Read(buf[0:])
    go readHandler(string(buf[0:r]), conn)
}
How can I improve my socket read routine to accept both simple messages of a few bytes and also larger data? Bonus points if you can turn the total image data into an io.Reader for use in image.Decode.
I have no direct experience with TCP in Go, but it seems to me that you have fallen victim to a quite typical misunderstanding of the guarantees TCP offers.
The thing is, in contrast with, say, UDP and SCTP, TCP has no concept of message boundaries, because it is stream-oriented: TCP transports an opaque stream of bytes, and you have very little control over how that stream is "chunked" on the receiving side.
I suspect that what you observe as "sending a 100k+ message" is really the runtime/network library on the sender side "deceiving" you: it consumes your "message" into its internal buffers and then streams it out in whatever chunks the OS's TCP stack allows (on ubiquitous hardware/software that's usually about 8k). The sizes of the pieces in which the receiver gets that stream are completely undefined; the only thing defined, and preserved, is the ordering of the bytes in the stream.
Hence it might turn out you have to reconsider your approach to receiving data. The exact approach varies depending on the nature of the data being streamed:
The easiest way (if you have control over the application-level protocol) is to pass the length of the following "message payload" in a length field of fixed format. Destreaming a whole message is then a two-step process: 1) read exactly as many bytes as the length field occupies, decode it, and check the value for sanity; 2) read that many following bytes and you're done (see the sketch after this list).
If you have no control over the app-level protocol, parsing messages becomes more involved and usually requires some sort of complicated state machine.
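This technique is language-agnostic; here is a hedged C sketch of that two-step read, assuming a 4-byte big-endian length prefix (read_full is a hypothetical helper that loops until exactly the requested number of bytes has arrived, and MAX_MSG is an arbitrary sanity limit):

#include <arpa/inet.h>   /* ntohl */
#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>

#define MAX_MSG (16 * 1024 * 1024)  /* arbitrary sanity limit */

/* Hypothetical helper: loop on recv() until exactly len bytes are read.
 * Returns 0 on success, -1 on error or premature EOF. */
int read_full(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Step 1: read and sanity-check the length field.
 * Step 2: read exactly that many payload bytes. */
char *read_message(int fd, uint32_t *out_len)
{
    uint32_t netlen;
    if (read_full(fd, &netlen, sizeof(netlen)) == -1)
        return NULL;
    uint32_t len = ntohl(netlen);   /* big-endian on the wire */
    if (len == 0 || len > MAX_MSG)
        return NULL;                /* implausible length */
    char *payload = malloc(len);
    if (payload == NULL)
        return NULL;
    if (read_full(fd, payload, len) == -1) {
        free(payload);
        return NULL;
    }
    *out_len = len;
    return payload;
}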
You can use io.ReadFull to read a []byte of a specific length. This assumes that you know beforehand how many bytes you need to read.
As for image.Decode, it should be possible to pass the conn directly to the image.Decode function. This assumes that you do not perform any reads from the connection until the image is decoded.
Your code
for {
    r, err := conn.Read(buf[0:])
    go readHandler(string(buf[0:r]), conn)
}
seems to suggest that the goroutine you are starting also reads from conn. This doesn't seem like a good idea, because you will end up with multiple concurrent reads from the connection (without any control over the order in which they happen): one in the for-loop and another in readHandler.

Erlang gen_tcp:recv data length

I use gen_tcp:recv(Socket, 0) for receiving data, but I can only receive 1418 bytes at a time. How can I receive all the data that was sent?
With gen_tcp:recv(Socket, 0) you are asking the kernel: "Give me all the data that is available right now in the receive buffer." The kernel is free to give you less, however. Even on a rather fast link you will probably hit slow start on the TCP connection, so in the beginning you will not get much data.
The solution is to do your own buffering: eat data from the underlying socket until you have enough to construct a message. It is quite common for binary protocols to implement their own kind of messaging on top of the stream for exactly this reason.
For the long-term record: a common message format is to encode a message as:
decode(Bin) when is_binary(Bin) ->
    <<Len:32/integer, Rest/binary>> = Bin,
    <<Payload:Len/binary, Remaining/binary>> = Rest,
    {msg, {Len, Payload}, Remaining}.
That is, a message is 4 bytes representing a 32-bit big-endian integer, followed by a payload whose length is given by that integer. This format, and others like it, are so common that Erlang includes optimized parsers for them directly in the C layer. To get access to these, you set options on the socket through inet:setopts/2; in our case we set {packet, 4}. Then we can get messages by setting {active, once} on the socket and waiting for the next message. When it arrives, we set {active, once} again on the socket to get the next message, and so on. There is an example in the documentation of gen_tcp (erl -man gen_tcp if you have the Erlang man pages installed appropriately).
Other common formats are ASN.1 or even HTTP headers(!).
Tricks
It is often beneficial to create a separate process that encodes and decodes your message format and then passes the data on to the rest of the system. Usually a good solution in Erlang is to demux incoming data as fast as possible and hand it to a process which can then handle the rest of the problem.

Socket Protocol Fundamentals

Recently, while reading a Socket Programming HOWTO, the following section jumped out at me:
But if you plan to reuse your socket for further transfers, you need to realize that there is no "EOT" (End of Transfer) on a socket. I repeat: if a socket send or recv returns after handling 0 bytes, the connection has been broken. If the connection has not been broken, you may wait on a recv forever, because the socket will not tell you that there's nothing more to read (for now). Now if you think about that a bit, you'll come to realize a fundamental truth of sockets: messages must either be fixed length (yuck), or be delimited (shrug), or indicate how long they are (much better), or end by shutting down the connection. The choice is entirely yours, (but some ways are righter than others).
This section highlights 4 possibilities for how a socket "protocol" may be written to pass messages. My question is, what is the preferred method to use for real applications?
Is it generally best to include message size with each message (presumably in a header), as the article more or less asserts? Are there any situations where another method would be preferable?
The common protocols either specify length in the header, or are delimited (like HTTP, for instance).
Keep in mind that this also depends on whether you use TCP or UDP sockets. Since TCP sockets are reliable you can be sure that you get everything you shoved into them. With UDP the story is different and more complex.
These are indeed our choices with TCP. HTTP, for example, uses a mix of the second, third, and fourth options (a double newline ends the request/response headers, which might contain a Content-Length header, or indicate chunked encoding, or say Connection: close and not give you the content length at all, expecting you to rely on reading until EOF).
I prefer the third option, i.e. self-describing messages, though fixed-length is plain easy when suitable.
If you're designing your own protocol, look at other people's work first; there might already be something similar out there that you could either use as-is or repurpose and adjust. For example, ISO 8583 for financial transactions, HTTP, and POP3 all do things differently, but in ways that are proven to work... In fact it's worth looking at these anyway, as you'll learn a lot about how real-world protocols are put together.
If you need to write your own protocol then, IMHO, prefer length-prefixed messages where possible. They're easy and efficient for the receiver to parse, but possibly harder to generate if it is costly to determine the length of the data before you begin sending it.
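As a hedged sketch of the send side under the same assumptions (a 4-byte big-endian prefix; send_full is a hypothetical helper that loops until everything is written):

#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Hypothetical helper: loop on send() until all len bytes are written. */
int send_full(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Write the length prefix first, then the payload itself. */
int send_message(int fd, const void *payload, uint32_t len)
{
    uint32_t netlen = htonl(len);   /* length goes out big-endian */
    if (send_full(fd, &netlen, sizeof(netlen)) == -1)
        return -1;
    return send_full(fd, payload, len);
}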
The decision should depend on the data you want to send (what it is and how it is gathered). If the data is of fixed length, then fixed-length packets will probably be best. If the data can easily be split into delimited entities (with no escaping needed), then delimiting may be good. If you know the data size when you start sending the piece, then length-prefixing may be even better. And if the data sent is always single characters, or even single bits (e.g. "on"/"off"), then anything beyond fixed-size one-character messages is too much.
Also think about how the protocol may evolve. EOL-delimited strings are good only as long as the data itself cannot contain EOL characters; fixed length may be good until the data is extended with some optional parts, and so on.
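For the delimited case, here is a hedged C sketch of the usual reading pattern: accumulate bytes and scan for the delimiter, since a single recv() may return a fragment of one message or several messages at once (handle_line is a placeholder, and the buffer size is an arbitrary cap on message length):

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

#define ACC_SIZE 8192   /* arbitrary cap on one message */

void handle_line(const char *line, size_t len);  /* placeholder */

void read_lines(int fd)
{
    char acc[ACC_SIZE];
    size_t used = 0;

    for (;;) {
        ssize_t n = recv(fd, acc + used, sizeof(acc) - used, 0);
        if (n <= 0)
            break;                  /* EOF or error */
        used += (size_t)n;

        /* hand off every complete, newline-terminated message so far */
        char *nl;
        while ((nl = memchr(acc, '\n', used)) != NULL) {
            size_t linelen = (size_t)(nl - acc);
            handle_line(acc, linelen);
            /* shift the leftover bytes (a partial next line) to the front */
            memmove(acc, nl + 1, used - linelen - 1);
            used -= linelen + 1;
        }
        if (used == sizeof(acc))
            break;                  /* message longer than the buffer allows */
    }
}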
I do not know if there is a preferred option. In our real-world situation (client-server application), we use the option of sending the total message length as one of the first pieces of data. It is simple and works for both our TCP and UDP implementations. It makes the logic reasonably "simple" when reading data in both situations. With TCP, the amount of code is fairly small (by comparison). The UDP version is a bit (understatement) more complex but still relies on the size that is passed in the initial packet to know when all data has been sent.

How can I get a callback when there is some data to read on a boost.asio stream without reading it into a buffer?

It seems that since Boost 1.40.0 there has been a change to the way the async_read_some() call works.
Previously, you could pass in null_buffers and you would get a callback when there was data to read, but without the framework reading the data into any buffer (because there wasn't one!). This basically allowed you to write code that acted like a select() call, where you would be told when your socket had some data on it.
In the new code the behaviour has been changed to work in the following way:
If the total size of all buffers in the sequence mb is 0, the asynchronous read operation shall complete immediately and pass 0 as the argument to the handler that specifies the number of bytes read.
This means that my old way of detecting data on the socket (which is, incidentally, the method shown in this official example) no longer works. The problem is that I need a way of detecting this, because I've layered my own streaming classes on top of the asio socket streams, and as such I cannot just read data off the sockets that my streams will expect to be there. The only workaround I can think of right now is to read a single byte, store it, and when my stream classes then request some bytes, return that byte if one is stored: not pretty.
Does anyone know of a better way to implement this kind of behaviour under the latest boost.asio code?
My quick test of an official example with Boost 1.41 works... so I think it should still work (if you use null_buffers).