How can I know if bytes have not been written by the ChannelWriter?

After I call the ChannelWriter.write() method to send a message, it returns a ChannelFuture. If the receiving end has disabled read in its epoll_wait, I would expect the socket write to be at best partially complete, since zero bytes should actually be written. I thought of using the returned ChannelFuture, but isSuccess() returns true.
How do I know whether the ChannelWriter.write() call was only partially complete, or wrote zero bytes?

It will only notify the future once all of the bytes have been written, or if the write failed.

How can I get the remote address from an incoming message on UDP listener socket?

Although it's possible to read from a Gio.Socket by wrapping its file descriptor in a Gio.DataInputStream, using Gio.Socket.receive_from() in GJS to receive is not possible because, as commented here:
GJS will clone array arguments before passing them to the C-code which will make the call to Socket.receive_from work and return the number of bytes received as well as the source of the packet. The buffer content will be unchanged as buffer actually read into is a freed clone.
Thus, input arguments are cloned, and the data will be written to the cloned buffer, not to the buffer instance actually passed in.
Although reading from a data stream is not a problem, Gio.Socket.receive_from() is the only way I can find to get the remote address on a UDP listener, since Gio.Socket.remote_address will be undefined. Unfortunately, as the docs say for Gio.Socket.receive():
For G_SOCKET_TYPE_DATAGRAM [...] If the received message is too large to fit in buffer, then the data beyond size bytes will be discarded, without any explicit indication that this has occurred.
So if I try something like Gio.Socket.receive_from(new Uint8Array(0), null); just to get the address, the packet is swallowed; but if I read via the file descriptor, I can't tell where the message came from. Is there another, non-destructive way to get the source address of an incoming packet?
Since you’re using a datagram socket, it should be possible to use Gio.Socket.receive_message() and pass it the Gio.SocketMsgFlags.PEEK flag. This isn’t possible for a stream-based socket, but in that case you won’t want the sender address for every read anyway.
If you want improved performance, you may be able to use Gio.Socket.receive_messages(), although I am not sure whether that’s completely introspectable at the moment.
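For illustration, here is roughly what that peek looks like against the underlying C API (GLib exposes the same call and flag; peek_sender and the one-byte buffer are illustrative choices, not anything from the question):

#include <gio/gio.h>

/* A minimal sketch: peek at the next datagram just to learn its sender,
 * leaving the packet itself queued for a later, real read. */
static GSocketAddress *
peek_sender (GSocket *socket, GError **error)
{
  guint8 byte;
  GInputVector vector = { &byte, 1 };
  GSocketAddress *address = NULL;
  gint flags = G_SOCKET_MSG_PEEK;  /* look, but do not consume */

  gssize n = g_socket_receive_message (socket, &address,
                                       &vector, 1,
                                       NULL, NULL,  /* no control messages */
                                       &flags,
                                       NULL,        /* not cancellable */
                                       error);
  if (n < 0)
    return NULL;

  /* The datagram is still in the kernel queue, so a following receive()
   * will get all of it. The caller owns the returned GSocketAddress. */
  return address;
}

Because of the PEEK flag, the truncation behaviour quoted above is harmless here: even though only one byte fits in the buffer, the packet is not consumed.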

Abort socket operation Windows Phone

I am using pseudo-synchronous sockets in a Windows Phone 7 application. My socket code is based on the sample from http://msdn.microsoft.com/en-us/library/hh202858(v=vs.92).aspx.
The server's sending pattern is somewhat unpredictable. Each message starts with a fixed-size header that contains the length of the rest of the message. I first read in this header, and then I read the specified number of bytes from the socket.
Since I need to send messages to the server as well, and my attempts at duplexing the socket with a thread for receiving and another thread for sending caused lots of problems, I have a loop like this in my code:
while (KeepConnectionGoing)
{
    // Receive() returns null if no message is received within 50 ms
    byte[] Rcvd = Socket.Receive();
    if (Rcvd != null)
    {
        ParseMessage(Rcvd);
    }
    if (HasMessageThatNeedsToBeSent())
    {
        byte[] Message = GetMessageToSend();
        Socket.Send(Message);
    }
}
This works fine most of the time, but strange things happen when Receive() returns null.
Because the timeout in the Receive method (see the linked sample) uses a ManualResetEvent, the receive request on the socket is never actually cancelled. Even though the method returns, that request waits around somewhere, and when data becomes available on the socket it chomps up the header. Since the event handler has nothing to do with the data it received (the method has returned and its variables will never be used again), the data basically disappears. The read request that I expect to return the header instead reads the bytes after the header, and I have no idea how long the message is.
I'd like to be able to cancel all outstanding requests if the socket times out. I am using anonymous methods, as in the sample, since that simplifies everything and saves me from writing all the state-transfer code myself; thus, I cannot unhook the event handler. I think, though, that even if I were using a named method as the event handler and unhooked it before the asynchronous operation completed, the callback would still be called. (I haven't tested this; it's just my understanding.)
Right now, the only solution I can see is hacking together some static byte arrays (i.e. having a static byte[] Header: if it is null, I read the header; otherwise I read the message), but that seems like a really inelegant solution and very prone to race conditions.
Is there a better way?
Thanks
It appears there really is no good way to do this. A Poll method would be nice, but Silverlight doesn't have one. I hacked together a solution using static flags to tell me what state I am in (has the header been requested? has the message been requested?), a static int for the length, and a static buffer.
At the beginning of the method, either the header or the body can be requested. If the header has already been requested, the thread waits until a valid body length is available. If this wait times out, it means the header receive operation is still pending but there really is no message available. Otherwise, it reads a message of that length.
If the header has not been requested, it receives the header. In the event handler, after completion, it checks whether control flow has already continued (i.e. the receive operation took too long, so the function has already returned, but is now actually done). It updates the length, then requests the body unless the operation timed out.

Does CFWriteStreamClose() Flush?

Suppose you have a CFWriteStream that you call CFWriteStreamClose() on immediately after calling CFWriteStreamWrite(). Both calls are made on the same thread. Will the close operation guarantee that any bytes written to/buffered by the stream are in fact sent to the recipient before the stream is destroyed?
In short, does calling CFWriteStreamClose() flush the stream?
According to the documentation, no, it does not. It "terminates the flow of bytes" on the stream. While CFWriteStreamWrite is synchronous, it does not guarantee that all the bytes you want to write will be written in one call. That is why it returns the number of bytes actually written; your job is to keep calling it until your data is exhausted or you otherwise decide to stop.
Calling the close function is designed to clean up any resources associated with the stream.
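To make the distinction concrete, here is a minimal sketch of that write loop, assuming a blocking stream (WriteFully is a hypothetical helper, not a CoreFoundation API):

#include <CoreFoundation/CoreFoundation.h>
#include <stdbool.h>

/* Keep calling CFWriteStreamWrite() until every byte has been accepted
 * by the stream, and only then consider it safe to close. */
static bool WriteFully(CFWriteStreamRef stream,
                       const UInt8 *bytes, CFIndex length)
{
    CFIndex written = 0;
    while (written < length) {
        /* Returns how many bytes were actually accepted; -1 signals an
         * error, 0 a fixed-length stream that has been filled. */
        CFIndex n = CFWriteStreamWrite(stream, bytes + written,
                                       length - written);
        if (n <= 0)
            return false;
        written += n;
    }
    return true;
}

Only once this returns true has every byte been handed to the stream; calling CFWriteStreamClose() before that can drop the unwritten tail.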

MSG_WAITALL is to recv() as ?? is to send()

From the recv(2) man page:
MSG_WAITALL
This flag requests that the operation block until the full request is satisfied. However, the call may still return less data than requested if a signal is caught, an error or disconnect occurs, or the next data to be received is of a different type than that returned.
It doesn't look like there's an equivalent flag for send(2), which strikes me as strange. Maybe send()s are guaranteed to always accept the whole buffer, but I don't see that written anywhere (and anyway, that seems unlikely to me).
Is there a way to tell send() to wait until it's sent the whole buffer before returning, equivalently to recv()'s MSG_WAITALL?
Edit: I understand that send() just copies data into a buffer in the operating system and that I can't force send() to put data onto the wire. My question is: Can I force send() to block until all the data I gave it has been copied into the OS's buffer?
You can't. send just offloads the buffer to the kernel, then returns.
To quote from the Unix standard:
The send() function shall initiate transmission of a message from the specified socket to its peer (...)
Successful completion of a call to send() does not guarantee delivery of the message.
Note the word "initiate". It doesn't actually send anything; rather, it tells the OS to send the message when it's ready to (when its buffers are full or after some time has passed).
send(2) on a TCP socket does not actually "send" anything on the wire; it places your bytes into the socket send buffer, and its return value tells you how many bytes it was able to copy there.
Make the send buffer bigger (see setsockopt(2) and tcp(7)) and pay attention to the syscall's return value. In any case, TCP is a stream; you need to manage the application-level protocol yourself.
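The standard way to act on that return value is a short loop. Here is a minimal sketch (send_all is a hypothetical helper, not a libc function) that keeps calling send() until the kernel has accepted every byte:

#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>
#include <stddef.h>

/* Block, in effect, until all of buf has been copied into the socket
 * send buffer. Returns len on success, -1 on error (errno is set). */
ssize_t send_all(int fd, const void *buf, size_t len)
{
    const char *p = (const char *)buf;
    size_t left = len;

    while (left > 0) {
        ssize_t n = send(fd, p, left, 0);
        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted; just retry */
            return -1;
        }
        p += n;
        left -= (size_t)n;
    }
    return (ssize_t)len;
}

Even when this succeeds, the bytes are merely queued in the kernel, not delivered; per the standard quoted above, confirmation of delivery has to come from the peer at the application level.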

How can I get a callback when there is some data to read on a boost.asio stream without reading it into a buffer?

It seems that since boost 1.40.0 there has been a change to the way the async_read_some() call works.
Previously, you could pass in a null_buffers object and you would get a callback when there was data to read, but without the framework reading the data into any buffer (because there wasn't one!). This basically allowed you to write code that acted like a select() call, where you would be told when your socket had some data on it.
In the new code the behaviour has been changed to work in the following way:
If the total size of all buffers in the sequence mb is 0, the asynchronous read operation shall complete immediately and pass 0 as the argument to the handler that specifies the number of bytes read.
This means that my old way of detecting data on the socket (which, incidentally, is the method shown in this official example) no longer works. The problem for me is that I need a way of detecting this, because I've layered my own streaming classes on top of the asio socket streams, and as such I cannot just read data off the sockets that my streams will expect to be there. The only workaround I can think of right now is to read a single byte, store it, and when my stream classes then request some bytes, return that byte if one is set: not pretty.
Does anyone know of a better way to implement this kind of behaviour under the latest boost.asio code?
My quick test with an official example and boost 1.41 works... so I think it should still work (if you use null_buffers).
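For reference, a minimal sketch of that approach; the key point is passing null_buffers itself, not a zero-sized buffer sequence (which is what triggers the complete-immediately rule quoted above). Connection setup is elided and wait_for_data is just an illustrative name:

#include <boost/asio.hpp>
#include <cstddef>

// Ask for a readiness callback without asio reading any data, so the
// layered stream classes can do the actual read themselves.
void wait_for_data(boost::asio::ip::tcp::socket& socket)
{
    socket.async_read_some(boost::asio::null_buffers(),
        [](const boost::system::error_code& ec, std::size_t /*n*/)
        {
            if (!ec)
            {
                // Data is now waiting on the socket; read it through
                // whatever layer owns the protocol.
            }
        });
}

(Newer versions of asio also offer socket.async_wait() for the same select()-style notification, but null_buffers is the era-appropriate spelling here.)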