Does CFWriteStreamClose() Flush? - iphone

Suppose you have a CFWriteStream that you call CFWriteStreamClose() on immediately after calling CFWriteStreamWrite(). Both calls are made on the same thread. Will the close operation guarantee that any bytes written to/buffered by the stream are in fact sent to the recipient before the stream is destroyed?
In short, does calling CFWriteStreamClose() flush the stream?

According to the documentation, no, it does not. It "terminates the flow of bytes" on the stream. While CFWriteStreamWrite is synchronous, it does not guarantee that all the bytes you want to write will be written in one call; it returns the number of bytes actually written, and your job is to keep calling it until your data is exhausted or you otherwise decide to stop.
The close function is there to clean up any resources associated with the stream, not to flush it.
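For illustration, a minimal sketch (my own, not from the answer) of that pattern in C: loop on CFWriteStreamWrite until the whole buffer has been accepted, and only then close the stream.
#include <CoreFoundation/CoreFoundation.h>

// Keep writing until every byte has been accepted (or an error occurs),
// then close the stream. CFWriteStreamClose releases the stream's
// resources; it does not flush anything on your behalf.
static Boolean WriteAllThenClose(CFWriteStreamRef stream,
                                 const UInt8 *bytes, CFIndex length)
{
    CFIndex written = 0;
    while (written < length) {
        CFIndex n = CFWriteStreamWrite(stream, bytes + written, length - written);
        if (n <= 0) {               // -1 is an error, 0 means no bytes could be written
            CFWriteStreamClose(stream);
            return false;
        }
        written += n;
    }
    CFWriteStreamClose(stream);
    return true;
}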

Related

How can I know if bytes have not been written by the ChannelWriter?

After I call the ChannelWriter.write() method to send a message, it returns a ChannelFuture. If the receiving end on the other side has disabled reads in epoll_wait, I would expect the socket write to be only partially complete, since zero bytes should actually be written. I thought of using the returned ChannelFuture, but isSuccess() returns true.
How do I know if the ChannelWriter.write() was only partially complete or zero bytes written?
The future is only notified once all of the bytes have been written or the write has failed.

Abort socket operation Windows Phone

I am using pseudo-synchronous sockets in a Windows Phone 7 application. My socket code is based on the sample from http://msdn.microsoft.com/en-us/library/hh202858(v=vs.92).aspx.
The server's sending pattern is somewhat unpredictable. It starts with a fixed-size header that contains the length of the rest of the message. I first read in this header, and then I read the specified number of bytes from the socket.
Since I need to send messages to the server as well, and my attempts at duplexing the socket with a thread for receiving and another thread for sending caused lots of problems, I have a loop like this in my code:
while (KeepConnectionGoing)
{
    byte[] Rcvd = Socket.Receive(); // Returns null if no message received in 50 ms
    if (Rcvd != null)
    {
        ParseMessage(Rcvd);
    }
    if (HasMessageThatNeedsToBeSent())
    {
        byte[] Message = GetMessageToSend();
        Socket.Send(Message);
    }
}
This works fine most of the time, but strange things happen when the message is null.
Because the timeout in the Receive method (see the linked sample) uses a ManualResetEvent, the receive request on the socket is never actually cancelled. Even though the method returns, that request waits around somewhere, and when data becomes available on the socket it chomps up the header. Since the event handler has nothing left to do with the data it received (the method has already returned and its local variables will never be used again), the data basically disappears. The read request that I expect to return the header instead reads the bytes after the header, and I have no idea how long the message is.
I'd like to be able to cancel all outstanding requests if the socket times out. I am using anonymous methods as in the sample, since that simplifies everything and saves me from writing all the state-transfer code myself; as a result, I cannot unhook the event handler. I think, though, that even if I were using a named method as the event handler and unhooked it before the asynchronous operation finished, the callback would still be called. (I haven't tested this; it's just my understanding.)
Right now, the only solution I can see is hacking together some static byte arrays (i.e. having a static byte[] Header; if it is null, I read the header, otherwise I read the message), but that seems like a really inelegant solution and very prone to race conditions.
Is there a better way?
Thanks
It appears there really is no good way to do this. A poll method would be nice, but Silverlight doesn't have it. I hacked together a solution using static flags to tell me what state I am in (has the header been requested? has the message been requested?), a static int for the length, and a static buffer.
At the beginning of the method, either the header or the body can be requested. If the header has already been requested, the thread waits until a valid body length is available. If this wait times out, the header receive operation is still pending and there really is no message available; otherwise, it reads a message of that length.
If the header has not been requested, receive the header. In the event handler, after completion, check whether the control flow has already continued (i.e. the receive operation took too long, so the function has already returned, but the operation is now actually done). Update the length, then request the body unless the operation timed out.

MSG_WAITALL is to recv() as ?? is to send()

From the recv(2) man page:
MSG_WAITALL
This flag requests that the operation block until the full request is satisfied. However, the call may still return less data than requested if a signal is caught, an error or disconnect occurs, or the next data to be received is of a different type than that returned.
It doesn't look like there's an equivalent flag for send(2), which strikes me as strange. Maybe send()s are guaranteed to always accept the whole buffer, but I don't see that written anywhere (and anyway, that seems unlikely to me).
Is there a way to tell send() to wait until it's sent the whole buffer before returning, equivalently to recv()'s MSG_WAITALL?
Edit: I understand that send() just copies data into a buffer in the operating system and that I can't force send() to put data onto the wire. My question is: Can I force send() to block until all the data I gave it has been copied into the OS's buffer?
You can't. send just offloads the buffer to the kernel, then returns.
To quote from the Unix standard:
The send() function shall initiate transmission of a message from the specified socket to its peer (...)
Successful completion of a call to send() does not guarantee delivery of the message.
Note the word "initiate". It doesn't actually send anything; rather, it tells the OS to send the message when it's ready to (when its buffers are full or after some time has passed).
send(2) for TCP does not actually "send" anything on the wire, but places your bytes into the socket send buffer. It tells you how many bytes it was able to copy there in the return value.
Make the send buffer bigger (see setsockopt(2) and tcp(7)) and pay attention to the syscall's return value. In any case, TCP is a stream; you need to manage the application-level protocol yourself.
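For example, a minimal sketch (the helper name send_all is my own) of the usual approach: loop on send(2), using its return value, until every byte has been handed to the kernel's send buffer.
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>

// Returns len once all bytes have been copied into the socket send buffer,
// or -1 on error. This does not mean the peer has received anything.
ssize_t send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t left = len;
    while (left > 0) {
        ssize_t n = send(fd, p, left, 0);
        if (n < 0) {
            if (errno == EINTR)
                continue;   // interrupted before anything was copied; retry
            return -1;
        }
        p += (size_t)n;
        left -= (size_t)n;
    }
    return (ssize_t)len;
}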

Asynchronous IO with CFWriteStream

I'm using CFWriteStreamScheduleWithRunLoop and CFWriteStreamWrite to do asynchronous IO. Here's the question: it seems that only one CFWriteStreamWrite call is safe (free of blocking) for each kCFStreamEventCanAcceptBytes notification, because starting with the second call we can't guarantee that the socket is ready to accept more data. So if we want to make n CFWriteStreamWrite calls, we'll have to repeat "waiting for kCFStreamEventCanAcceptBytes" and "calling CFWriteStreamWrite" n times.
Is this correct?
Thanks!
Same answer as in the other question: call CFWriteStreamCanAcceptBytes() on the stream to see whether it's still safe to write to it.
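A minimal sketch (callback and state names are made up) of that pattern: in the run-loop client callback, write once for the kCFStreamEventCanAcceptBytes event and keep writing only while CFWriteStreamCanAcceptBytes() keeps returning true.
#include <CoreFoundation/CoreFoundation.h>

typedef struct {              // hypothetical per-connection state passed via the client context
    const UInt8 *bytes;
    CFIndex      offset;
    CFIndex      length;
} PendingData;

static void StreamClientCallback(CFWriteStreamRef stream,
                                 CFStreamEventType event, void *info)
{
    PendingData *pending = (PendingData *)info;
    if (event == kCFStreamEventCanAcceptBytes) {
        do {
            CFIndex n = CFWriteStreamWrite(stream,
                                           pending->bytes + pending->offset,
                                           pending->length - pending->offset);
            if (n <= 0)
                break;        // error, or nothing could be written right now
            pending->offset += n;
        } while (pending->offset < pending->length &&
                 CFWriteStreamCanAcceptBytes(stream));
        // Any remaining bytes wait for the next kCFStreamEventCanAcceptBytes event.
    }
}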

What's the difference with buffered synchronous I/O and asynchronous I/O?

When using synchronous I/O that is buffered, such as fread, the underlying read operations are postponed and combined, so I don't think those actually happen synchronously.
So what's the difference between buffered synchronous I/O and asynchronous I/O?
My understanding of async I/O is that you're notified when it's done via an interrupt of some sort, so you can do more I/O at that point. With buffered I/O, you do it and forget about it; you never hear about that particular I/O again.
At least that's how it is with the huge intelligent disk arrays we deal with.
The idea of async I/O is that you begin the I/O and return to doing other stuff. Then, when the I/O is finished, you're notified and can do more I/O - in other words, you're not waiting around for it to finish.
Specifically for the synchronous read case: you request some input and then wait around while it's read from the device. Buffering there just involves reading more than you need so it's available on the next read without going out to the device to get it.
Async reads, you simply start the process to read then you go off and do something else while it's happening. Whether by polling or an interrupt, you later discover that the read is finished and the data is available for you to use.
For writes, I'm not sure I can see the advantage of one over the other. Buffered sync writes will return almost immediately unless the buffer is full (that's the only time when an async write may have an advantage).
Synchronous I/O works on a polling basis: you ask for data and it is returned when available. If it is not available, then for blocking I/O your program blocks until data arrives, while for non-blocking I/O a status code is returned saying no data is available and you get to retry.
Asynchronous I/O works on a callback basis: you pass in a callback function, and it gets called (from a different thread) when data becomes available.
From a programming point of view, synchronous IO would be handled in the same function/process, e.g.
var data0 = synchronousRead();
var data1 = synchronousRead();
whereas asynchronous IO would be handled by a callback.
asynchronousRead(callBack1);
doOtherStuff();
...
function callBack1(data)
{
    data0 = data;
}
Synchronous IO is the "normal" kind where you call a routine, and flow of control continues when the routine has read into your local variable (ignoring writes).
Asynchronous IO involves setting up a buffer variable (static, global, or otherwise long lived / wide scope) and telling the system that you want data put into it when it eventually becomes available. Your program then continues. When the system has the data, it sends you a signal / event / message of some kind, telling you that you now have data in your buffer variable.
Async IO is usually used by GUI programs to avoid stalling the user interface while IO completes.
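As a concrete illustration of that "fill my buffer and tell me later" model, here is a minimal sketch using POSIX AIO (my choice of API, not something named in the answers; the file name is made up, and on Linux it needs -lrt) next to an ordinary blocking read.
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    int fd = open("data.bin", O_RDONLY);        // hypothetical input file
    if (fd < 0)
        return 1;

    // Synchronous: the call blocks until the data is in buf.
    ssize_t n = read(fd, buf, sizeof buf);
    printf("sync read returned %zd bytes\n", n);

    // Asynchronous: start the read, go do other work, collect the result later.
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;
    if (aio_read(&cb) == 0) {
        // ... do other work here ...
        while (aio_error(&cb) == EINPROGRESS)
            ;                                   // polling; a signal or callback also works
        printf("async read returned %zd bytes\n", (ssize_t)aio_return(&cb));
    }
    close(fd);
    return 0;
}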
Look here. Everything you want to know is explained.
link to wikipedia