How to run CFSocket on a background thread for more accurate ping time? - iphone

Background...
I am modifying Apple’s SimplePing example to do an ICMP ping for an iPhone app. The code wraps a native socket in a CFSocket object, specifying a read callback, then adds it as a run loop source on the main thread. When a packet arrives on the socket, the callback is invoked to time the round trip, verify the contents, update the UI, etc.
Question...
What would be the best approach for moving this processing to a background thread so the ping time is as accurate as possible? I need to measure the precise time between the call to the socket “sendto” method and the callback invocation without interruption.
Any examples or pseudo code would be extremely helpful. I have done a lot of reading on threading in Cocoa (NSThread vs. NSOperation, NSRunLoop, etc.), but so far, I can’t quite piece it all together.
Thanks

Do you need to support iOS 3.x? If not, you could look into using Grand Central Dispatch; in this scenario, you would specify the socket as a source for a dispatch queue and give it the highest priority.
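To make that concrete, here is a minimal sketch (iOS 4+). It assumes sock is the native socket the SimplePing code already creates, and readPingResponse is a hypothetical stand-in for the existing receive-and-validate code:

    #include <dispatch/dispatch.h>
    #include <mach/mach_time.h>

    // Hypothetical: the existing SimplePing code that recv()s and
    // validates the ICMP reply, here also given the arrival timestamp.
    extern void readPingResponse(int sock, uint64_t arrival);

    // Instead of wrapping the socket in a CFSocket on the main run loop,
    // attach a read source to a high-priority global queue.
    dispatch_source_t attachReadSource(int sock) {
        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
        dispatch_source_t source =
            dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, sock, 0, queue);
        dispatch_source_set_event_handler(source, ^{
            // Timestamp as early as possible, then read and verify.
            uint64_t arrival = mach_absolute_time();
            readPingResponse(sock, arrival);
        });
        dispatch_resume(source);
        return source;
    }

Take the send timestamp right before the sendto() call; since the handler runs on a high-priority queue rather than the (possibly busy) main run loop, the gap between packet arrival and the timestamp stays small.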

Related

Windows IOCP for real time

I have a question related to IOCP networking strategy for our C++ server application.
Our server simulates a lot of devices that speak over UDP with short (less than 2K) messages. The server is also bound by a soft real-time constraint of 70-100 milliseconds. Currently, the networking part of the application starts a thread for every socket, which leads to hundreds of threads. Their job is to watch the UDP sockets and, when data arrives, copy it into the queue of our real-time thread.
We are being asked to support more and more devices, and I was thinking that rewriting the communication module to use IOCP would make our server more efficient. I developed a prototype based on code I was able to find online, but the combination of
WSARecvFrom (Initiates receive)
GetQueuedCompletionStatus
OnDataRecieved (A method of my class that gets called when data is copied into my buffer)
does not seem efficient at all. The gaps between data arrival on a given socket are 500-600 milliseconds.
I only started prototyping and did not profile a whole lot.
My questions are:
Can IOCP be used for my scenario or is it designed for high throughput only?
Will WSAAsyncSelect (with hidden windows) be more efficient for my use case?
Thanks in advance,
Michael
Edit:
I noticed while profiling that the problem starts with:
- WSASendTo
- GetQueuedCompletionStatus
- OnDataSent
Looks like GetQueuedCompletionStatus doesn't wake up fast enough.
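For reference, the WSARecvFrom / GetQueuedCompletionStatus receive path described earlier boils down to a worker loop like the one below (a simplified C sketch; completion-port setup and error handling are omitted, and the completion key is assumed to be the socket handle):

    #include <winsock2.h>

    // Hypothetical: the question's handler that copies received data
    // into the real-time thread's queue.
    extern void OnDataRecieved(const char *data, DWORD len);

    typedef struct {
        OVERLAPPED  ov;          // must be first so we can cast back from GQCS
        WSABUF      wsabuf;
        char        buffer[2048];
        SOCKADDR_IN from;
        INT         fromLen;
    } PER_IO_DATA;

    // Post an asynchronous receive; completion is reported via the IOCP.
    // A return of SOCKET_ERROR with WSA_IO_PENDING is the normal case.
    void PostRecv(SOCKET s, PER_IO_DATA *io) {
        DWORD flags = 0;
        io->wsabuf.buf = io->buffer;
        io->wsabuf.len = sizeof io->buffer;
        io->fromLen    = sizeof io->from;
        ZeroMemory(&io->ov, sizeof io->ov);
        WSARecvFrom(s, &io->wsabuf, 1, NULL, &flags,
                    (SOCKADDR *)&io->from, &io->fromLen, &io->ov, NULL);
    }

    // Worker thread: dequeue completions, hand off the data, re-arm.
    DWORD WINAPI Worker(LPVOID iocp) {
        for (;;) {
            DWORD bytes; ULONG_PTR key; OVERLAPPED *ov;
            BOOL ok = GetQueuedCompletionStatus((HANDLE)iocp, &bytes,
                                                &key, &ov, INFINITE);
            if (ov == NULL) break;                 // port closed or fatal error
            PER_IO_DATA *io = (PER_IO_DATA *)ov;   // OVERLAPPED is first member
            if (ok && bytes > 0)
                OnDataRecieved(io->buffer, bytes);
            PostRecv((SOCKET)key, io);             // re-arm the receive
        }
        return 0;
    }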

.Net 4.5 TCP Server scale to thousands of connected clients

I need to build a TCP server using C# .NET 4.5+. It must be capable of comfortably handling at least 3,000 connected clients that will send messages every 10 seconds, with message sizes from 250 to 500 bytes.
The data will be offloaded to another process or queue for batch processing and logging.
I also need to be able to select an existing client and send and receive messages (greater than 500 bytes) from within a Windows Forms application.
I have not built an application like this before so my knowledge is based on the various questions, examples and documentation that I have found online.
My conclusions are:
1. Non-blocking async is the way to go. Stay away from creating multiple threads and blocking IO.
2. SocketAsyncEventArgs is complex and really only needed for very large systems. BTW, what constitutes a very large system? :-)
3. The BeginXXX/EndXXX methods (APM) will suffice.
4. Using TAP I can simplify 3 by using Task.Factory.FromAsync, but it only produces the same outcome.
5. Use a global collection to keep track of the connected TCP clients.
What I am unsure about:
1. Should I use a ManualResetEvent when interacting with the TCP client collection? I presume the async events will need to lock access to this collection.
2. What is the best way to detect a disconnected client after I have called BeginReceive? I've found the call gets stuck waiting for a response, so this needs to be cleaned up.
3. Sending messages to a specific TCP client: I'm thinking of a function in a custom TCP session class to send a message. Again, in an async model, would I need to create a timer-based process that inspects a message queue, or would I create an event on a TCP session class that has access to the TcpClient and associated stream? Really interested in opinions here.
4. I'd like to use a single thread for the entire service and use non-blocking principles within it; is there anything I should be mindful of, especially in the context of 1 (ManualResetEvent etc.)?
Thank you for reading. I am keen to hear constructive thoughts and/or links to best practices/examples. It's been a while since I've coded in C#, so apologies if some of my questions are obvious. Tasks and async/await are new to me! :-)
I need to build a TCP server using C# .NET 4.5+
Well, the first thing to determine is whether it has to be bare-bones TCP/IP. If you possibly can, write one that uses a higher-level abstraction, like SignalR or WebAPI. If you can write one using WebSockets (SignalR), then do that and never look back.
Your conclusions sound pretty good. Just a few notes:
SocketAsyncEventArgs is complex and really only needed for very large systems. BTW, what constitutes a very large system? :-)
It's not so much a "large" system in the terms of number of connections. It's more a question of how much traffic is in the system - the number of reads/writes per second.
The only thing that SocketAsyncEventArgs does is make your I/O structures reusable. The Begin*/End* (APM) APIs will create a new IAsyncResult for each I/O operation, and this can cause pressure on the garbage collector. SocketAsyncEventArgs is essentially the same as IAsyncResult, only it's reusable. Note that there are some examples on the 'net that use the SocketAsyncEventArgs APIs without reusing the SocketAsyncEventArgs structures, which is completely ridiculous.
And there are no hard guidelines here: heavier hardware will be able to use the APM APIs for much more traffic. As a general rule, you should build a bare-bones APM server and load test it first, and only move to SAEA if it doesn't work on your target server's hardware.
On to the questions:
Should I use a ManualResetEvent when interacting with the TCP client collection? I presume the async events will need to lock access to this collection.
If you're using TAP-based wrappers, then await will resume on a captured context by default. I explain this in my blog post on async/await.
There are a couple of approaches you can take here. I have successfully written a reliable and performant single-threaded TCP/IP server; the equivalent for modern code would be to use something like my AsyncContextThread class. It provides a context that will cause await to resume on that same thread by default.
The nice thing about single-threaded servers is that there's only one thread, so no synchronization or coordination is necessary. However, I'm not sure how well a single-threaded server would scale. You may want to give that a try and see how much load it can take.
If you do find you need multiple threads, then you can just use async methods on the thread pool; await will not have a captured context and so will resume on a thread pool thread. In this case, yes, you'd need to coordinate access to any shared data structures including your TCP client collection.
Note that SignalR will handle all of this for you. :)
What is the best way to detect a disconnected client after I have called BeginReceive? I've found the call gets stuck waiting for a response, so this needs to be cleaned up.
This is the half-open problem, which I discuss in detail on my blog. The best way (IMO) to solve this is to periodically send a "noop" keepalive message to each client.
If modifying the protocol isn't possible, then the next-best solution is to just close the connection after a no-communication timeout. This is how HTTP "persistent"/"keep-alive" connections decide to close. There's another possible solution (changing the keepalive packet settings on the socket), but it's not as easy (it requires P/Invoke) and has other problems (it's not always respected by routers, not supported by all OS TCP/IP stacks, etc.).
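For reference, the socket-level setting mentioned is the Winsock SIO_KEEPALIVE_VALS ioctl; this is the native call a P/Invoke wrapper would ultimately make. A sketch in C, with the timing values purely illustrative:

    #include <winsock2.h>
    #include <mstcpip.h>   // struct tcp_keepalive, SIO_KEEPALIVE_VALS

    // Enable TCP keepalive probes on s: first probe after 30 s of idle,
    // then one probe per second until the peer answers or the stack gives up.
    int EnableKeepalive(SOCKET s) {
        struct tcp_keepalive ka;
        DWORD bytesReturned = 0;
        ka.onoff             = 1;
        ka.keepalivetime     = 30000;   // idle time before first probe (ms)
        ka.keepaliveinterval = 1000;    // interval between probes (ms)
        return WSAIoctl(s, SIO_KEEPALIVE_VALS, &ka, sizeof ka,
                        NULL, 0, &bytesReturned, NULL, NULL);
    }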
Oh, and SignalR will handle this for you. :)
Sending messages to a specific TCP client: I'm thinking of a function in a custom TCP session class to send a message. Again, in an async model, would I need to create a timer-based process that inspects a message queue, or would I create an event on a TCP session class that has access to the TcpClient and associated stream? Really interested in opinions here.
If your server can send messages to any client (i.e., it's not just a request/response protocol; any part of the server can send messages to any client without the client requesting an update), then yes, you'll need a proper queue of outgoing requests, because you can't (reliably) issue multiple concurrent writes on a socket. I wouldn't have the consumer be timer-based, though; there are async-compatible producer/consumer queues available (like BufferBlock<T> from TPL Dataflow), and it's not that hard to write one if you have async-compatible locks and condition variables.
Oh, and SignalR will handle this for you. :)
I'd like to use a single thread for the entire service and use non-blocking principles within it; is there anything I should be mindful of, especially in the context of 1 (ManualResetEvent etc.)?
If your entire service is single-threaded, then you shouldn't need any coordination primitives at all. However, if you do use the thread pool instead of syncing back to the main thread (for scalability reasons), then you will need to coordinate. I have a coordination primitives library that you may find useful because its types have both synchronous and asynchronous APIs. This allows, e.g., one method to block on a lock while another method wants to asynchronously block on a lock.
You may have noticed a recurring theme around SignalR. Use it if you possibly can! If you have to write a bare-bones TCP/IP server and can't use SignalR, then take your initial time estimate and triple it. Seriously. Then you can get started down the path of painful TCP with my TCP/IP FAQ blog series.

Winsock: Can I call the send function at the same time for different sockets?

Let's say I have a server with many clients connected via TCP; I have a socket for every client, and a sending and receiving thread for every client. Is it safe and possible to call the send function from these threads at the same time, given that no two threads call send on the same socket?
If it's safe and OK, can I stream data to clients simultaneously without one client's send blocking the sends to other clients?
Thank you very much for your answers.
Yes, it is possible and thread-safe. You could have tested it, or worked out for yourself that IIS, SQL Server, etc. wouldn't work very well if it wasn't.
Assuming this is Windows from the tag of "Winsock".
This design (having a send/receive thread for every single connected client) is, overall, not going to scale. Hopefully you are aware of that and you know that you have an extremely limited number of clients (even then, I wouldn't write it this way).
You don't need to have a thread pair for every single client.
You can serve tons of clients with a single thread using non-blocking IO and read/write ready notifications (either with select() or one of the varieties of overlapped IO, such as completion routines or completion ports). If you use completion ports, you can set up a pool of threads to handle socket IO and have them queue the work for your own worker thread(s) or thread pool.
Yes, you can send to and receive from many sockets at once from different threads, but you shouldn't need those extra threads, because you shouldn't be making blocking calls to send/recv at all. When you make a non-blocking call, the amount that can be written immediately is written and the function returns; you then note how much was sent and ask for notification when the socket is next writable.
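That pattern looks roughly like this in C (Winsock flavour; it assumes s was already put into non-blocking mode with ioctlsocket(s, FIONBIO, ...)):

    #include <winsock2.h>

    // Send as much of buf as the stack will take right now; when the
    // socket's buffer is full, wait via select() until it is writable again.
    int send_all_nonblocking(SOCKET s, const char *buf, int len) {
        int sent = 0;
        while (sent < len) {
            int n = send(s, buf + sent, len - sent, 0);
            if (n > 0) {
                sent += n;                      // note how much went out
            } else if (n == SOCKET_ERROR &&
                       WSAGetLastError() == WSAEWOULDBLOCK) {
                fd_set wfds;                    // wait until next writable
                FD_ZERO(&wfds);
                FD_SET(s, &wfds);
                if (select(0, NULL, &wfds, NULL, NULL) == SOCKET_ERROR)
                    return -1;
            } else {
                return -1;                      // real error or connection closed
            }
        }
        return sent;
    }

A real event-driven server would return to its main loop and register interest in writability instead of blocking in select() inside the send path; the sketch just shows the note-what-was-sent / wait-for-writable mechanics.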
I think you might want to consider a different approach, as this isn't simple stuff; if you're using .NET you might get by with TcpListener or HttpListener (both of which use completion ports for you), though be aware that you can't easily disable Nagle's algorithm with those, so if you need interactivity (think of the auto-complete on Google's search page) you probably won't get the performance you want.

How to implement a full duplex channel over TCP with a single thread?

The network lib I'm writing needs to send and receive messages through a TCP socket. Messages can be sent or received at any time, i.e., it should work as a full duplex channel.
I was able to implement such a scenario using two threads: the main thread calling send(), and a dedicated thread mostly blocked in a recv() call.
My question is: is it possible to implement the same scenario with a single thread, i.e., by registering some callback function?
As a side note: I need to implement this scenario in C++, Java and Python.
Yes, it is possible. You need to use an API that allows multiplexed I/O. Under C/C++ and Python you can use select() and non-blocking I/O, so that the only networking call you ever block in is select(). There are also poll() and epoll() and a number of other similar APIs that accomplish the same thing, with varying degrees of efficiency and portability. Java has non-blocking I/O APIs as well.
Another possibility is to use asynchronous I/O, where you tell the OS to start an I/O transaction and it notifies you (by some mechanism) when it has finished the operation. I'm not familiar with that style of network programming, however, so I won't try to give details.
In general, a library would do this by providing an interface into the application's main loop / event loop.
For example, most single-threaded network-using applications would have a main loop that blocks in select(), poll() or similar. Your library would just return a file descriptor that the application should include in its select() / poll() call, and a callback function that the application should call when that file descriptor is readable.
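A minimal sketch of such an interface in C (the netlib_* names are illustrative, not from any particular library):

    #include <stddef.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    // The library manages one TCP socket and exposes it to the host app.
    static int g_fd = -1;                              // set by the lib's connect code
    static void (*g_on_message)(const char *, size_t); // user's message callback

    int  netlib_fd(void) { return g_fd; }
    void netlib_set_on_message(void (*cb)(const char *, size_t)) {
        g_on_message = cb;
    }

    // Called by the app when select() reports g_fd readable; never blocks.
    void netlib_on_readable(void) {
        char buf[4096];
        ssize_t n = recv(g_fd, buf, sizeof buf, 0);
        if (n > 0 && g_on_message)
            g_on_message(buf, (size_t)n);  // real code would reassemble frames
    }

    // The application's single-threaded main loop.
    void app_main_loop(void) {
        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(netlib_fd(), &rfds);
            // ... FD_SET the app's other descriptors here ...
            if (select(netlib_fd() + 1, &rfds, NULL, NULL, NULL) < 0)
                break;
            if (FD_ISSET(netlib_fd(), &rfds))
                netlib_on_readable();
        }
    }

Sends can be issued directly from the same thread (a socket is writable most of the time); a more complete library would also expose write-interest so the loop can select() for writability when a send would block.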

Creating a Delay in Lua

I'm making an IRC client using Lua. I'm using the libraries that came with "Lua for Windows", so I'm using LuaSocket for the comms and IUP for the UI bits.
The problem I'm having is that I'm getting stuck in a loop when I read the IO. I tried the timer in IUP but that didn't seem to work.
I was looking for a way to delay the IO read loop.
I set the timeout for the reads to 0 and that worked.
You are probably making a blocking read on a TCP socket inside the GUI thread. That will lock up your whole application if you do not receive the expected data in a timely manner. Either perform the socket I/O in a separate thread (see Lua Lanes) or use non-blocking I/O (see settimeout).
The Kepler Project is a great resource for guidance on networking applications with Lua, but it is focused on web applications rather than something like an IRC client. For example, the Copas library uses Lua coroutines to handle multiple TCP connections.
Now if you really just wanted to know how to create a delay in Lua, then the Sleep Function article in the lua-users wiki should provide all the information you need.