Way to discover which internet connection type I'm using on the iPhone

I need to know what internet connection is available while my application is running. I checked out the Reachability example from Apple, but it only distinguishes between WiFi and the carrier network. What I need to know is which carrier network is in use: UMTS, EDGE, or GPRS.

Currently, this information is not available. If you want this feature, file a new bug and mention that this is a duplicate of bug 6014806.

You could take a guess at what kind of network you are on by checking the latency of a round trip to your server. If you are getting figures of under 100ms, you are almost certainly on WiFi.
GPRS and EDGE run at around 600ms latency. UMTS/HSDPA is 100-200ms.
Source: my informal testing, and AT&T figures.
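To make the heuristic concrete, here is a rough sketch (in C#, since the idea is platform-agnostic; on the iPhone you'd time a small HTTP round trip instead, as raw ICMP isn't exposed). The host name, timeout, and thresholds are placeholders based on the figures above, and error handling is omitted:

using System;
using System.Net.NetworkInformation;

class ConnectionGuess
{
    static void Main()
    {
        // Average a few round trips to your own server.
        const int samples = 3;
        long total = 0;
        using (var ping = new Ping())
        {
            for (int i = 0; i < samples; i++)
                total += ping.Send("your-server.example.com", 2000).RoundtripTime;
        }
        long avg = total / samples;

        // Thresholds taken from the latency figures above.
        if (avg < 100)
            Console.WriteLine("Probably WiFi");
        else if (avg < 300)
            Console.WriteLine("Probably UMTS/HSDPA");
        else
            Console.WriteLine("Probably EDGE/GPRS");
    }
}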

Rather than hardcoding different versions of your site for 3G, EDGE, GPRS, wifi broadband, why not build a framework which detects connection speed and bootstraps your site up to the appropriate level of bandwidth? That way you would get appropriate results on slow 3G / wifi, and it would naturally scale to the next generation of wireless broadband (e.g. WiMax and 802.11n) with a minimal amount of effort / disruption.
For example, you could determine different bandwidth "checkpoints" (which may correspond to 3G, EDGE, etc.), then you could do something like transfer some small bit of data or cache a small image (such as an icon) common to all bandwidth levels, benchmark the download speed in the background and set the bandwidth level accordingly.
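As a hedged sketch of that idea (the URL, file size, and tier thresholds are made up for illustration; a cache-busting query string avoids ISP caches):

using System;
using System.Diagnostics;
using System.Net;

class BandwidthProbe
{
    static void Main()
    {
        // Time the download of a small test file; the GUID defeats caching.
        string url = "http://example.com/probe.bin?nocache=" + Guid.NewGuid();
        var sw = Stopwatch.StartNew();
        byte[] data = new WebClient().DownloadData(url);
        sw.Stop();

        double kbps = data.Length * 8.0 / 1000.0 / sw.Elapsed.TotalSeconds;

        // Map measured throughput to a content tier (illustrative checkpoints).
        string tier = kbps > 1000 ? "wifi/3.5G assets"
                    : kbps > 300 ? "3G assets"
                    : "EDGE/GPRS assets";
        Console.WriteLine((int)kbps + " kbit/s -> " + tier);
    }
}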

File only
I like Wedge's answer. I'm not sure the file wouldn't be cached by ISPs, though. You could always generate a new file name for each test, or choose a file big enough that the test runs long enough to give a meaningful result.
Simple latency
The idea of using latency is close, but as Shivan mentioned, it's inaccurate. A user pinging from Australia to the UK will see a latency of around 350ms, whereas a local user could see as little as 30-40ms.
Solution: Mean deviation
If you ping your server with 3 packets and then look at the mean deviation (mdev), under 3G it's usually under 50ms. With 2G/EDGE it's almost always over 100ms. I got one outlier at 65ms to AUS.
My tests found a range of 4ms-38ms, with only one exception on a test to Australia from Belgium at 202ms.
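A quick sketch of the check (in C#; this uses the mean absolute deviation as a stand-in for ping's mdev, and the host name and thresholds are just the numbers from above):

using System;
using System.Linq;
using System.Net.NetworkInformation;

class MdevCheck
{
    static void Main()
    {
        // Three pings, then the mean absolute deviation of the round-trip times.
        var ping = new Ping();
        long[] rtts = new long[3];
        for (int i = 0; i < rtts.Length; i++)
            rtts[i] = ping.Send("your-server.example.com", 2000).RoundtripTime;

        double mean = rtts.Average();
        double mdev = rtts.Select(r => Math.Abs(r - mean)).Average();

        Console.WriteLine(mdev < 50 ? "Looks like 3G"
                        : mdev > 100 ? "Looks like 2G/EDGE"
                        : "Inconclusive");
    }
}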
Hope that's useful to someone.

Related

How to speed up slow / laggy Windows Phone 7 (WP7) TCP Socket transmit?

Recently, I started using the System.Net.Sockets API introduced in the Mango release of WP7 and have generally been enjoying it, but I have noticed a disparity in the latency of transmitting data in debug mode vs. running normally on the phone.
I am writing a "remote control" app which transmits a single byte to a local server on my LAN via Wifi as the user taps a button in the app. Ergo, the perceived responsiveness/timeliness of the app is highly important for a good user experience.
With the phone connected to my PC via USB cable and running the app in debug mode, the TCP connection seems to transmit packets as quickly as the user taps buttons.
With the phone disconnected from the PC, the user can tap up to 7 buttons (and thus cause 7 "send" commands with 1-byte payloads) before all 7 bytes are sent. If the user taps a button and waits a little between taps, there seems to be a latency of 1 second.
I've tried setting Socket.NoDelay to both True and False, and it seems to make no difference.
To see what was going on, I used a packet sniffer to see what the traffic looked like.
When the phone was connected via USB to the PC (which was using a Wifi connection), each individual byte was in its own packet being spaced ~200ms apart.
When the phone was operating on its own Wifi connection (disconnected from USB), the bytes still had their own packets, but they were all grouped together in bursts of 4 or 5 packets and each group was ~1000ms apart from the next.
btw, Ping times on my Wifi network to the server are a low 2ms as measured from my laptop.
I realize that buffering "sends" together probably allows the phone to save energy, but is there any way to disable this "delay"? The responsiveness of the app is more important than saving power.
This is an interesting question indeed! I'm going to throw my 2 cents in but please be advised, I'm not an expert on System.Net.Sockets on WP7.
Firstly, performance testing while in the debugger should be ignored. The reason is that the additional overhead of logging the stack trace always slows applications down, no matter the OS/language/IDE. Applications should be profiled for performance in release mode, disconnected from the debugger. In your case it's actually slower disconnected! OK, so let's try to optimise that.
If you suspect that packets are being buffered (and this is a reasonable assumption), have you tried sending a larger packet? Try linearly increasing the packet size and measuring latency. Could you write a simple micro-profiler in code on the device, i.e. using DateTime.Now or the Stopwatch class, to log the latency vs. packet size (see the sketch below)? Plotting that graph might give you some good insight as to whether your theory is correct. If you find that 10-byte (or even 100-byte) packets get sent instantly, then I'd suggest simply pushing more data per transmission. It's a lame hack, I know, but if it ain't broke...
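Something like this (a desktop .NET sketch for brevity; on WP7 you'd wrap the same Stopwatch timing around SendAsync/SocketAsyncEventArgs, and the sizes are arbitrary):

using System;
using System.Diagnostics;
using System.Net.Sockets;

static class SendProfiler
{
    // Assumes an already-connected TCP socket.
    public static void Profile(Socket socket)
    {
        foreach (int size in new[] { 1, 10, 100, 1000 })
        {
            byte[] payload = new byte[size];
            var sw = Stopwatch.StartNew();
            socket.Send(payload);   // blocking send
            sw.Stop();
            Console.WriteLine("size=" + size + " B -> " + sw.ElapsedMilliseconds + " ms");
        }
    }
}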
Finally, you say you are using TCP. Can you try UDP instead? TCP is designed for accurate rather than real-time communication. UDP, by contrast, is not error-checked and you can't guarantee delivery, but you can expect faster (more lightweight, lower-latency) performance from it. Systems such as Skype and online games are built on UDP, not TCP. If you really need acknowledgement of receipt, you could always build your own micro-protocol over UDP, using your own cyclic redundancy check for error detection and a request/response (acknowledgement) protocol.
Such protocols do exist, take a look at Reliable UDP discussed in this previous question. There is a Java based implementation of RUDP about but I'm sure some parts could be ported to C#. Of course the first step is to test if UDP actually helps!
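For comparison, the UDP version of your one-byte command is tiny (a desktop .NET sketch using UdpClient; on WP7 Mango you'd use a Socket with SocketType.Dgram instead, and the address/port here are placeholders):

using System.Net.Sockets;

class UdpRemote
{
    static void Main()
    {
        var udp = new UdpClient("192.168.1.50", 9000);  // placeholder LAN server
        udp.Send(new byte[] { 0x01 }, 1);  // fire-and-forget: no Nagle, no ACK wait
    }
}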
Found this previous question which discusses the issue. Perhaps a WP7 issue?
Poor UDP performance with Windows Phone 7.1 (Mango)
Still would be interested to see if increasing packet size or switching to UDP works
OK, so neither suggestion worked. I found this description of the Nagle algorithm, which groups packets as you describe. Setting NoDelay is supposed to help but, as you say, doesn't.
http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.nodelay.aspx
Also, see this previous question, where KeepAlive and NoDelay were set on/off to manually flush the queue. His evidence is anecdotal but worth a try. Can you give it a go and edit your question to post more up-to-date results?
Socket "Flush" by temporarily enabling NoDelay
Andrew Burnett-Thompson here already mentioned it, but he also wrote that it didn't work for you. I do not understand and I do not see WHY. So, let me explain that issue:
Nagle's algorithm was introduced to avoid a scenario where many small packets have to be sent through a TCP network. Every current state-of-the-art TCP stack enables Nagle's algorithm by default!
Because: TCP itself adds a substantial amount of overhead to any data transfer passing through an IP connection, and applications usually do not care much about sending their data in an optimized fashion over those TCP connections. So, all in all, the Nagle algorithm working inside the OS's TCP stack does a very, very good job.
A better explanation of Nagle's algorithm and its background can be found on Wikipedia.
So, your first try: disable Nagle's algorithm on your TCP connection, by setting option TCP_NODELAY on the socket. Did that already resolve your issue? Do you see any difference at all?
If not so, then give me a sign, and we will dig further into the details.
But please, look twice for those differences: check the details. Maybe after all you will get an understanding of how things in your OS's TCP/IP-Stack actually work.
Most likely it is not a software issue. If the phone is using WiFi, the delay could be upwards of 70ms (depending on where the server is, how much bandwidth it has, how busy it is, interference to the AP, and distance from the AP), but most of the delay is just the WiFi. Using GSM, CDMA, LTE or whatever technology the phone is using for cellular data is even slower. I wouldn't imagine you'd get much lower than 110ms on a cellular device unless you stood underneath a cell tower.
Sounds like your reads/writes are buffered. You may try setting the NoDelay property on the Socket to true, and you may consider trimming the Send and Receive buffer sizes as well. The reduced responsiveness may be a by-product of there not being enough WiFi traffic. I'm not sure if adjusting the MTU is an option, but reducing the MTU may improve response times.
All of these are only options for a low-bandwidth solution, if you intend to shovel megabytes of data in either direction you will want larger buffers over wifi, large enough to compensate for transmit latency, typically in the range of 32K-256K.
// Disable Nagle and shrink the buffers for a low-latency, low-bandwidth link.
var socket = new System.Net.Sockets.Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)
{
    NoDelay = true,          // send each byte immediately instead of coalescing
    SendBufferSize = 3,      // tiny buffers for a 1-byte-per-tap protocol
    ReceiveBufferSize = 3,
};
I didn't test this, but you get the idea.
Have you tried setting SendBufferSize = 0? In C, you can disable Winsock buffering by setting SO_SNDBUF to 0, and I'm guessing SendBufferSize means the same thing in C#.
Were you using a Lumia 610 and a MikroTik access point, by any chance?
I have experienced this problem; it made the Lumia 610 turn off its WiFi radio as soon as the last connection was closed. This added a perceivable delay compared to, for example, the Lumia 800. All connections were affected; simply switching WiFi off made all apps faster. My admin says it was some feature the MikroTiks were not supporting at the time, combined with WMM settings. Strangely, most other phones managed just fine, so at first we blamed the cheapness of the 610.
If you can still replicate the problem, I suggest trying the following:
open another connection in the background and ping it constantly.
use 3G/GPRS instead of WiFi (requires exposing your server to the internet).
use a different (or upgraded) phone.
use a different (or upgraded) AP.

What software can I use to simulate a cellular connection

We need to do some stress testing of our system, and we would like to be able to simulate non-ideal situations: things like latency, jitter, etc. In particular, we would like to simulate behavior of data over a cellular network.
Do you know of any hardware/software/both solutions that would work?
Thanks
Ideally you would get some idea about parametrization from a real simulator like ns3. Or write one yourself.
Additionally, you could use the Linux kernel's built-in QoS stack, which provides the netem module for exactly this purpose. netem provides network emulation functionality for testing protocols by emulating the properties of wide area networks. The current version emulates variable delay (jitter), loss, packet corruption, duplication and re-ordering. It supports distribution-based operation, or you can script it to change values at runtime; for example, "tc qdisc add dev eth0 root netem delay 300ms 50ms loss 1%" adds 300ms of delay with 50ms of jitter and 1% packet loss.
A WiFi card with an older access point/router: simply take the test station to the edge of the range and you should be able to reliably cause the connection to fail and reconnect. The only reason I suggest an older model is that the range generally wasn't that fantastic on the older 802.11b gear.
Beyond giving you a lossy connection, though, I'm not sure this setup would reproduce the particular characteristics of a cellular link, but it should work as a basic test.
If you are in the US, an iPhone on AT&T would probably do it..
Probably need something along the lines of:
USRP Board, OpenBTS, TrixBox/Asterisk
You can check out OpenBTS (http://openbts.sourceforge.net/) and see if it will do what you need. You could have it use the USRP board as a tower, then use it similar to a loopback. I do know that the above combination will allow phones to connect to it like a cell tower (see Burning Man/DEFCON 18), so in theory it should allow you to broadcast out to saturate the spectrum.
OpenBTS-UMTS includes 3G data: http://openbts.org/w/index.php?title=OpenBTS-UMTS
You can download and compile it on Ubuntu 16.04; there are some dependency issues on Ubuntu 18.04.
As for hardware, I used both the Ettus USRP N210 and the X310.

Available bandwidth

I want to write code to measure the available bandwidth, using one of the established algorithms, e.g. Spruce or Pathload.
I want to do this in C++ on Windows.
I have found Linux code, but I need a Windows-based version that can measure both the up and down bandwidth.
Bandwidth for what resource? If this is a network resource, there isn't anything in any language or the OS that will give you a real estimation of bandwidth. You would need to call out to something at the other end of the link you need to traverse and get an estimate of the bandwidth at that point in time.
Or, better said: you would need to download a file from a web server to test the download speed of someone's home Internet connection. Keep in mind that the numbers obtained are only accurate for that point in time, though, as the bandwidth of any resource can be higher or lower when you actually use it, since external factors always affect bandwidth (other processes, users, etc.).
Why do you need the bandwidth and for what resource?
If you're asking, you're not up to it. Converting Linux code to Windows requires knowledge of both platforms, which you clearly don't have.
In my experience, almost all network-friendly bandwidth estimation algorithms (Pathload, pathChirp, etc.) are unsuitable for high-speed links. Those older algorithms are suitable and practical when the bandwidth is around 1 Mbit/s. They also assume the network is 'clean' (no other traffic). Nowadays, almost none of these 'network friendly' algorithms is practical.
Other bandwidth estimation tools like netperf and NetCPS are based on brute force. Brute-force methods are not network friendly. Most of them have problems with latency (if TCP-based) or are limited by HDD read/write speed (if they write to disk instead of memory).
IMO, the best bandwidth estimation tool is UDP-based (not influenced by latency, unlike TCP) brute force (not influenced by other traffic) with custom flow control tuned for high-speed networks.
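The sender side of such a brute-force tool is structurally simple. A very rough sketch (in C#, although the question asks for C++; the receiver address is a placeholder, and a real tool would ramp the rate and watch for loss rather than blasting blindly):

using System;
using System.Diagnostics;
using System.Net.Sockets;

class UdpBlast
{
    static void Main()
    {
        var udp = new UdpClient("192.0.2.10", 5000);  // placeholder receiver
        byte[] packet = new byte[1400];  // stay under a typical Ethernet MTU
        long sent = 0;
        var sw = Stopwatch.StartNew();
        while (sw.ElapsedMilliseconds < 1000)
        {
            udp.Send(packet, packet.Length);
            sent += packet.Length;
        }
        // The receiver's bytes-received per second is the actual estimate.
        Console.WriteLine("Offered load: " + sent * 8 / 1e6 + " Mbit/s");
    }
}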
Another problem you will encounter is code optimization. You must ensure that your code is highly optimized. If you use C#, the GC will pose a potential problem.

Apple Push Notification Service server load?

I'm preparing to set up a APNS message server, and I was wondering if anybody has done any analysis on APNS server load that they would be able to share. Minimum server specs, maximum messages per second, anything like that.
Thanks!
edit: I'm planning to implement this with .NET, but info about any platform would be incredibly useful.
For my application (which has about 24,000 downloads) I am seeing an average of about 1,300 messages sent per day.
Those are low numbers, but then my client base isn't that large either. But I figure I might as well contribute some info. :-)
My notification provider is idle most of the time so there is MUCH more capacity available if I need it.
It's also using very little RAM at this point (somewhere around 13 MB; I implemented my provider in Python and suspect most of that is taken up by the runtime).
I am running on a Media Temple dv (specifically the Base configuration).
I haven't extrapolated the numbers to find my theoretical maximum, but because of the niche market of my application it's not something that worries me at this point. I have lots of capacity to scale with.
Hope that helps a bit.
chris.
One of the Apple devs mentioned that 100,000 messages is not considered a large amount. That doesn't really answer your question, but I wouldn't expect sending the actual messages to be the bottleneck.
Any server that can handle your database work should be fine for sending the messages out. The protocol is intentionally light-weight.
There is no maximum number of messages per second.
You should consider that every message must be smaller than 256 bytes; otherwise Apple will reject your messages. You can also check out MonoPush; AFAIK they are building their product on top of the .NET Framework.
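For example, a small guard before handing a notification to your provider connection (a hedged sketch; the helper name is made up):

using System;
using System.Text;

static class ApnsPayload
{
    // Enforce the 256-byte APNS payload limit mentioned above.
    public static byte[] Build(string json)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(json);
        if (bytes.Length > 256)
            throw new ArgumentException("APNS payload exceeds 256 bytes: " + bytes.Length);
        return bytes;
    }
}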

What's the best way to synchronize times to millisecond accuracy AND precision between machines?

From what I understand, the crystals on PC's are notorious for clock skew. If clocks are always skewing, what is the best way to synchronize clocks between machines with millisecond accuracy and precision? From what I've found, NTP and PTP are possible solutions, but I was wondering if anybody had any experience on stackoverflow.com!
I understand NTP is the popular choice, but am wondering if anybody has had any experience with PTP (IEEE1588)
Just run the standard NTP daemon.
It does have options to take input from several GPS devices as well as talking to network servers.
Edit: I was referring to http://www.ntp.org/, not the one that comes with Windows.
I don't have any suggestion as to what NTP clients are best for windows, but for Unix machines there's no real reason to not run NTP.
Here's some 15-year-old software that syncs to within a hundredth of a millisecond. (My team wrote it when NTP wasn't good enough for our lab.)
From the conference paper's abstract: "A distributed clock for networked commodity PC's. With no extra hardware, this clock correlates sensor data from multiple PC's with latency and jitter under 10 microseconds average, 100 microseconds worst case."
Source code: https://github.com/camilleg/clockkit
(Until 2020 Feb 13 it was at http://zx81.isl.uiuc.edu/clockkit/, now offline.)
You cannot synchronize machines to the level of milliseconds by exchanging data, because any data exchange itself already takes at least milliseconds to happen and thus spoils your result! Even protocols that try to first measure how long a data transfer takes and then sending out the time info (taking the measured delay into account) are just a bit better than average but they are still not good since not every data transfer takes equal time (just constantly ping a server on the Internet and see how every ping has a different delay).
The only way to really synchronize two computers in the milliseconds range is by having them both obtain the time from the same source via a transfer method that has no unknown or constantly changing delay. E.g. if both receive a satellite signal, that broadcasts the time. The signal will always have a constant delay (from satellite to earth) and they are both receiving it almost within the same nanosecond.
Germany, for example, has a radio-controlled time signal. Somewhere in the country is an atomic clock (accurate to the nanosecond for hundreds of years), and a transmitter permanently broadcasts the current time on a given frequency all over the country. Alarm clocks and even wristwatches exist that can receive this signal and permanently synchronize with it (well, not really permanently; most models do so only once every 24 hours to save battery). Such receiver devices also exist for computers and come with software that can continuously synchronize your computer's clock with that time signal.
As far as I know, GPS also sends time information (either that, or the time can be calculated somehow from the GPS data; I'm not too familiar with the GPS protocol). So attaching a GPS receiver to both computers can probably also get them synchronized to the millisecond. If your synchronization is done via the Internet, however, don't expect better synchronization than one computer being at most 20 milliseconds off.
To follow up on the earlier comments: NTP is not as accurate as people here love to claim:
"NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions."
Source: Wikipedia
I would rather keep them all in sync without any network involved, and further keep them in sync with official GMT time. Here GPS is probably the only way to get really accurate results on all machines, and not only down to the millisecond, but actually down to microseconds.
I use NTP throughout the whole network at my company and it works rather well. The key is to have one authoritative server on the local network and have every machine on the network synchronize with it. The best setup is to have a radio clock installed on that server. NTP is great because it does not just correct the clock once in a while; it actually calculates and corrects the clock frequency, making it more accurate.
Once I had NTP set up on the network, I opened five VNC sessions to different servers and sat there watching the clocks. The clocks on all servers were in sync to within milliseconds, and that was right after setup. It gets more accurate as it runs.
Solutions based on NTP or SNTP can work very well, but it strongly depends on how well the client is implemented.
Certainly, the answer to this question is not to use the default Windows time service if you want sub-second precision. It is notoriously poor at maintaining a stable time base on a machine, typically overshooting corrections to the point of near-instability, especially when machines have fairly inaccurate timebases to start with, which is common. Assume the standard built-in Windows tools can reliably hold accuracy to only several seconds between machines; I typically see swings of as much as 30 seconds between machines, even after tweaking the registry settings.
The freeware tool Achron is a pretty good solution for getting down into the plus/minus 500 millisecond range. Doing better than that will require a more industrial-strength solution, such as something from Greyware.
I've researched (read: Googled) this topic lately, and here is what I have learned so far:
To get millisecond accuracy (or better) you need hardware support: a GPS source, or hardware timestamping (and a good time source) with PTP.
Hardware timestamping for PTP is done with a supported NIC; Intel makes them.
Without hardware timestamping, the accuracy of NTP and PTP is similar.
(I have not used PTP before, but) I read that NTP is easier to set up.
My limited experience with GPS time sources (over serial) varies. It works great if you can get it to work, but there is a device in our data center that I never managed to get working...
If your machines are in a colo, ask your DC what they can provide, so you don't have to decide. :D
HTH
NTP is definitely the way to go. It's basically fire-and-forget, as long as you let it through the firewall on your local master (which is typically the firewall or router machine).
As you've already suggested, NTP is the industry standard solution to this problem, but it either requires Internet connectivity or a stratum 0 source (an accurate hardware clock, like a GPS receiver with a computer interface).
If you're using Internet connectivity, consider using the NTP Pool.
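For a feel of what's on the wire, here is a minimal one-shot SNTP query sketch in C# (no skew correction or retries; pool.ntp.org and the timeout are example choices, and a real client like ntpd does far more):

using System;
using System.Net;
using System.Net.Sockets;

class SntpQuery
{
    static void Main()
    {
        // 48-byte SNTP request: LI=0, Version=3, Mode=3 (client).
        var request = new byte[48];
        request[0] = 0x1B;

        using (var udp = new UdpClient("pool.ntp.org", 123))
        {
            udp.Client.ReceiveTimeout = 3000;
            udp.Send(request, request.Length);
            var remote = new IPEndPoint(IPAddress.Any, 0);
            byte[] resp = udp.Receive(ref remote);

            // Server transmit timestamp at offset 40: seconds and fraction since 1900, big-endian.
            uint secs = (uint)((resp[40] << 24) | (resp[41] << 16) | (resp[42] << 8) | resp[43]);
            uint frac = (uint)((resp[44] << 24) | (resp[45] << 16) | (resp[46] << 8) | resp[47]);
            double ms = secs * 1000.0 + frac * 1000.0 / 4294967296.0;
            DateTime utc = new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddMilliseconds(ms);
            Console.WriteLine("NTP time (UTC): " + utc);
        }
    }
}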
Keep in mind as well that the hardware system clock (i.e. the inaccurate one) is only read when the machine starts up; if you're talking about server machines, you're not going to lose time because of it.