Apple Push Notification Service server load? (iPhone)

I'm preparing to set up an APNS message server, and I was wondering if anybody has done any analysis on APNS server load that they would be able to share. Minimum server specs, maximum messages per second, anything like that.
Thanks!
edit: I'm planning to implement this with .NET, but info about any platform would be incredibly useful.

For my application (which has about 24,000 downloads) I am seeing an average of about 1,300 messages sent per day.
Those are low numbers, but then my client base isn't that large either. But I figure I might as well contribute some info. :-)
My notification provider is idle most of the time so there is MUCH more capacity available if I need it.
It's also using very little RAM at this point (somewhere around 13 MB; I implemented my provider in Python and suspect most of that is taken up by the runtime).
I am running on a Media Temple dv (specifically the Base configuration).
I haven't extrapolated the numbers to find what my theoretical maximum would be, but because of the niche market of my application, it's not something that worries me at this point. I have lots of capacity to scale with.
Hope that helps a bit.
chris.

One of the Apple devs mentioned that 100,000 messages is not considered a large amount. That doesn't really answer your question, but I wouldn't expect sending the actual messages to be the bottleneck.
Any server that can handle your database work should be fine for sending the messages out. The protocol is intentionally light-weight.
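To give a sense of how lightweight it is, here's a rough Python sketch of sending one notification over the (legacy) binary interface on gateway.push.apple.com:2195. The certificate path and device token are placeholders, and you'd keep the TLS connection open and reuse it rather than reconnecting per message:

```python
import json
import socket
import ssl
import struct

# Placeholders: substitute your own provider certificate and a real device token.
CERT_FILE = "apns_cert.pem"                  # combined cert + key PEM (assumption)
GATEWAY = ("gateway.push.apple.com", 2195)   # legacy binary gateway
DEVICE_TOKEN = "0" * 64                      # 64 hex chars = 32-byte token

def send_notification(alert_text):
    payload = json.dumps({"aps": {"alert": alert_text}}).encode("utf-8")
    token = bytes.fromhex(DEVICE_TOKEN)

    # "Simple" frame: command (0), token length, token, payload length, payload.
    frame = struct.pack("!BH32sH", 0, len(token), token, len(payload)) + payload

    context = ssl.create_default_context()
    context.load_cert_chain(CERT_FILE)
    with socket.create_connection(GATEWAY) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=GATEWAY[0]) as tls_sock:
            tls_sock.sendall(frame)   # in practice, keep this connection open and reuse it

send_notification("Hello from the provider")
```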

There is no maximum number of messages per second.

You should also consider that every message payload must be smaller than 256 bytes; otherwise, Apple will reject your messages. You could also check out MonoPush. AFAIK they are building their product on top of the .NET Framework.
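As for the 256-byte limit: since it applies to the JSON-encoded payload, a quick sanity check before sending might look something like this sketch (Python; the helper name is just for illustration):

```python
import json

MAX_PAYLOAD_BYTES = 256  # the legacy APNS payload limit discussed above

def build_payload(alert_text, badge=1):
    payload = json.dumps(
        {"aps": {"alert": alert_text, "badge": badge}},
        separators=(",", ":"),   # no extra whitespace, every byte counts
        ensure_ascii=False,
    ).encode("utf-8")
    if len(payload) > MAX_PAYLOAD_BYTES:
        raise ValueError(
            f"Payload is {len(payload)} bytes; APNS rejects anything over "
            f"{MAX_PAYLOAD_BYTES} bytes, so trim the alert text."
        )
    return payload
```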

Related

Low connectivity protocols or technologies

I'm trying to improve the reliability of a server-app-website architecture that another programmer developed.
At the moment, Android smartphones open a TCP connection to a server component to exchange data. The server takes the data and writes it into a DB, and another user can then look at the data through a website. The problem is that the smartphones are very often in locations where connectivity is really bad. As a consequence, the smartphones lose the TCP connection and have a hard time reconnecting. My question is whether there are any protocols that are lightweight or accommodating enough of bad connectivity that the data exchange could work better or more reliably.
For example, I was thinking about replacing the raw TCP interface with a RESTful API, but I don't really know how well REST works in this scenario, as I don't have any experience in this area.
Maybe useful to know for answering this question: the server component is programmed in C#, and the connecting components are Android smartphones.
Please understand that I'm not adding any code to this question, because in my opinion it's purely a theoretical question.
Thank you in advance!
REST runs over HTTP, which runs over TCP, so it would have the same connectivity issues.
Moving up the stack to the application, you could perhaps think in terms of 'interference'. I quite often have to use technical equipment in remote areas with limited reception, and it reminds me of trying to communicate in a storm. If you're trying to get someone to do something in a storm where they can hardly hear you and the words get blown away (dropped signal), you don't read them the manual on how to fix something; you shout key words such as 'handle', 'pull', 'pull', 'PULL', 'ok'. The information reaches them in small bursts you can repeat (pull, what? pull, eh? PULL! oh, righto!).
Can you redesign the communications between the android app and the server so the server can recognise key 'words' with corresponding data and build up the request over a period of time? If you consider idempotency, each burst of data would not alter the request if it has already been received (pull, PULL!) and over time the android app could send/receive smaller chunks of the request. If the signal stays up, just keep sending. If it goes down, note which parts of the request haven't been sent and retry them when the signal comes back.
So you're sending the request jigsaw-style but the server knows how to reassemble the pieces in the right order. A STOP word at the end tells the server ok this request is complete, go work on it. Until that word arrives the server can store the incomplete request or discard it if no more data comes in.
If the server responds to the first request chunk with an ID, the app can use that ID to fetch the response and keep trying until the full response comes back, at which point the server can remove the response from its jigsaw cache. A fair amount of work, though.
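To make that concrete, here is a rough Python sketch of the client side of such a "jigsaw" upload. The endpoint URL, chunk size and field names are all invented for illustration; the important properties are the request id plus chunk index (so resent chunks are idempotent) and the retry loop that resumes where it left off when the signal comes back:

```python
import time
import uuid

import requests  # assumed available on the client for this sketch

ENDPOINT = "https://example.com/api/upload-chunk"   # hypothetical endpoint
CHUNK_SIZE = 512                                    # small bursts survive bad links better
MAX_ATTEMPTS = 10

def send_in_chunks(data: bytes) -> str:
    request_id = str(uuid.uuid4())
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

    for index, chunk in enumerate(chunks):
        for attempt in range(MAX_ATTEMPTS):
            try:
                # The same (request_id, index) can be resent safely: the server
                # just overwrites that piece of the jigsaw (idempotent).
                resp = requests.post(
                    ENDPOINT,
                    json={
                        "request_id": request_id,
                        "index": index,
                        "total": len(chunks),
                        "payload": chunk.hex(),
                    },
                    timeout=5,
                )
                resp.raise_for_status()
                break                      # this piece made it, move on
            except requests.RequestException:
                time.sleep(2 ** attempt)   # back off and retry when the signal returns
        else:
            raise RuntimeError(f"chunk {index} never got through")

    # The final "STOP word": tells the server the request is complete.
    requests.post(ENDPOINT, json={"request_id": request_id, "stop": True}, timeout=5)
    return request_id
```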

RTP/RTSP start-up latency: would this method help to reduce it, and if so, why don't we have it?

This is probably not the best forum for such a specialized question, but at the moment I don't know of a better one (open to suggestions/recommendations).
I work on a video product which for the last 10+ years has been using a proprietary communications protocol (DCOM-based) to send video across the network. A while ago we recognized the need to standardize, and we are currently almost at the point of ripping out all that DCOM baggage and replacing it with a fully compliant RTP/RTSP client/server framework.
One thing we noticed during testing over the last few months is that when we switch the client to RTP/RTSP, there's a noticeable increase in start-up latency. The problem is that it's not us, it's RTSP.
BEFORE (DCOM): we would send one DCOM command and before that command even returned back to the client, the server would already be sending video. -- total latency 1 RTT
NOW (RTSP): This is the sequence of commands, each one being a separate network request: DESCRIBE, SETUP, SETUP, PLAY (assuming the session has audio and video) -- total of 4 RTTs.
It works as designed; unfortunately, it feels like a step backwards, because the prior user experience was actually better.
Can this be improved? If you stay with the standard, the short answer is no. However, my team fully controls our entire RTP/RTSP stack, and I've been thinking we could introduce a new RTSP command (without touching any of the existing commands, so we remain fully interoperable) as a solution: DESCRIBE_SETUP_PLAY.
We could send this one command and pass in the types of streams we're interested in (typically there's one video stream and 0..1 audio streams). The response would include the full SDP text as well as all the port information, and, just like before, the server would start streaming immediately without waiting for anything else from the client.
Would this work? Is there any downside that I may not be seeing? I'm curious why this wasn't considered for (or was dropped from) the official spec, since the latency is definitely noticeable even on a local intranet.
FYI, it is possible according to the RTSP 1.0 specification:
9.1 Pipelining
A client that supports persistent connections or connectionless mode MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received.
The RTSP 2.0 draft also contains support for pipelining.
However none of the clients/servers I've used implement it AFAIK.
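As a rough illustration of what pipelining buys you, here is a Python sketch that fires DESCRIBE and SETUP down one persistent connection without waiting between them, then reads the responses back in order. The server address, track URL and transport header are placeholders, and it assumes the server actually implements section 9.1 and that the client already knows the track URL. Note that PLAY still needs the server-assigned Session id from the SETUP response, which is part of why full one-round-trip pipelining needs something beyond RTSP 1.0:

```python
import socket

# Placeholders: server address and track URL are assumptions for this sketch.
SERVER = ("rtsp-server.example.com", 554)
BASE_URL = "rtsp://rtsp-server.example.com/stream"
VIDEO_TRACK = BASE_URL + "/trackID=1"   # assumes the track URL is known up front

def request(method, url, cseq, extra_headers=()):
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}", *extra_headers, "", ""]
    return "\r\n".join(lines).encode("ascii")

def read_response(sock_file):
    # Minimal parse: status line, headers, then any body (e.g. the SDP from DESCRIBE).
    status = sock_file.readline().decode().strip()
    headers = {}
    while True:
        line = sock_file.readline().decode().strip()
        if not line:
            break
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    body = sock_file.read(int(headers.get("content-length", 0)))
    return status, headers, body

with socket.create_connection(SERVER) as sock:
    f = sock.makefile("rb")

    # Pipeline: send DESCRIBE and SETUP back to back, without waiting (RFC 2326, 9.1).
    sock.sendall(request("DESCRIBE", BASE_URL, 1, ["Accept: application/sdp"]))
    sock.sendall(request("SETUP", VIDEO_TRACK, 2,
                         ["Transport: RTP/AVP;unicast;client_port=5000-5001"]))

    describe_status, _, sdp = read_response(f)
    setup_status, setup_headers, _ = read_response(f)

    # PLAY still depends on the Session id the server assigned in the SETUP response.
    session = setup_headers["session"].split(";")[0]
    sock.sendall(request("PLAY", BASE_URL, 3, [f"Session: {session}"]))
    print(read_response(f)[0])
```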

How can I measure the breakdown of network time spent in iOS?

Uploads from my app are too slow, and I'd like to gather some real data as to where the time is being spent.
By way of example, here are a few stages a request goes through:
Initial radio connection (significant source of latency in EDGE)
DNS lookup (if not cached)
SSL/TLS handshake.
HTTP request upload, including data.
Server processing time.
HTTP response download.
I can address most of these (e.g. by powering up the radio earlier via a dummy request, establishing a dummy HTTP 1.1 connection, etc.), but I'd like to know which ones are actually contributing to network slowness, on actual devices, with my actual data, using actual cell towers.
If I were using WiFi, I could track a bunch of these with Wireshark and some synchronized clocks, but I need cellular data.
Is there any good way to get this detailed breakdown, short of having to (gak!) use very low-level socket functions to reproduce my vanilla HTTP request?
OK, the method I would use is not easy, but it does work. Maybe you've already tried this, but bear with me.
I get a time-stamped log of the sending time of each message, the time each message is received, and the time it is acted upon. If this involves multiple processes or threads, I have each one generate a log, and then merge them into a common timeline.
Then I plot out the timeline. (A tool would be nice, but I did it by hand.)
What I look for is things like 1) messages re-transmitted due to timeouts, 2) delays between the time a message is received and the time it's acted upon.
Usually, this identifies problems that I can fix in the code I can control. This improves things, but then I do it all over again, because chances are pretty good that I missed something the last time.
The result was that a system of asynchronous message-passing can be made to run quite fast, once preventable sources of delay have been eliminated.
There is a tendency in posting questions about performance to look for magic fixes to improve the situation. But, the real magic fix is to refine your diagnostic technique so it tells you what to fix, because it will be different from anyone else's.
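One way to get that kind of timestamped log for a single request is to drive the stages yourself and record a monotonic timestamp at each boundary. The sketch below is Python rather than iOS code, and example.com is a stand-in for your server, but the phase boundaries it logs mirror the list in the question (DNS, TCP connect, TLS handshake, request upload, time to first byte, response download):

```python
import socket
import ssl
import time

HOST = "example.com"   # stand-in for your upload server
PORT = 443

def timed(label, marks, func, *args, **kwargs):
    # Run one stage and record how long it took.
    start = time.monotonic()
    result = func(*args, **kwargs)
    marks.append((label, time.monotonic() - start))
    return result

marks = []

# DNS lookup
addr = timed("dns", marks, socket.getaddrinfo, HOST, PORT, socket.AF_INET, socket.SOCK_STREAM)
ip = addr[0][4][0]

# TCP connect
sock = timed("tcp_connect", marks, socket.create_connection, (ip, PORT), 10)

# TLS handshake
ctx = ssl.create_default_context()
tls = timed("tls_handshake", marks, ctx.wrap_socket, sock, server_hostname=HOST)

# Request upload (a small GET here; your real upload body goes in its place)
request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode()
timed("request_upload", marks, tls.sendall, request)

# Server processing time shows up as time to first byte...
timed("time_to_first_byte", marks, tls.recv, 1)

# ...and the rest is response download.
def drain(s):
    while s.recv(4096):
        pass
timed("response_download", marks, drain, tls)
tls.close()

for label, seconds in marks:
    print(f"{label:>20}: {seconds * 1000:7.1f} ms")
```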
An easier solution would be to open a long-polling connection to the server once the application is launched (you can choose beforehand when that connection needs to be established and when to disconnect), but that is a bit of a hack if you want to avoid all the packet sniffing, given the limited API exposure iOS provides.

MSMQ scalability

We're looking at setting up an MSMQ system with ~8,000 clients and one queue per client. On average the system needs to handle ~2,000 messages daily from each client, where the message size will range from 1 KB to the MSMQ maximum (4 MB).
Is this at all possible with MSMQ?
I know I'm not providing a lot of details here, but I just want feedback on whether or not anyone has been able to run a similar setup.
Well, the broad-brush answer is yes, it will scale out no problem, as it's a mature product that has been around for over 10 years.
There are a number of very large implementations out there, mostly banks. Barclays, for example, uses it for, I think, between 60k and 90k desktops, but only if the system has been correctly designed and each of your processing boxes has enough memory and suitable network bandwidth.
As regards messaging throughput, 2,000 messages a day is nothing, really. I was working in the City a few years ago, where one derivatives FX app was processing 1,600 messages/sec.
I can't offer you any advice without specifics, but I hope that helps.
Bob.
In theory you can do this, but you would have a maintenance nightmare. Instead, employ one (or a few) customer-facing queue(s) and deploy the Content Routing and/or Competing Consumers patterns downstream.
Throughput is not an issue with your projected volumes, but remember that there are physical disk files backing your queues. If you deploy 8,000 queues you risk running into disk I/O issues unless you have a RAID solution.
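This isn't MSMQ or .NET code, but here's a small Python sketch of the shape being suggested: one inbound queue instead of 8,000 per-client queues, with a pool of competing consumers pulling from it and routing on the message content rather than on which queue it arrived in. The queue and message fields are stand-ins for whatever your real transport provides:

```python
import queue
import threading

# One customer-facing queue instead of one queue per client.
inbound = queue.Queue()

def worker(worker_id):
    # Competing consumers: several workers pull from the same queue;
    # whichever is free takes the next message.
    while True:
        message = inbound.get()
        if message is None:          # shutdown sentinel
            inbound.task_done()
            break
        client_id, body = message
        # Content routing would happen here: dispatch on client_id or a
        # message-type field instead of on the queue the message arrived in.
        print(f"worker {worker_id} handled {len(body)} bytes for client {client_id}")
        inbound.task_done()

workers = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in workers:
    t.start()

# Simulate ~8,000 clients dropping messages onto the single queue.
for client_id in range(8000):
    inbound.put((client_id, b"x" * 1024))

inbound.join()                       # wait until everything is processed
for _ in workers:
    inbound.put(None)
for t in workers:
    t.join()
```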

Way to discover which internet connection type I'm using on the iPhone

I need to know what internet connection is available when my application is running. I checked out the Reachability example from Apple, but it only distinguishes between Wi-Fi and the carrier network. What I need to know is which carrier network technology is in use: UMTS, EDGE or GPRS.
Currently, this information is not available. If you want this feature, file a new bug and mention that this is a duplicate of bug 6014806.
You could take a guess at what kind of network you are on by checking the latency of a round trip to your server. If you are getting figures under 100 ms, you are almost certainly on Wi-Fi.
GPRS and EDGE run at around 600 ms latency. UMTS/HSDPA is 100-200 ms.
Source: my informal testing, and AT&T figures.
Rather than hardcoding different versions of your site for 3G, EDGE, GPRS and Wi-Fi broadband, why not build a framework which detects connection speed and bootstraps your site up to the appropriate level of bandwidth? That way you would get appropriate results on slow 3G / Wi-Fi, and it would naturally scale to the next generation of wireless broadband (e.g. WiMAX and 802.11n) with a minimal amount of effort / disruption.
For example, you could determine different bandwidth "checkpoints" (which may correspond to 3G, EDGE, etc.), then you could do something like transfer some small bit of data or cache a small image (such as an icon) common to all bandwidth levels, benchmark the download speed in the background and set the bandwidth level accordingly.
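The bandwidth check itself can be as simple as timing the download of a small, fixed-size resource and bucketing the result. Here's a Python sketch of the idea; the URL, file size and checkpoint thresholds are invented for illustration:

```python
import time
import urllib.request

# Hypothetical small test resource of known size (e.g. a ~20 KB icon).
TEST_URL = "https://example.com/speedtest-20kb.png"

# Invented checkpoints in KB/s, roughly "GPRS-ish", "EDGE-ish", "3G-ish"; above them, Wi-Fi-ish.
CHECKPOINTS = [(10, "low"), (40, "medium"), (150, "high")]

def estimate_bandwidth_level():
    start = time.monotonic()
    data = urllib.request.urlopen(TEST_URL, timeout=15).read()
    elapsed = time.monotonic() - start
    kb_per_s = (len(data) / 1024) / elapsed
    for threshold, level in CHECKPOINTS:
        if kb_per_s < threshold:
            return level, kb_per_s
    return "very high", kb_per_s

level, speed = estimate_bandwidth_level()
print(f"measured {speed:.0f} KB/s -> serve the '{level}' version of the site")
```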
File only
I like Wedge's answer. I'm not sure that the file wouldn't be cached by ISPs though. You could always keep generating a new file name or choose one big enough that you only test for long enough to get a result.
Simple latency
The idea of using latency is close, but as Shivan mentioned, it's inaccurate. A user connecting from Australia to the UK will get a latency of around 350 ms, versus a local user who could see it as low as 30-40 ms.
Solution: Mean deviation
If you ping your server with 3 packets and then look at the mean deviation (mdev): under 3G it's usually below 50 ms, while with 2G/EDGE it's almost always over 100 ms. I got one outlier at 65 ms to AUS.
My tests found a range of 4-38 ms, with only one exception: a test to Australia from Belgium at 202 ms.
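Here's a quick Python sketch of the mdev idea, using TCP connect times as the round-trip samples (so it works without raw-socket ICMP privileges). The host is a stand-in for your own server, and the 50 ms / 100 ms thresholds are just the ones from my informal testing above:

```python
import socket
import statistics
import time

HOST, PORT = "example.com", 443   # stand-in; your own server is the better target

def rtt_samples(count=3):
    samples = []
    for _ in range(count):
        start = time.monotonic()
        with socket.create_connection((HOST, PORT), timeout=5):
            pass                                             # connect round trip only
        samples.append((time.monotonic() - start) * 1000)    # milliseconds
    return samples

samples = rtt_samples()
# Mean deviation: average absolute distance from the mean, like ping's mdev.
mean = statistics.mean(samples)
mdev = statistics.mean(abs(s - mean) for s in samples)

if mdev < 50:
    guess = "3G or better (or Wi-Fi)"
elif mdev > 100:
    guess = "2G/EDGE"
else:
    guess = "unclear, take more samples"
print(f"samples={samples}, mdev={mdev:.0f} ms -> {guess}")
```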
Hope that's useful to someone.