I have looked at several variations on the Reachability example, such as the Donoho change and Erica Sadun's UIApplication extension, but none of these allow you to determine the quality of your 3G connection.
Is there a programmatic way to see signal strength and link quality?
I think you need to decide exactly what you mean by quality and also understand that it constantly changes.
The only really accurate measure is, unfortunately, historical: you can run a download or upload test and measure the time it took, along with packet loss, jitter, delay, etc., and this will tell you what the quality was when your test was run.
I say it is historical because it does not guarantee the connection will stay that way for any given time. For example, you may move between cells (or rooms, in the case of WiFi), or several other users in your area may start utilizing the bandwidth heavily.
It may be that a simple download or upload test is sufficient for your purposes, i.e. (I am guessing...) to decide whether to run your application in a certain way. You can then build further checks into the application itself to see if you need to adapt to a change in the network (e.g. you could trigger on the time it takes a particular application message transaction to complete).
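To make that last point concrete, here is a minimal sketch (in Swift, against a hypothetical test endpoint of your own) that times a small round-trip request and uses the elapsed time as a crude quality signal:
import Foundation

// Times a small HEAD request and reports the round-trip time.
// The test URL is a placeholder; substitute an endpoint you control.
func measureRoundTrip(to testURL: URL, completion: @escaping (TimeInterval?) -> Void) {
    var request = URLRequest(url: testURL)
    request.httpMethod = "HEAD"          // keep the payload tiny
    let start = Date()
    URLSession.shared.dataTask(with: request) { _, response, error in
        guard error == nil, response != nil else {
            completion(nil)              // treat failures as "quality unknown"
            return
        }
        completion(Date().timeIntervalSince(start))
    }.resume()
}

// Usage: decide whether the link is "good enough" right now.
measureRoundTrip(to: URL(string: "https://example.com/ping")!) { rtt in
    if let rtt = rtt, rtt < 0.5 {
        // proceed with the network-heavy behaviour
    } else {
        // degrade gracefully or retry later
    }
}
Keep in mind this only samples one moment in time, which is exactly the historical caveat above.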
I would say that the following are valid use cases for Early Media when used for IVRs:
Filtering
Disconnecting incoming calls based on certain criteria (such as callers from a specific area code), or playing a message before hanging up on after-hours callers.
Rate limiting
For example, limiting the number of simultaneous callers, where the excess calls are either disconnected or placed in a queue. (I'm not sure the queue example is possible without answering the call, though.)
Are there more?
Some links that were helpful:
https://www.dialogic.com/webhelp/csp1010/8.4.1_ipn3/sip_software_chap_-_early_media.htm
https://wiki.asterisk.org/wiki/display/AST/Early+Media+and+the+Progress+Application
https://freeswitch.org/confluence/display/FREESWITCH/Early+Media
3) Legal: in some countries it is illegal for a call center using paid lines (the caller pays per minute) to accept a call without sending it straight to an agent. So you can't accept the call and then give the caller waiting music for 15 minutes (making them pay for the privilege of waiting as well).
Result: you don't accept the call. However, this creates a new problem: if the caller only hears the ringback tone for those 15 minutes, they will assume no one is going to answer and hang up.
Using early media, you can give them the traditional "your call is very important to us, please hold on" type of experience without accepting the call and without starting to charge money. Of course, this also depends somewhat on how much the provider is willing to tolerate, as it also affects their income (depending on their own business model).
4) Comfort: you may not be aware of this, but the sound you hear as a caller when the other side is ringing (the ringback tone) is not universally the same throughout the world. A company with a global number may wish to use early media to provide a ringback tone more familiar to you, depending on where you are calling from. It was always a bit of a niche concept, but some target audiences are statistically more likely to hang up if they hear an unfamiliar sound. 15-20 years ago this might have been a concern to some, but in the era of smartphones and internet calls, I doubt anyone really worries about it anymore.
We're attempting to track a streaming video with SiteCatalyst. The issue is that this video obviously has no end, so the s.Media module can't know how to set the seconds or milestone segment views. This results in no tracking calls except for the starting one. Could a possible solution be the use of s.Media.monitor custom functions? There is documentation explaining how to use them together with the basic Media module settings. Maybe a timed deployment of the "sendRequest()" method could help? I'll use this occasion to ask for a brief how-to example of the Media.monitor methods, because I've just been using the basic settings until now, as below:
s.loadModule("Media");
s.Media.autoTrack = false;
s.Media.trackMilestones = "25,50";
s.Media.segmentByMilestones = true;
Thanks a lot.
Yeah... I really, really dislike the Media module. Video tracking is getting more and more popular with clients, so it has become the biggest thorn in my side, because the nature of video over the internet is a big mess with all kinds of moving parts internally, which makes it extremely difficult to get truly accurate tracking beyond a basic "start" and "stop". (Actually, I take that back... I think mobile/SDK tracking is quickly becoming the thing I shake my angry fist at the most, but that's a different post!)
I think Adobe has made some heroic efforts to automate video tracking, and it more or less works okay if you just have a regular (non-Flash) object or HTML5 tag embedded on the page. But in practice, MOST of the time, sites implement their videos through 3rd-party scripts (e.g. JW Player, Vimeo, the YouTube API), and the Media module automation basically goes down the drain on that count.
I understand that it needs to know how long a video is in order to know when to auto-pop the events, but I swear, 99% of the time in practice, the way the Media module expects things to pop in certain orders just doesn't align with how videos work in the real world. Even if you attempt to do it the "manual" way, more often than not it's still buggy, e.g. autoplay and buffering ALWAYS seem to screw up the open+play sequence that MUST happen in that order.
Basically, the Media module desperately needs to be rewritten to better handle streaming videos, and "manual" use in general. Anyways...
Two things I have done in your situation. Overall, neither of these options is a perfect 1:1 with normal videos that have a duration, but then streaming videos aren't really the same, so it doesn't really make sense to treat them the same.
Option #1: Use an estimated duration for your streaming video. So you said it yourself: your streaming videos have no end. Well as I mentioned, you can't calculate percent viewed unless you have a duration, pretty basic math. So, estimate a duration.
I have clients that have streaming webinars or whatever and it's true that there's technically no duration according to the player, but in reality they don't really conduct that webinar 24/7 forever. In reality it's for a set amount of time like 30 minutes or an hour or something. So, just specify the duration as that.
Yes, this will require extra custom work on your end to store/associate an estimated duration. And yes, this does have the potential for being misleading (e.g. if a webinar ends early or runs late). This option is generally good for sites that have set windows for the stream to actually be active.
Option #2: Ditch the notion of % viewed and record time consumed instead. The overall point of the milestones is to know how much of a video was actually watched, yes? Well, who said it has to be measured by % viewed?
How about, instead, you just record n seconds consumed every n seconds? You can do this with an incrementor eVar and/or a counter event. (Part of the normal video tracking actually does include a counter event, "Video Time", or a.media.timePlayed.)
So you'd basically just pop the events/props/eVars yourself and ignore the milestone/segment reports.
Note: this option only really works if you are using the older-style video tracking that has events/props/eVars assigned to it. If you are using the newer-style video tracking that does not use events/props/eVars... well, AA does not currently offer an official way to manually pop that stuff directly. It is surely possible to do so unofficially, but I have not yet reverse-engineered the latest Media module to figure out how. So, in that case your only option is #1.
I'm developing an app that uses data synchronization. Sending images (even resized) takes time if it's done over a slow internet connection. I'm thinking it would be best to first check the internet speed and send data only if the speed is acceptable. There's Apple's Reachability class, but it only has a method for checking whether WiFi is on. I guess it would be enough for me to ping a host and grab the delay information out of the ping command's result. I would use that delay info to decide whether it's worth sending data now or better to wait.
Reachability or ping won't tell you how fast or slowly the file will be transmitted. Transfer time is a function of ping_time + (file_size / bandwidth), and for any large file the ping_time is much smaller than file_size / bandwidth. For example, a 2 MB image over a 1 Mbit/s link takes about 16 seconds of transfer time, so even a 300 ms ping barely registers.
The easiest way to measure this is for the app to download and upload a not-too-small, not-too-large file and decide whether the upload and download speeds are in fact "fast enough".
Doing this is fairly involved; however, Apple has a complete working example program here:
https://developer.apple.com/library/mac/#samplecode/SimplePing/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000716-Intro-DontLinkElementID_2
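As a rough sketch of the download half of that approach (in Swift, with a hypothetical test-file URL and an arbitrary threshold; this is not Apple's sample code), the app can time a download of a file of known size and derive an approximate throughput:
import Foundation

// Downloads a known test file and estimates throughput in bytes per second.
// The test-file URL is a placeholder; host a file of known, moderate size yourself.
func estimateDownloadSpeed(from testFileURL: URL,
                           completion: @escaping (Double?) -> Void) {
    let start = Date()
    URLSession.shared.dataTask(with: testFileURL) { data, _, error in
        guard let data = data, error == nil else {
            completion(nil)                               // network error: speed unknown
            return
        }
        let elapsed = Date().timeIntervalSince(start)     // includes latency + transfer time
        completion(Double(data.count) / elapsed)          // bytes per second
    }.resume()
}

// Usage: only start the image sync if the measured speed clears a threshold.
estimateDownloadSpeed(from: URL(string: "https://example.com/test-256kb.bin")!) { bytesPerSecond in
    if let speed = bytesPerSecond, speed > 50_000 {       // ~50 KB/s, arbitrary cutoff
        // proceed with the upload/sync
    } else {
        // defer the sync until conditions improve
    }
}
An upload test works the same way: POST a fixed-size payload and time it against the byte count.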
Right now I'm trying to make an MMORPG for the iPhone. I have it set up so that the iPhone requests the player positions several times a second. It does this by having the client send a request, using an asynchronous NSURLConnection, to a PHP page that loads the positions from a MySQL database and returns them as JSON. However, it takes about 0.5 seconds from when the positions are requested to when they actually get loaded. This seems really high; are there any obvious things that could cause this?
Also, this causes the player movement on the client to be really choppy too. Are there any algorithms or ways to reduce the choppiness of the player movement?
Start measuring how long the database query takes when you run it outside your iPhone.
Then measure how long it takes when you send the same HTTP request from something other than your iPhone (e.g. a 10-15 line C# program is enough to figure this out).
If none of the above show any sort of significant latency, the improvements need to be done on the iPhone side. Some things to look out for:
GPRS/3G has rather high latency
GPRS/3G has rather high bit error rates, meaning there are going to be quite a few dropped packets now and then, which will cause TCP to retransmit and you'll experience even higher latency
HTTP has a lot of overhead.
JSON adds a lot of overhead.
Maybe you'll need to come up with a compact binary format for your messages (a rough sketch follows below) and drop HTTP in favor of a custom protocol, maybe even switching to UDP
The above points generally don't apply, but they do if you need to provide a smooth experience over high-latency, low-bandwidth, flaky connections.
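To illustrate the compact-binary-format idea from the list above, here is a minimal sketch (in Swift; the field layout, names, and sizes are arbitrary assumptions, not a defined protocol) that packs a single position update into 12 bytes instead of an HTTP/JSON payload:
import Foundation

// A position update packed into 12 bytes: player id (4) + x (4) + y (4), big-endian.
struct PositionUpdate {
    var playerID: UInt32
    var x: Float
    var y: Float

    // Serialize into a fixed-size byte buffer.
    func encode() -> Data {
        var data = Data()
        withUnsafeBytes(of: playerID.bigEndian) { data.append(contentsOf: $0) }
        withUnsafeBytes(of: x.bitPattern.bigEndian) { data.append(contentsOf: $0) }
        withUnsafeBytes(of: y.bitPattern.bigEndian) { data.append(contentsOf: $0) }
        return data
    }

    // Parse from a buffer; returns nil if it is too short.
    static func decode(_ data: Data) -> PositionUpdate? {
        guard data.count >= 12 else { return nil }
        func readUInt32(at offset: Int) -> UInt32 {
            // Assemble four bytes, most significant first (big-endian).
            data.subdata(in: offset..<offset + 4).reduce(0) { ($0 << 8) | UInt32($1) }
        }
        return PositionUpdate(playerID: readUInt32(at: 0),
                              x: Float(bitPattern: readUInt32(at: 4)),
                              y: Float(bitPattern: readUInt32(at: 8)))
    }
}

// 12 bytes on the wire versus a few hundred bytes of HTTP headers plus JSON.
let update = PositionUpdate(playerID: 42, x: 10.5, y: -3.25)
let wireBytes = update.encode()
let roundTripped = PositionUpdate.decode(wireBytes)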
At the very least, make sure you're not setting up a new TCP connection for every request. You need to use HTTP keep-alive.
I don't have any specific info on player movement algorithms, but what is often used is some sort of movement prediction.
You know the direction the player is moving, and you can derive the speed if it's not always constant. This means you can interpolate over time and guess the new position, adjust the on-screen position while you're querying for the actual position, and adjust back to the actual position when you get the query response.
The trick is to always interpolate over time within certain boundaries. If your prediction was a bit off compared to what the query returned, don't immediately snap the position back to the real position. Interpolate between the current position and the desired position over a handful of frames.
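Here is a minimal sketch of that idea (in Swift; the types, frame count, and function names are illustrative assumptions, not a specific engine API):
import Foundation

struct Vector2 {
    var x: Float
    var y: Float
}

// Linear interpolation between two points; t runs from 0 to 1.
func lerp(_ a: Vector2, _ b: Vector2, _ t: Float) -> Vector2 {
    Vector2(x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t)
}

// Dead-reckon the position while a query is in flight:
// keep moving in the last known direction at the last known speed.
func predictPosition(from lastKnown: Vector2, velocity: Vector2, elapsed: Float) -> Vector2 {
    Vector2(x: lastKnown.x + velocity.x * elapsed,
            y: lastKnown.y + velocity.y * elapsed)
}

// When the server response arrives, don't snap: ease the displayed position
// toward the authoritative one over a handful of frames.
func correctedPosition(displayed: Vector2, serverPosition: Vector2,
                       frameIndex: Int, correctionFrames: Int = 8) -> Vector2 {
    let t = min(Float(frameIndex) / Float(correctionFrames), 1.0)
    return lerp(displayed, serverPosition, t)
}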
On the server side you should be using some system that keeps running and keeps the database connection open all the time. Preferably it would also cache things instead of requesting them from the database every time.
Also, do not make a new HTTP request for every update. It would be best if you didn't need to use HTTP at all, as it really isn't suitable for real-time communication.
GPRS typically has a 600 ms ping time, 3G has 300 ms, and HSPA has 100 ms. See which mode is being used. Note that some devices (I don't know about the iPhone) drop from HSPA to regular 3G for power-saving reasons whenever there is not enough traffic to justify the faster mode.
As for position, a rather common practice is to apply a linear prediction, i.e. make the character continue movement in current direction, at the current speed, even when no data from server is available yet.
Most importantly: benchmark/profile to see where the latencies are. Is it your server, the network connection, or the application?
Loading the player positions that fast has downsides.
It hammers your server.
3G isn't really meant to support low-latency applications
So I don't see an MMORPG working without some necessary shortcuts at this time, e.g. extrapolating paths based on velocity and position. Loading positions will not work as fast as you want, especially with a server based on PHP of all things.
Either way, when developing for a mobile platform you're going to have to make sacrifices in terms of features versus a fully-featured desktop implementation.
I might also reimplement some of the more critical stuff if not the whole server in a faster language, e.g. C++.
I am attempting to stream video using Apple's HTTP streaming technology. I am beginning to suspect that either the player on the iPhone or the Apple tools used to segment the videos is buggy.
http://developer.apple.com/iphone/library/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html
I am getting really terrible behavior. The app never seems to do a good job of choosing which quality stream to use. It always starts at the lowest quality and often will jump to the highest very suddenly and not be able to keep up. I have tried various ways of altering the bandwidth settings to test it.
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=5000
3/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=10000
4/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=459319
5/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=90268800
I have used very large and very small settings to make certain streams the obvious choice, but it doesn't matter. Obviously I have also used the default values set by Apple's variantplaylistcreator tool. It always starts at the lowest quality and then jumps to seemingly random other qualities.
Anyone know what's going on with this?
Have you tried the sample reference streams provided at the bottom of the page here? Apple tests against these, so if it works there, you know it's on your end.