iPhone iOS: how to implement a sequence of background network send operations in the fastest way possible?

I'm trying to stream data to a server at regular intervals, in a way that is fast and does not block the UI. The UI is also pretty busy trying to display the data. Currently my implementation uses an NSTimer that fires every 50 ms, picks a network packet out of a circular array, and sends it over:
// This is the timer
networkWriteTimer = [NSTimer scheduledTimerWithTimeInterval:0.05 target:self selector:@selector(sendActivity:) userInfo:nil repeats:YES];

- (void)sendActivityInBackground:(id)sender
{
    [[AppConfig getInstance].activeRemoteRoom.connection sendNetworkPacket:[circularArray objectAtIndex:arrayIndex % arrayCapacity]];
}

- (void)sendActivity:(NSTimer *)timer
{
    // Send it out
    [self performSelectorInBackground:@selector(sendActivityInBackground:) withObject:nil];
}
I'm not satisfied with the performance of this method. Time profiling has revealed that there's overhead associated with performing background selectors, and the performance can get quite choppy.
I'm thinking of additional ways to improve performance:
Improve the performance of the current timer-based code
Try Grand Central Dispatch
Implement an NSOperationQueue with a single operation
Use a dedicated thread that wakes up, checks for an update, and sends it over if needed
Ideally, I would send data at even shorter intervals (10 ms, or even one send per activity update). This poses the question: what is the fastest way to implement a sequence of background send requests where order matters? I want to make sure that one packet gets sent before the next one goes out.

Try a recurring dispatch timer (part of GCD):
self.synchronizerQueue = dispatch_queue_create("synchronizer queue", DISPATCH_QUEUE_SERIAL);
self.synchronizeTimer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, self.synchronizerQueue);
dispatch_source_set_timer(self.synchronizeTimer, DISPATCH_TIME_NOW, NSEC_PER_MSEC * 10, NSEC_PER_MSEC * 1);
dispatch_source_set_event_handler(self.synchronizeTimer, ^{
    // Do your processing here
});
dispatch_resume(self.synchronizeTimer);
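Because the timer fires on a serial queue, packets sent from the event handler cannot overlap or reorder, which answers the ordering requirement. A minimal sketch of an event handler that drains the circular array, reusing sendNetworkPacket:, circularArray, arrayIndex, and arrayCapacity from the question (so treat the wiring as an assumption):
dispatch_source_set_event_handler(self.synchronizeTimer, ^{
    // Runs on the serial synchronizerQueue: each send completes before the next begins.
    id packet = [circularArray objectAtIndex:arrayIndex % arrayCapacity];
    [[AppConfig getInstance].activeRemoteRoom.connection sendNetworkPacket:packet];
});
When you are done streaming, call dispatch_source_cancel on the timer before releasing it.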

Related

Objective-c/iOS: setting status text in async function is slow

In my app I'm doing some communication with a remote server, and since this might be slow I thought it would be a good idea to run that code asynchronously. I have my communication code in a block that I pass to dispatch_async. This code does the communication and, when it's done, sets the text of a label. This last part is the problem: the text is set, but only after a delay of a few seconds. This is my code:
- (void)doNetworkingTask {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Slow network task goes here.
        // Slow network task done, notify the user.
        [self.myLabel setText:@"task done."];
        NSLog(@"task done.");
    });
}
What happens here is that my network task completes, the NSLog text is logged, and after a couple of seconds the label's text is updated. My questions are 1) why does the label text not update instantly? and 2) what is a proper way of doing what I want to do? (Do the slow network task without blocking anything else, and update the user through a text label once it's done.)
UI updates must be on the main thread. Update your code to something like this:
- (void)doNetworkingTask {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Slow network task goes here.
        // Slow network task done, notify the user on the main thread.
        dispatch_async(dispatch_get_main_queue(), ^{
            [self.myLabel setText:@"task done."];
        });
        NSLog(@"task done.");
    });
}

How to update parsed data after some period

I have this XML: http://weather.yahooapis.com/forecastrss?w=20070458&u=c and I want my data to be updated whenever the XML is updated.
Thanks.
As you can see, this XML has a ttl node, which says that the Time To Live is 60 seconds. So you can check this URL periodically (once a minute, according to the TTL value) and stay up to date.
Read this tutorial on XML parsing and the NSXMLParser Class Reference. I think it will be helpful to you.
You can poll it:
static void timerHandler(CFRunLoopTimerRef timer, void *info)
{
    // Request the XML here and compare it with the previous one
}

- (void)weatherMonitor
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    CFRunLoopTimerContext context = {0, self, NULL, NULL, NULL};
    CFRunLoopTimerRef timer = CFRunLoopTimerCreate(kCFAllocatorDefault, CFAbsoluteTimeGetCurrent(), 1, 0, 0, timerHandler, &context); // use your own time interval
    CFRunLoopAddTimer(CFRunLoopGetCurrent(), timer, kCFRunLoopDefaultMode);
    CFRelease(timer);
    CFRunLoopRun();
    [pool drain];
}
Run weatherMonitor in a background thread.
You have two options:
Implement Easy APNS, which will notify your app about any changes. You may deliver the XML data directly along with the notification message, or launch a request to pull the XML as soon as you get notified.
Set a timer in your app that launches requests to check for XML updates every 1/10/60 minutes, whatever (a sketch follows below).
Both have pros and cons, depending on your requirements and abilities. One thing is clear: you CANNOT receive data from outside without sending requests, short of implementing Push Notifications. Implementing Easy APNS will provide your app with data even if the application is not running. On the other hand, the timer will be the fastest/easiest way. You decide. Cheers!
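A minimal sketch of the timer option, assuming a plain NSTimer plus a background fetch (the pollTimer property and the checkForUpdates: selector are illustrative names, not from the original answer):
// Schedule a poll once a minute, matching the feed's 60-second TTL.
self.pollTimer = [NSTimer scheduledTimerWithTimeInterval:60.0 target:self selector:@selector(checkForUpdates:) userInfo:nil repeats:YES];

- (void)checkForUpdates:(NSTimer *)timer
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Fetch the feed off the main thread.
        NSURL *url = [NSURL URLWithString:@"http://weather.yahooapis.com/forecastrss?w=20070458&u=c"];
        NSData *xmlData = [NSData dataWithContentsOfURL:url];
        // Compare xmlData with the previously fetched data and re-parse if it changed.
    });
}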

When are a method's GUI operations actually carried out?

I am working on a web-services data processing app and I am trying to make the app run as quickly as possible. When a certain 3 finger pan gesture is performed, I call a method that sends updated information off to the server to get a new batch of images to update the existing ones with.
So let's say there are 15 images in an array. I filter through them with a two-finger gesture, and then if I want to change something about them I can do the three-finger gesture and get that same set back, just tweaked a bit (contrast/brightness, etc.).
What I want, though, is to update the imageView displaying the images as soon as the first image has been retrieved, to give the user a feel for what the rest of the series will look like. But no matter what I try, and no matter how many different threads I try to implement, I can't get the imageView to update before the entire download is complete. Once the batch download is done (which is handled on a separate thread) the imageView updates with the new images and everything is great.
The first step in the process is this:
if (UIGestureRecognizerStateEnded == [recognize state]) {
    [self preDownload:windowCounter Level:levelCounter ForPane:tagNumber]; // this method gets the first image and tries to set it on the imageView
    [self downloadAllImagesWithWL:windowCounter Level:levelCounter ForPane:tagNumber]; // and this method goes and gets all the rest of the images
}
This is my preDownload method:
- (void)preDownload:(int)window Level:(int)level ForPane:(int)pane
{
    int guidIndex = [[globalGuids objectAtIndex:pane] intValue];
    UIImage *img = [DATA_CONNECTION getImageWithSeriesGUID:[guids objectAtIndex:guidIndex] ImageID:counter Window:window Level:level];
    if (pane == 0) {
        NSLog(@"0");
        [imageView3 setImage:img];
    } else if (pane == 1) {
        NSLog(@"1");
        [imageView31 setImage:img];
    } else if (pane == 2) {
        NSLog(@"2");
        [imageView32 setImage:img];
    } else if (pane == 3) {
        NSLog(@"3");
        [imageView33 setImage:img];
    }
}
By separating this out into two different methods (no threads are involved at this point; these methods are called before all that), I was thinking that after the preDownload method completed the imageView would update, and control would then continue down into the downloadAllImagesWithWL method, but that doesn't appear to be the case.
Am I missing something simple here? What can I do to update my GUI elements before that second method is through running?
You are right. However, the view won't refresh until your code reaches the run loop. You can do two things:
Make your downloadAllImagesWithWL method asynchronous, so it returns right after you call it; your main thread reaches the run loop, the GUI updates, and the download method tells your logic through a callback when it's done (a sketch follows below).
OR
A simpler, hackier (and bad) solution would be to run the run loop for some time before you call your download method. Something like this: [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]]; It will run the run loop for 0.1 seconds.
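A minimal sketch of the asynchronous option using GCD; the method signature and imageView update are taken from the question, while the completion flow is an assumption:
- (void)downloadAllImagesWithWL:(int)window Level:(int)level ForPane:(int)pane
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // The slow batch download runs off the main thread, so the run loop
        // is reached immediately and the first image can draw.
        // ... download the remaining images here ...
        dispatch_async(dispatch_get_main_queue(), ^{
            // Callback on the main thread: update the image views here.
        });
    });
}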
When the image is set, the image view will mark itself as needing display. The actual display won't occur until the beginning of the next run loop. In OS X, you can use -display to draw the view immediately, but I don't think Apple created a public method to do this on iOS. However, if the next method simply creates the background thread, then it will return quickly and the display update will probably occur before the thread finishes.

Basic iPhone timer example

Okay, I have searched online and even looked in a couple of books for the answer, because I can't understand the Apple documentation for NSTimer. I am trying to implement two timers on the same view, each with three buttons (START, STOP, RESET).
The first timer counts down from 2 minutes and then beeps.
The second timer counts up from 00:00 indefinitely.
I am assuming that all of the code will be written in the methods behind the three different buttons, but I am completely lost trying to read the Apple documentation. Any help would be greatly appreciated.
Basically what you want is an event that fires every 1 second, or possibly at 1/10th second intervals, and you'll update your UI when the timer ticks.
The following will create a timer, and add it to your run loop. Save the timer somewhere so you can kill it when needed.
- (NSTimer *)createTimer {
    // Create a timer on the run loop
    return [NSTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(timerTicked:) userInfo:nil repeats:YES];
}
Now write a handler for the timer tick:
- (void)timerTicked:(NSTimer *)timer {
    // Decrement timer 1 … this is your UI, tick down and redraw
    [myStopwatch tickDown];
    [myStopwatch.view setNeedsDisplay];
    // Increment timer 2 … bump time and redraw in UI
    …
}
If the user hits a button, you can reset the counts, or start or stop the ticking. To end a timer, send an invalidate message:
- (void)actionStop:(id)sender {
    // Stop the timer
    [myTimer invalidate];
}
Hope this helps you out.
I would follow Jonathan's approach, except you should use an NSDate as your reference for updating the UI. That is, instead of advancing the count on each NSTimer fire, take the difference between the current NSDate and your reference date whenever the timer fires.
The reason for this is that the NSTimer has a resolution of 50-100 ms which means your timer can become pretty inaccurate after a few minutes if there's a lot going on to slow down the device. Using NSDate as a reference point will ensure that the only lag between the actual time and the time displayed is in the calculation of that difference and the rendering of the display.
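A minimal sketch of that idea; the startDate property and elapsedLabel are illustrative assumptions:
// Store the reference date when the user hits START.
self.startDate = [NSDate date];

- (void)timerTicked:(NSTimer *)timer {
    // Elapsed time is computed from the wall clock, not from tick counts,
    // so late timer fires do not accumulate into drift.
    NSTimeInterval elapsed = [[NSDate date] timeIntervalSinceDate:self.startDate];
    int minutes = (int)elapsed / 60;
    int seconds = (int)elapsed % 60;
    self.elapsedLabel.text = [NSString stringWithFormat:@"%02d:%02d", minutes, seconds];
}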

How to program a real-time accurate audio sequencer on the iPhone?

I want to program a simple audio sequencer on the iPhone, but I can't get accurate timing. Over the last few days I have tried all the audio techniques available on the iPhone, from AudioServicesPlaySystemSound and AVAudioPlayer to OpenAL and Audio Queues.
In my last attempt I tried the CocosDenshion sound engine, which uses OpenAL and allows loading sounds into multiple buffers and playing them whenever needed. Here is the basic code:
init:
int channelGroups[1];
channelGroups[0] = 8;
soundEngine = [[CDSoundEngine alloc] init:channelGroups channelGroupTotal:1];

int i = 0;
for (NSString *soundName in [NSArray arrayWithObjects:@"base1", @"snare1", @"hihat1", @"dit", @"snare", nil])
{
    [soundEngine loadBuffer:i fileName:soundName fileType:@"wav"];
    i++;
}

[NSTimer scheduledTimerWithTimeInterval:0.14 target:self selector:@selector(drumLoop:) userInfo:nil repeats:YES];
In the initialisation I create the sound engine, load some sounds to different buffers and then establish the sequencer loop with NSTimer.
audio loop:
- (void)drumLoop:(NSTimer *)timer
{
    for (int track = 0; track < 4; track++)
    {
        unsigned char note = pattern[track][step];
        if (note)
            [soundEngine playSound:note - 1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
    }
    if (++step >= 16)
        step = 0;
}
That's it, and it works as it should, BUT the timing is shaky and unstable. As soon as something else happens (e.g. drawing in a view) it goes out of sync.
As I understand the sound engine and OpenAL, the buffers are loaded (in the init code) and are then ready to start immediately with alSourcePlay(source) - so the problem may be with NSTimer?
Now there are dozens of sound sequencer apps in the App Store, and they have accurate timing. E.g. "idrum" keeps a perfectly stable beat even at 180 bpm while zooming and drawing are going on. So there must be a solution.
Does anybody have any idea?
Thanks for any help in advance!
Best regards,
Walchy
Thanks for your answer. It brought me a step further, but unfortunately not to the goal. Here is what I did:
nextBeat = [[NSDate alloc] initWithTimeIntervalSinceNow:0.1];
[NSThread detachNewThreadSelector:@selector(drumLoop:) toTarget:self withObject:nil];
In the initialisation I store the time for the next beat and create a new thread.
- (void)drumLoop:(id)info
{
    [NSThread setThreadPriority:1.0];
    while (1)
    {
        for (int track = 0; track < 4; track++)
        {
            unsigned char note = pattern[track][step];
            if (note)
                [soundEngine playSound:note - 1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
        }
        if (++step >= 16)
            step = 0;
        NSDate *newNextBeat = [[NSDate alloc] initWithTimeInterval:0.1 sinceDate:nextBeat];
        [nextBeat release];
        nextBeat = newNextBeat;
        [NSThread sleepUntilDate:nextBeat];
    }
}
In the sequencer loop I set the thread priority as high as possible and go into an infinite loop. After playing the sounds I calculate the absolute time of the next beat and send the thread to sleep until then.
Again this works, and it works more stably than my attempts without NSThread, but it still gets shaky if something else happens, especially GUI stuff.
Is there a way to get real-time responses with NSThread on the iPhone?
Best regards,
Walchy
NSTimer has absolutely no guarantees on when it fires. It schedules itself for a fire time on the runloop, and when the runloop gets around to timers, it sees if any of the timers are past-due. If so, it runs their selectors. Excellent for a wide variety of tasks; useless for this one.
Step one here is that you need to move audio processing to its own thread and get off the UI thread. For timing, you can build your own timing engine using normal C approaches, but I'd start by looking at CAAnimation and especially CAMediaTiming.
Keep in mind that there are many things in Cocoa that are designed only to run on the main thread. Don't, for instance, do any UI work on a background thread. In general, read the docs carefully to see what they say about thread-safety. But generally, if there isn't a lot of communication between the threads (which there shouldn't be in most cases IMO), threads are pretty easy in Cocoa. Look at NSThread.
I'm doing something similar using RemoteIO output. I don't rely on NSTimer; I use the timestamp provided in the render callback to calculate all of my timing. I don't know exactly how accurate the iPhone's sample rate is, but I'm sure it's pretty close to 44100 Hz, so I just calculate when I should load the next beat based on the current sample number.
An example project that uses RemoteIO can be found here; have a look at the render callback's inTimeStamp argument.
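A minimal sketch of sample-counting inside a RemoteIO render callback; the Sequencer struct and its fields are illustrative assumptions, not this answer's actual project, and it handles at most one beat per buffer for simplicity:
#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    Float64 sampleRate;      // e.g. 44100.0
    UInt64  samplesPlayed;   // running sample counter
    UInt64  nextBeatSample;  // sample index at which the next beat starts
    UInt64  samplesPerBeat;  // e.g. sampleRate * 60 / bpm
} Sequencer;

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    Sequencer *seq = (Sequencer *)inRefCon;
    // If the next beat falls inside this buffer, start mixing it at the
    // exact sample offset; this gives sub-millisecond relative timing.
    if (seq->nextBeatSample < seq->samplesPlayed + inNumberFrames) {
        UInt32 offset = (UInt32)(seq->nextBeatSample - seq->samplesPlayed);
        // ... mix the beat's samples into ioData starting at `offset` frames ...
        seq->nextBeatSample += seq->samplesPerBeat;
    }
    seq->samplesPlayed += inNumberFrames;
    return noErr;
}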
EDIT: An example of this approach working (and on the App Store) can be found here.
I opted to use a RemoteIO AudioUnit and a background thread that fills swing buffers (one buffer for reading, one for writing, which then swap) using the AudioFileServices API. The buffers are then processed and mixed in the AudioUnit thread. The AudioUnit thread signals the background thread when it should start loading the next swing buffer. All the processing was in C, using the POSIX thread API. All the UI stuff was in ObjC.
IMO, the AudioUnit/AudioFileServices approach affords the greatest degree of flexibility and control.
Cheers,
Ben
You've had a few good answers here, but I thought I'd offer some code for a solution that worked for me. When I began researching this, I looked at how run loops in games work and found a nice solution, using mach_absolute_time, that has been very performant for me.
You can read a bit about what it does here, but in short it returns time with nanosecond precision. However, the number it returns isn't quite time; it varies with the CPU you have, so you first have to create a mach_timebase_info_data_t struct and then use it to normalize the time.
#include <mach/mach_time.h>

// Gives a numerator and denominator that you can apply to mach_absolute_time
// to get the actual nanoseconds
mach_timebase_info_data_t info;
mach_timebase_info(&info);

uint64_t currentTime = mach_absolute_time();
currentTime *= info.numer;
currentTime /= info.denom;
And if you wanted it to tick every 16th note, you could do something like this:
uint64_t interval = (1000 * 1000 * 1000) / 16;
uint64_t nextTime = currentTime + interval;
At this point, currentTime contains some number of nanoseconds, and you want a tick every time interval nanoseconds have passed, which we track in nextTime. You can then set up a while loop, something like this:
while (_running) {
    if (currentTime >= nextTime) {
        // Do some work, play the sound files or whatever you like
        nextTime += interval;
    }
    currentTime = mach_absolute_time();
    currentTime *= info.numer;
    currentTime /= info.denom;
}
The mach_timebase_info stuff is a bit confusing, but once you get it in there, it works very well. It's been extremely performant for my apps. It's also worth noting that you won't want to run this on the main thread, so dishing it off to its own thread is wise. You could put all the above code in its own method called run, and start it with something like:
[NSThread detachNewThreadSelector:@selector(run) toTarget:self withObject:nil];
All the code you see here is a simplification of a project I open-sourced, you can see it and run it yourself here, if that's of any help. Cheers.
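One hedged refinement, not part of the original project: if burning a core in the spin loop is a concern, mach_wait_until can sleep the thread until an absolute deadline. It takes the deadline in raw mach ticks, so the normalization above has to be inverted:
#include <mach/mach_time.h>

// Convert the nanosecond deadline back into mach ticks and sleep until it.
uint64_t deadlineTicks = nextTime * info.denom / info.numer;
mach_wait_until(deadlineTicks);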
Really, the most precise way to approach timing is to count audio samples and do whatever you need to do when a certain number of samples has passed. Your output sample rate is the basis for everything sound-related anyway, so it is the master clock.
You don't have to check on each sample, doing this every couple of msec will suffice.
One additional thing that may improve real-time responsiveness is setting the Audio Session's kAudioSessionProperty_PreferredHardwareIOBufferDuration to a few milliseconds (such as 0.005 seconds) before making your Audio Session active. This will cause RemoteIO to request shorter callback buffers more often (on a real-time thread). Don't take any significant time in these real-time audio callbacks, or you will kill the audio thread and all audio for your app.
Just counting shorter RemoteIO callback buffers is on the order of 10X more accurate and lower latency than using an NSTimer. And counting samples within an audio callback buffer for positioning the start of your sound mix will give you sub-millisecond relative timing.
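A minimal sketch of setting that property with the (old) C Audio Session API this answer refers to; error handling is elided and the 5 ms value is the example from the text:
#include <AudioToolbox/AudioToolbox.h>

// Ask for ~5 ms hardware I/O buffers before activating the session.
Float32 preferredBufferDuration = 0.005f;
AudioSessionInitialize(NULL, NULL, NULL, NULL);
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                        sizeof(preferredBufferDuration),
                        &preferredBufferDuration);
AudioSessionSetActive(true);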
Measuring the time elapsed for the "Do some work" part of the loop and subtracting this duration from the interval greatly improves accuracy:
CFAbsoluteTime startTime, endTime, diffTime, wakeTime;
CFTimeInterval timerInterval;

while (loop == YES)
{
    timerInterval = adjustedTimerInterval;
    startTime = CFAbsoluteTimeGetCurrent();
    if (delegate != nil)
    {
        [delegate timerFired]; // do some work
    }
    endTime = CFAbsoluteTimeGetCurrent();
    diffTime = endTime - startTime; // measure how long the call took; this has to be subtracted from the interval
    wakeTime = CFAbsoluteTimeGetCurrent() + timerInterval - diffTime;
    while (CFAbsoluteTimeGetCurrent() < wakeTime)
    {
        // busy-wait until the waiting interval has elapsed
    }
}
If constructing your sequence ahead of time is not a limitation, you can get precise timing using an AVMutableComposition. This would play 4 sounds evenly spaced over 1 second:
// Set up your composition
AVMutableComposition *composition = [[AVMutableComposition alloc] init];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};

for (NSInteger i = 0; i < 4; i++)
{
    AVMutableCompositionTrack *track = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    NSURL *url = [[NSBundle mainBundle] URLForResource:[NSString stringWithFormat:@"sound_file_%i", (int)i] withExtension:@"caf"];
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:options];
    AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
    CMTimeRange timeRange = [assetTrack timeRange];

    Float64 t = i * 1.0;
    NSError *error;
    BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTimeMake(t, 4) error:&error];
    NSAssert(success && !error, @"error creating composition");
}

AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:composition];
self.avPlayer = [[AVPlayer alloc] initWithPlayerItem:playerItem];

// Later, when you want to play
[self.avPlayer seekToTime:kCMTimeZero];
[self.avPlayer play];
Original credit for this solution: http://forum.theamazingaudioengine.com/discussion/638#Item_5
And more detail: precise timing with AVMutableComposition
I think a better approach to time management would be to have a bpm setting (120, for example) and go off of that instead. Measurements in minutes and seconds are near-useless when writing/making music or music applications.
If you look at any sequencing app, they all go by beats instead of time. On the opposite side of things, if you look at a waveform editor, it uses minutes and seconds.
I'm not sure of the best way to implement this in code, but I think this approach will save you a lot of headaches down the road.
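For what it's worth, the bpm-to-seconds conversion itself is simple. A minimal sketch, where the 16th-note step size is an illustrative assumption:
// One beat is a quarter note, so a 16th-note step is a quarter of a beat.
static double secondsPerStep(double bpm)
{
    double secondsPerBeat = 60.0 / bpm;  // e.g. 0.5 s at 120 bpm
    return secondsPerBeat / 4.0;         // e.g. 0.125 s per 16th note
}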