MPMoviePlayerController % of data buffered - iPhone

While using MPMoviePlayerController, is there any way to find what percentage of the video's data has been buffered?
My aim is to show a progress bar indicating how much has loaded, along with the numeric percentage.
Thanks in advance.

Have you checked out the Apple documentation for MPMoviePlayerController?
http://developer.apple.com/library/ios/#documentation/mediaplayer/reference/MPMoviePlayerController_Class/Reference/Reference.html
There you can find two properties that might help you: duration and playableDuration. It's not an exact fit, but pretty close. One thing you will need to implement yourself is a way to query these properties intelligently; for example, you might use an NSTimer to fetch the info from your MPMoviePlayerController instance every 0.5 seconds.
For example, assume you have a property called myPlayer of type MPMoviePlayerController that you initialize in your view controller's init method, etc.
Then follow it with something like this:
// Schedule the timer on the current run loop so it actually fires
// (a plain timerWithTimeInterval: would still need to be added to a run loop).
self.checkStatusTimer = [NSTimer scheduledTimerWithTimeInterval:0.5
                                                          target:self
                                                        selector:@selector(updateProgressUI)
                                                        userInfo:nil
                                                         repeats:YES];
And a method like this to update the UI:
- (void)updateProgressUI {
    if (self.myPlayer.duration == self.myPlayer.playableDuration) {
        // all done
        [self.checkStatusTimer invalidate];
    }
    int percentage = roundf((self.myPlayer.playableDuration / self.myPlayer.duration) * 100);
    self.progressLabel.text = [NSString stringWithFormat:@"%d%%", percentage];
}
Note the double percent sign in the -stringWithFormat: call; %% is the format specifier that resolves to a literal % sign. For more on format specifiers, see Apple's String Format Specifiers documentation.

Related

How to Correctly Destroy ToneUnit after Tone Fades Out?

I'm generating tones on iPhone using AudioUnits based on Matt Gallagher's classic example. In order to avoid the chirps and clicks at the beginning/end, I'm fading the amplitude in/out in the RenderTone callback. I'd like to destroy the ToneUnit at the end of the fade out, that is, after the amplitude reaches zero. The only way I can think to do this is to call an instance method from within the callback:
if (PlayerState == FADING_OUT) {
    amplitude -= stepsize;
    if (amplitude <= 0) {
        amplitude = 0;
        PlayerState = OFF;
        [viewController destroyToneUnit];
    }
}
Unfortunately this is more challenging than I had thought. For one thing, I still get the click at the end that the fade-out was supposed to eliminate. For another, I get this log notice:
<AURemoteIO::IOThread> Someone is deleting an AudioConverter while it is in use.
What does this message mean and why am I getting it?
How should I kill the ToneUnit? I suspect that the click occurs because RenderTone and destroyToneUnit run on different threads. How can I get these synchronized?
In case it's helpful, here's my destroyToneUnit instance method:
- (void)destroyToneUnit {
    AudioOutputUnitStop(toneUnit);
    AudioUnitUninitialize(toneUnit);
    AudioComponentInstanceDispose(toneUnit);
    toneUnit = nil;
}
If I NSLog messages right before and right after AudioUnitUninitialize(toneUnit);, the notice appears between them.
I also ran into the same issue. When I called destroyToneUnit from the main thread, the warning went away.
[viewController performSelectorOnMainThread:@selector(destroyToneUnit) withObject:nil waitUntilDone:NO];
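Putting the two pieces together, the tail of the fade-out in the render callback would then look something like this (a sketch based on the code above, not a verified fix for the click):
if (PlayerState == FADING_OUT) {
    amplitude -= stepsize;
    if (amplitude <= 0) {
        amplitude = 0;
        PlayerState = OFF;
        // Tear the tone unit down on the main thread instead of the render thread.
        [viewController performSelectorOnMainThread:@selector(destroyToneUnit)
                                         withObject:nil
                                      waitUntilDone:NO];
    }
}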

Using Novocaine in an audio app

I'm building an iPhone app that generates random guitar music by playing back individual recorded guitar notes in "caf" format. These notes vary in duration from 3 to 11 seconds, depending on the amount of sustain.
I originally used AVAudioPlayer for playback, and in the simulator at 120 bpm playing 16th notes it sang beautifully, but on my handset, as soon as I upped the tempo a little over 60 bpm playing just 1/4 notes, it ran like a dog and wouldn't keep in time. My elation was very short-lived.
To reduce latency, I tried to implement playback via Audio Units using the Apple MixerHost project as a template for an audio engine, but kept getting a bad access error after I bolted it on and connected everything up.
After many hours of it doing my head in, I gave up on that avenue of thought and I bolted on the Novocaine audio engine instead.
I have now run into a brick wall trying to connect it up to my model.
On the most basic level, my model is a Neck object containing an NSDictionary of Note objects.
Each Note object knows what string and fret of the guitar neck it's on and contains its own AVAudioPlayer.
I build a chromatic guitar neck containing either 132 notes (6 strings by 22 frets) or 144 notes (6 strings by 24 frets) depending on the neck size selected in the user preferences.
I use these Notes as my single point of truth so all scalar Notes generated by the music engine are pointers to this chromatic note bucket.
@interface Note : NSObject <NSCopying>
{
    NSString *name;
    AVAudioPlayer *soundFilePlayer;
    int stringNumber;
    int fretNumber;
}
I always start off playback with the root Note or Chord of the selected scale and then generate the note to play next so I am always playing one note behind the generated note. This way, the next Note to play is always queued up ready to go.
Playback control of these Notes is achieved with the following code:
- (void)runMusicGenerator:(NSNumber *)counter
{
    if (self.isRunning) {
        Note *NoteToPlay;
        // pulseRate is the time interval between beats
        // staticNoteLength = 1/4 notes, 1/8th notes, 16th notes, etc.
        float delay = self.pulseRate / [self grabStaticNoteLength];
        // user setting to play single, double or triplet notes.
        if (self.beatCounter == CONST_BEAT_COUNTER_INIT_VAL) {
            NoteToPlay = [self.GuitarNeck generateNoteToPlayNext];
        } else {
            NoteToPlay = [self.GuitarNeck cloneNote:self.GuitarNeck.NoteToPlayNow];
        }
        self.GuitarNeck.NoteToPlayNow = NoteToPlay;
        [self callOutNoteToPlay];
        [self performSelector:@selector(runDrill:) withObject:NoteToPlay afterDelay:delay];
    }
}
- (Note *)generateNoteToPlayNext
{
    if ((self.musicPaused) || (self.musicStopped)) {
        // grab the root note on the string to resume
        self.NoteToPlayNow = [self grabRootNoteForString];
        // reset the flags
        self.musicPaused = NO;
        self.musicStopped = NO;
    } else {
        // Set NoteRingingOut to NoteToPlayNow
        self.NoteRingingOut = self.NoteToPlayNow;
        // Set NoteToPlayNow to NoteToPlayNext
        self.NoteToPlayNow = self.NoteToPlayNext;
        if (!self.NoteToPlayNow) {
            self.NoteToPlayNow = [self grabRootNoteForString];
            // now prep the note's audio player for playback
            [self.NoteToPlayNow.soundFilePlayer prepareToPlay];
        }
    }
    // Load NoteToPlayNext
    self.NoteToPlayNext = [self generateRandomNote];
    // Return the note that should sound now
    return self.NoteToPlayNow;
}
- (void)callOutNoteToPlay
{
    self.GuitarNeck.NoteToPlayNow.soundFilePlayer.delegate = (id)self;
    [self.GuitarNeck.NoteToPlayNow.soundFilePlayer setVolume:1.0];
    [self.GuitarNeck.NoteToPlayNow.soundFilePlayer setCurrentTime:0];
    [self.GuitarNeck.NoteToPlayNow.soundFilePlayer play];
}
Each Note's AVAudioPlayer is loaded as follows:
- (AVAudioPlayer *)buildStringNotePlayer:(NSString *)nameOfNote
{
    NSString *soundFileName = @"S";
    soundFileName = [soundFileName stringByAppendingString:[NSString stringWithFormat:@"%d", stringNumber]];
    soundFileName = [soundFileName stringByAppendingString:@"F"];
    if (fretNumber < 10) {
        soundFileName = [soundFileName stringByAppendingString:@"0"];
    }
    soundFileName = [soundFileName stringByAppendingString:[NSString stringWithFormat:@"%d", fretNumber]];
    NSString *soundPath = [[NSBundle mainBundle] pathForResource:soundFileName ofType:@"caf"];
    NSURL *fileURL = [NSURL fileURLWithPath:soundPath];
    AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:nil];
    return audioPlayer;
}
Here is where I come a cropper.
According to the Novocaine Github page ...
Playing Audio
Novocaine *audioManager = [Novocaine audioManager];
[audioManager setOutputBlock:^(float *audioToPlay, UInt32 numSamples, UInt32 numChannels) {
// All you have to do is put your audio into "audioToPlay".
}];
But in the downloaded project, you use the following code to load the audio ...
// AUDIO FILE READING OHHH YEAHHHH
// ========================================
NSURL *inputFileURL = [[NSBundle mainBundle] URLForResource:@"TLC" withExtension:@"mp3"];
fileReader = [[AudioFileReader alloc]
              initWithAudioFileURL:inputFileURL
              samplingRate:audioManager.samplingRate
              numChannels:audioManager.numOutputChannels];
[fileReader play];
fileReader.currentTime = 30.0;
[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels)
{
    [fileReader retrieveFreshAudio:data numFrames:numFrames numChannels:numChannels];
    NSLog(@"Time: %f", fileReader.currentTime);
}];
Here is where I really start to get confused, because the first method hands you a float buffer to fill while the second one loads from a URL.
How do you get a "caf" file into a float buffer? I am not sure how to implement Novocaine; it is still fuzzy in my head.
My questions, which I hope someone can help me with, are as follows:
1. Are Novocaine objects similar to AVAudioPlayer objects, just more versatile and tweaked to the max for minimum latency? i.e. self-contained audio playing (/recording/generating) units?
2. Can I use Novocaine in my model as it is? i.e. one Novocaine object per chromatic note, or should I have one Novocaine object that contains all the chromatic Notes? Or do I just store the URL in the note instead and pass that to a Novocaine player?
3. How can I put my audio into "audioToPlay" when my audio is a "caf" file and "audioToPlay" takes a float?
4. If I include and declare a Novocaine property in Note.m, do I then have to rename the class to Note.mm in order to use the Novocaine object?
5. How do I play multiple Novocaine objects concurrently in order to reproduce chords and intervals?
6. Can I loop a Novocaine object's playback?
7. Can I set the playback length of a note? i.e. play a 10 sec note for only 1 sec?
8. Can I modify the above code to use Novocaine?
9. Is the method I am using in runMusicGenerator the correct one to use in order to maintain a tempo that is up to professional standards?
Novocaine makes your life easier by eliminating the need to set up the RemoteIO AudioUnit manually. That setup includes painfully filling in a bunch of CoreAudio structs and providing callbacks such as this audio-processing callback:
static OSStatus PerformThru(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData);
Instead, Novocaine handles that in its implementation and then calls your block, which you set like this:
[audioManager setOutputBlock: ^(float *audioToPlay, UInt32 numSamples, UInt32 numChannels){} ];
Whatever you write to audioToPlay gets played.
1. Novocaine sets up the RemoteIO AudioUnit for you. This is a low-level CoreAudio API, different from the high-level AVFoundation, and very low-latency as expected. You are right in that Novocaine is self-contained: you can record, generate, and process audio in realtime.
2. Novocaine is a singleton; you cannot have multiple Novocaine instances. One way to do it is to store your guitar sound(s) in a separate class or array, and then write a set of methods that use Novocaine to play them.
3. You have a bunch of options. You can use Novocaine's AudioFileReader to play your .caf file for you. You do this by allocating an AudioFileReader and passing it the URL of the .caf file you want to play, as per the example code. You then stick [fileReader retrieveFreshAudio:data numFrames:numFrames numChannels:numChannels] in your block, as per the example code. Each time your block is called, AudioFileReader grabs and buffers a chunk of audio from disk and puts it in audioToPlay, which subsequently gets played. There are some disadvantages to this: for short sounds (such as your guitar sounds, I'm assuming), repeatedly calling retrieveFreshAudio is a performance hit. It is generally a better idea (for short sounds) to perform a synchronous, sequential read of the entire file into memory. Novocaine does not provide a way to do this (yet); you will have to use Extended Audio File Services, and the Apple example project MixerHost details how to do it. A rough sketch follows below.
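For the whole-file approach, here is my own rough sketch (not Novocaine API) of pulling a mono .caf into a float buffer with Extended Audio File Services. The function name is hypothetical, and it assumes the file's sample rate matches the engine's:
#import <AudioToolbox/AudioToolbox.h>

float *LoadCafIntoMemory(NSURL *url, UInt32 *outFrameCount, Float64 sampleRate)
{
    ExtAudioFileRef file = NULL;
    // (__bridge cast assumes ARC; use a plain (CFURLRef) cast under MRC.)
    if (ExtAudioFileOpenURL((__bridge CFURLRef)url, &file) != noErr) return NULL;

    // Ask the converter for packed 32-bit floats, mono, at the engine's sample rate.
    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mSampleRate       = sampleRate;
    clientFormat.mFormatID         = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    clientFormat.mChannelsPerFrame = 1;
    clientFormat.mBitsPerChannel   = 32;
    clientFormat.mBytesPerFrame    = sizeof(float);
    clientFormat.mFramesPerPacket  = 1;
    clientFormat.mBytesPerPacket   = sizeof(float);
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(clientFormat), &clientFormat);

    SInt64 fileFrames = 0;
    UInt32 propSize = sizeof(fileFrames);
    ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileLengthFrames,
                            &propSize, &fileFrames);

    float *samples = malloc(sizeof(float) * (size_t)fileFrames);
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize   = (UInt32)(fileFrames * sizeof(float));
    bufferList.mBuffers[0].mData           = samples;

    UInt32 framesToRead = (UInt32)fileFrames;
    // Synchronous, whole-file read (a production version would loop until all frames are read).
    ExtAudioFileRead(file, &framesToRead, &bufferList);
    ExtAudioFileDispose(file);

    *outFrameCount = framesToRead;
    return samples; // caller frees
}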
4. If you are using AudioFileReader, yes. You only rename to .mm when you are #import-ing Obj-C++ headers or #include-ing C++ headers.
5. As mentioned earlier, only one Novocaine instance is allowed. You can achieve polyphony by mixing multiple audio sources, which simply means adding buffers together. If you have made multiple versions of the same guitar sound at different pitches, just read them all into memory and mix away. If you only want to have one guitar sound, then you have to change the playback rate in realtime of however many notes you are playing and then mix down.
6. Novocaine is agnostic about what you are actually playing and does not care how long you play a sample for. To loop a sound, maintain a count of how many samples have elapsed, check whether you are at the end of your sound, and then set that count back to 0.
7. Yes. Assuming a 44.1 kHz sample rate, 1 sec of audio = 44100 samples, so you would reset your count when it reaches 44100.
8. Yes. It looks something like this. Assuming you have four guitar sounds which are mono and longer than one second, you have read them into memory as float *guitarC, *guitarE, *guitarG, *guitarB; (jazzy CMaj7 chord, w00t), and you want to mix them down for one second and loop that back in mono:
[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
    static int count = 0;
    for (int i = 0; i < numFrames; ++i) {
        // Mono mix each sample of each sound together. Since the result can be 4x louder,
        // divide the total amplitude by 4.
        // You could use vDSP_vadd from the Accelerate framework for added performance.
        data[i] = (guitarC[count] + guitarE[count] + guitarG[count] + guitarB[count]) * 0.25;
        if (++count >= 44100) count = 0; // loops the 1-sec mix
    }
}];
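As the comment above mentions, the per-sample additions can be handed off to the Accelerate framework. A hedged sketch of mixing two of the note buffers for one render quantum (the function name and the frames parameter are mine, and it ignores the loop wrap-around for brevity):
#import <Accelerate/Accelerate.h>

// Mix two mono sources into `out` for `frames` samples, scaling by 0.5 to avoid clipping.
// (The four-note case above would chain two more vDSP_vadd calls before the scale.)
static void MixTwoMono(const float *a, const float *b, float *out, vDSP_Length frames)
{
    const float gain = 0.5f;
    vDSP_vadd(a, 1, b, 1, out, 1, frames);     // out[n] = a[n] + b[n]
    vDSP_vsmul(out, 1, &gain, out, 1, frames); // out[n] *= 0.5
}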
9. Not exactly. Using performSelector, or any mechanism scheduled on a run loop or thread, is not guaranteed to be precise; you might experience timing irregularities when the CPU load fluctuates, for example. Use the audio block if you want sample-accurate timing; a rough sketch of that idea follows.
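For example, a sketch of counting samples inside the output block to hit beat boundaries exactly. The 120 bpm figure and the idea of just flipping an index or flag (instead of calling Objective-C on the audio thread) are assumptions of mine:
__block UInt32 samplesToNextBeat = 0;
const UInt32 samplesPerBeat = (UInt32)(audioManager.samplingRate * 60.0 / 120.0); // 120 bpm

[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
    for (UInt32 i = 0; i < numFrames; ++i) {
        if (samplesToNextBeat == 0) {
            // Exactly on the beat: advance to the next note here (e.g. bump an index
            // or set a flag the main thread watches), rather than using performSelector.
            samplesToNextBeat = samplesPerBeat;
        }
        samplesToNextBeat--;
        data[i] = 0.0f; // placeholder: write the currently sounding note's samples here
    }
}];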

Cocos2D getting progress of CCAction

I have a Cocos2D game with Box2D physics. In my GameScene.mm, I'm working on a method to zoom to a given scale:
- (void)zoomToScale:(float)zoom withDuration:(ccTime)duration
{
    id action = [CCScaleTo actionWithDuration:duration scale:zoom];
    [scrollNode runAction:action];
    currentZoomLevel = zoom;
}
The problem that I'm having is that currentZoomLevel (which is used in the Scene's update() method) is set to the zoom immediately, and isn't gradually adjusted as per the animation. So while the animation is in progress, the currentZoomLevel variable is totally wrong.
I'm trying to figure out a way to have the currentZoomLevel variable match the progress of the animation as it's happening. According to the CCAction API Reference, the CCAction's update method takes a ccTime that's between 0 and 1 based on the progress of the animation (0 is just started, 1 is just finished).
How can I access this ccTime from outside of the action? I want to have something in my Scene's update method like this:
if (animating)
{
    float progress = [action getProgress]; // How do I do this?
    // Do math to update currentZoomLevel based on progress
}
Am I missing something obvious here, or am I going to have to subclass CCScaleTo?
You should be able to access the scale directly as it animates.
Instead of
float progress = [action getProgress];
try
float current_scale = some_node.scale;
where "some_node" is the thing you're animating/scaling.
Actually, your best bet is to use the new Cocos2D extension "CCLayerPanZoom", which handles all of this marvellously for you! It should be part of any new cocos2D install (v.1.0+).

how to do a running score animation in iphone sdk

I wish to do a running score animation for my iPhone app in Xcode, such that whenever I increase the score by an integer scoreAdded, the displayed score counts up to the new score instead of jumping straight to it. I tried a for loop with sleep, but to no avail. So I'm wondering if there's any way of doing it. Thank you.
Add a timer that will call a specific method every so often, like this:
NSTimer *tUpdate;
NSTimeInterval tiCallRate = 1.0 / 15.0;
tUpdate = [NSTimer scheduledTimerWithTimeInterval:tiCallRate
                                           target:self
                                         selector:@selector(updateScore:)
                                         userInfo:nil
                                          repeats:YES];
This will call your updateScore: method 15 times a second.
Then, in the main part of your game, instead of simply adding the amount to currentScore, store the additional amount in a separate member variable, say addToScore, e.g.
addToScore = 10;
Your new updateScore: method would have a bit of code like this:
- (void)updateScore:(NSTimer *)timer
{
    if (addToScore)
    {
        addToScore--;
        currentScore++;
        // Now display currentScore
    }
}
Try redrawing the view after each iteration where your score is being displayed:
for (/* loop conditions here */) {
    score += 1;
    [scoreView setNeedsDisplay]; // UIView's setNeedsDisplay takes no argument on iOS
}

jump into "audioPlayerDidFinishPlaying" function unexpected

I used an AVAudioPlayer object to control playback of multiple music files. I also created a UISlider to control seeking within a file. But I have a problem when seeking: after the seek, AVAudioPlayer updates the time correctly but then jumps into the "audioPlayerDidFinishPlaying" delegate method unexpectedly.
Here is the code that I used:
- (void)timeChange
{
    _player.currentTime = _timeControl.value;
    [self updateCurrentTimeForPlayer];
}

- (void)updateCurrentTimeForPlayer
{
    if (_isNeedUpdate == NO) return;
    _timeControl.maximumValue = _player.duration;
}
A long shot, but maybe the audio format doesn't support seeking?
Why the call to updateCurrentTimeForPlayer? Where is _isNeedUpdate set? (Why all the underscores?)
Can you add some debug NSLogs to find out what's going on? Something along the lines of the sketch below.
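For instance, a hypothetical bit of instrumentation using the names from the question (what to log is up to you; this just shows the player state around the seek and in the delegate callback):
- (void)timeChange
{
    NSLog(@"seek to %.2f (duration %.2f, playing %d)",
          _timeControl.value, _player.duration, _player.playing);
    _player.currentTime = _timeControl.value;
    [self updateCurrentTimeForPlayer];
}

- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag
{
    NSLog(@"didFinishPlaying successfully=%d at currentTime=%.2f", flag, player.currentTime);
}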