I'm using the MPMoviePlayer to play a movie. I want to be able to play the movie and allow the user to skip to a certain time in the film at the press of a button.
I'm setting the currentPlaybackTime property of the player, but it doesn't seem to work. Instead it simply starts the movie from the beginning, no matter what value I use.
Also, when I log the currentPlaybackTime property on a button press, it always returns a large number, and sometimes even a negative value (e.g. -227361412). Is this expected?
Sample code below:
- (IBAction) playShake
{
    NSLog(@"playback time = %d", playerIdle.currentPlaybackTime);
    [self.playerIdle setCurrentPlaybackTime:2.0];
    return;
}
I have successfully used this method of skipping to a point in a movie in the past. I suspect that your issue is actually with the video itself. When you set the currentPlaybackTime MPMoviePlayer will skip to the nearest keyframe in the video. If the video has few keyframes or you're only skipping forward a few seconds this could cause the video to start over from the beginning when you change the currentPlaybackTime.
-currentPlaybackTime returns an NSTimeInterval (a double), which you are printing as a signed int with %d. This will result in unexpected values. Try either casting to int ((int)playerIdle.currentPlaybackTime) or printing the double with a float format such as %1.3f.
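For example, either of these variants of the log line from the question would print a sensible value:

NSLog(@"playback time = %f", playerIdle.currentPlaybackTime);      // print the NSTimeInterval as a double
NSLog(@"playback time = %d", (int)playerIdle.currentPlaybackTime); // or cast it to int first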
Related
I need to play 2 sounds with 2 AVAudioPlayer objects at exactly the same time, so I found this example in Apple's AVAudioPlayer Class Reference (https://developer.apple.com/library/mac/#documentation/AVFoundation/Reference/AVAudioPlayerClassReference/Reference/Reference.html):
- (void) startSynchronizedPlayback {
    NSTimeInterval shortStartDelay = 0.01; // seconds
    NSTimeInterval now = player.deviceCurrentTime;
    [player playAtTime: now + shortStartDelay];
    [secondPlayer playAtTime: now + shortStartDelay];
    // Here, update state and user interface for each player, as appropriate
}
What I don't understand is: why does the secondPlayer also have the shortStartDelay?
Shouldn't it be without it? I thought only the first player needed the 0.01 s delay, since it is called before the second player... but in this code both players have the delay...
Can anyone explain to me whether that is right, and why?
Thanks a lot
Massy
If you only use the play method ([firstPlayer play];), firstPlayer will start before the second one, as it receives the call first.
If you set no delay ([firstPlayer playAtTime:now];), firstPlayer will also start before the second one, because firstPlayer will check the time at which it is supposed to start and see that it has already passed. Thus, it will behave the same as when you only use the play method.
The delay is there to ensure that the two players start at the same time. It needs to be long enough to ensure that both players receive the call before the 'now + delay' time has passed.
I don't know if I'm clear (English is not my native language). I can try to be clearer if you have questions.
Yeah, what he said ^ playAtTime: will schedule both players to start at that time (sometime in the near future).
To make it obvious, you can set "shortStartDelay" to 2 seconds and you will see there is a two-second pause before both items start playing.
Another tip to keep in mind here is that when you play/pause AVAudioPlayers they don't actually STOP at exactly the same time. So when you want to resume, you should also re-sync the audio tracks.
Swift example:
let currentDeviceTime = firstPlayer.deviceCurrentTime
let trackTime = firstPlayer.currentTime
players.forEach {
    $0.currentTime = trackTime
    $0.play(atTime: currentDeviceTime + 0.1)
}
Where players is an array of AVAudioPlayers and firstPlayer is the first item in that array.
Notice how I am also resetting "currentTime", which is how many seconds into the audio track playback should resume from. Otherwise, every time the user plays/pauses the track, the players drift out of sync!
I have a program that uses a file player audio unit to play, pause and stop an audio file. The way I am accomplishing this is by initializing the file player audio unit to play the file at position zero, and then when the user presses the pause button, I stop the AUGraph, capture the current position, and then use that position as the start position when the user presses the play button. Everything is working as it should, but every 3 or 4 times I hit pause and then play, the song starts playing a half to a full second BEFORE the point where I hit pause.
I can't figure out why this is happening. Do any of you have any thoughts? Here is a simplified version of my code.
//initialize AUGraph and File player Audio unit
...
...
...
//Start AUGraph
...
...
...
// pause playback
- (void) pauseAUGraph {
    // first stop the AUGraph
    result = AUGraphStop (processingGraph);

    // get current play head position
    AudioTimeStamp ts;
    UInt32 size = sizeof(ts);
    result = AudioUnitGetProperty(filePlayerUnit,
                                  kAudioUnitProperty_CurrentPlayTime,
                                  kAudioUnitScope_Global, 0, &ts, &size);

    // save our play head position for use later
    // must add it to itself to take care of multiple presses of the pause button
    sampleFrameSavedPosition = sampleFrameSavedPosition + ts.mSampleTime;

    // this stops the file player unit from playing
    AudioUnitReset(filePlayerUnit, kAudioUnitScope_Global, 0);
    NSLog (@"AudioUnitReset - stopped file player from playing");
    // all done
}
// Stop playback
- (void) stopAUGraph {
    // set the play head to zero, so that when we restart, we restart at the beginning of the file
    sampleFrameSavedPosition = 0;

    // now that we've reset the playhead position, stop the AUGraph
    result = AUGraphStop (processingGraph);
}
Maybe you should use packet counts instead of timestamps, since you just want to pause and play the music, not display the time information.
See BufferedAudioPlayer for an example of using this method.
It may be due to rounding problems in your code:
For example, if every time you hit the pause button your saved position were 0.5/4 of a second earlier than the actual pause time, you might not notice a single occurrence. But after four presses the accumulated error is 0.5/4 × 4, which is the half second you seem to be experiencing.
Thus, I would pay careful attention to the types you are using and make sure they don't round inappropriately. Try using a double (floating point) for your sample times to try to alleviate that problem!
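For instance, a minimal sketch of what I mean, reusing the filePlayerUnit and variable names from the question (the Float64 declaration is an assumption about how the ivar could be typed, not the asker's actual declaration):

// declare the saved position as Float64 rather than an integer type,
// so repeated pauses don't accumulate truncation error
Float64 sampleFrameSavedPosition;

AudioTimeStamp ts;
UInt32 size = sizeof(ts);
OSStatus result = AudioUnitGetProperty(filePlayerUnit,
                                       kAudioUnitProperty_CurrentPlayTime,
                                       kAudioUnitScope_Global, 0, &ts, &size);
if (result == noErr && (ts.mFlags & kAudioTimeStampSampleTimeValid)) {
    sampleFrameSavedPosition += ts.mSampleTime; // mSampleTime is already a Float64
}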
Hope this is clear and helpful! :)
I have used seekToTime: in my application and it's working properly. But I want some more information about it: if I have a streaming video file of 1 minute, I want to play it from the 15th second to the 45th second (the first 15 seconds and the last 15 seconds will not play).
How can I do it?
I know that by using seekToTime: I can play a video from the 15th second, but how do I stop it at the 45th second and also get notified that the video has played for the specified time period?
CMTime timer = CMTimeMake(15, 1);
[player seekToTime:timer];
The above code takes me to the 15th second of the streaming file, but how do I stop it at the 45th second and get notified too?
I have searched a lot but couldn't get any info.
Thanks
EDIT:
As @codeghost suggested, simply use forwardPlaybackEndTime.
You can simply use:
yourAVPlayerItem.forwardPlaybackEndTime = CMTimeMake(10, 1);
Here 10 is the time (in seconds) up to which the AVPlayerItem will play.
Set the forwardPlaybackEndTime property on your AVPlayerItem.
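For the 15-to-45-second case in the question, a minimal sketch combining the two could look like this (assuming an already-configured AVPlayer named player; reaching forwardPlaybackEndTime is treated as reaching the end of the item, so the standard end-of-item notification should fire):

AVPlayerItem *item = player.currentItem;
item.forwardPlaybackEndTime = CMTimeMake(45, 1);   // stop at the 45th second
[player seekToTime:CMTimeMake(15, 1)];             // start from the 15th second

[[NSNotificationCenter defaultCenter] addObserverForName:AVPlayerItemDidPlayToEndTimeNotification
                                                  object:item
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
                                                  NSLog(@"Reached the 45-second end time");
                                              }];
[player play];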
I don't know if there is something built-in for this in AVPlayer, but what I would do is build a function and call it with:
[self performSelector:@selector(stopPlaying) withObject:nil afterDelay:45];
- (void)stopPlaying {
    [player pause];
}
This will stop playback after 45 seconds, and of course you can replace 45 with any number of seconds you want.
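One caveat with that approach: if the user pauses or seeks manually in the meantime, you would probably want to cancel the pending call, e.g.:

[NSObject cancelPreviousPerformRequestsWithTarget:self
                                         selector:@selector(stopPlaying)
                                           object:nil];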
You can use [AVPlayer addPeriodicTimeObserverForInterval:queue:usingBlock:] for that purpose.
Check the player's current time periodically, and call pause or stop once it reaches the target time.
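A minimal sketch of that approach (assuming an AVPlayer named player; the half-second interval is just an illustrative choice):

__weak AVPlayer *weakPlayer = player;
[player addPeriodicTimeObserverForInterval:CMTimeMake(1, 2)   // fire roughly every 0.5 s
                                     queue:dispatch_get_main_queue()
                                usingBlock:^(CMTime time) {
    if (CMTimeGetSeconds(time) >= 45.0) {
        [weakPlayer pause];
        // remember to remove the observer with -removeTimeObserver: once you're done with it
    }
}];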
One of my apps has a simple metronome-style feature that plays a click sound a specified number of times per minute (bpm). I'm doing this by starting an NSTimer, with an interval calculated from the specified bpm, that calls a method that plays the sound.
If I put an NSLog line into the play method, I can see that NSTimer is firing accurately to about 1 millisecond. However, if I record the sound output into an audio editor and then measure the interval between clicks, I can see that they are not evenly spaced. For example, with 150 bpm, the timer fires every 400 milliseconds. But most of the sounds play after 395 milliseconds, with every third or fourth sound playing after 418 milliseconds.
So the sounds are not uniformly delayed, but rather follow a pattern of shorter and longer intervals. It seems as if iOS has a lower resolution for the timing of sounds and is rounding each sound event to the nearest available point, rounding up or down as needed to stay on track overall.
I have tried this with system sounds, AVAudioPlayer and OpenAL and have gotten the exact same results with all three methods. With each method, I'm doing all the setup when the view loads, so each time I play the sound all I have to do is play it. With AVAudioPlayer, I tried calling prepareToPlay using a second timer after each time the sound plays, so it is initialized and ready to go next time, but got the same results.
Here's the code for setting up the OpenAL sound in viewDidLoad (adapted from this tutorial):
// set up the context and device
ALCcontext *context;
ALCdevice *device;
OSStatus result;
device = alcOpenDevice(NULL); // select the "preferred device"
if (device) {
    context = alcCreateContext(device, NULL); // use the device to make a context
    alcMakeContextCurrent(context); // set the context to the currently active one
}
// open the sound file
NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"TempoClick" ofType:@"caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
AudioFileID fileID;
result = AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID);
if (result != 0) DLog(@"cannot open file %@: %ld", soundFilePath, result);
// get the size of the file data
UInt32 fileSize = 0;
UInt32 propSize = sizeof(UInt64);
result = AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataByteCount, &propSize, &fileSize);
if (result != 0) DLog(@"cannot find file size: %ld", result);
DLog(@"file size: %li", fileSize);
// copy the data into a buffer, then close the file
unsigned char *outData = malloc(fileSize);
AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID); // we get a "file is not open" error on the next line if we don't open this again
result = AudioFileReadBytes(fileID, false, 0, &fileSize, outData);
if (result != 0) NSLog(@"cannot load data: %ld", result);
AudioFileClose(fileID);
alGenBuffers(1, &tempoSoundBuffer);
alBufferData(self.tempoSoundBuffer, AL_FORMAT_MONO16, outData, fileSize, 44100);
free(outData);
outData = NULL;
// connect the buffer to the source and set some preferences
alGenSources(1, &tempoSoundSource);
alSourcei(tempoSoundSource, AL_BUFFER, tempoSoundBuffer);
alSourcef(tempoSoundSource, AL_PITCH, 1.0f);
alSourcef(tempoSoundSource, AL_GAIN, 1.0f);
alSourcei(tempoSoundSource, AL_LOOPING, AL_FALSE);
And then in the play method I just call:
alSourcePlay(self.tempoSoundSource);
Can anyone explain what is happening here, and how I can work around it?
UPDATE 1:
I have another project that plays brief sounds with audio units, so as a quick test I added a timer to that project to play my click sound every 400 milliseconds. In that case, the timing is nearly perfect. So, it seems that NSTimer is fine but system sounds, AVAudioPlayer and OpenAL are less accurate in their playback than audio units.
UPDATE 2:
I just reworked my project to use audio units and now the audio is playing back much more accurately. It still occasionally drifts by up to four milliseconds in either direction, but this is better than the other audio methods. I'm still curious why the other methods all show a pattern of short, short, short, long intervals -- it's like the audio playback times are being rounded up or down to map to some kind of frame rate -- so I'll leave this question open for anyone who can explain that and/or offer a workaround for the other audio methods.
NSTimer does not guarantee when your method will actually get fired.
More info here: How to program a real-time accurate audio sequencer on the iphone?
Regarding your edits:
AVAudioPlayer takes some time to initialize itself. If you call prepareToPlay, it will initialize itself such that it can play the currently loaded sound immediately upon calling play. Once playback stops, it uninitializes itself, so you'd need to call prepareToPlay again to reinitialize. It's best to use this class for stream-y playback rather than discrete sound playback.
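For example, one way to keep an AVAudioPlayer primed between discrete plays is to re-prime it from the delegate callback (a sketch, assuming the player's delegate is set to self):

// AVAudioPlayerDelegate callback: re-prime the player as soon as a clip finishes,
// so the next -play call doesn't pay the initialization cost again
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
    [player prepareToPlay];
}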
With OpenAL, once you've loaded the buffer, attaching it to a source and playing it should cause no delay at all.
You can encapsulate your audio units code into a .mm file and then call that from .m modules without having to compile those as C++.
Okay, I've figured it out. The real reason audio units worked better than the other audio methods is that my audio unit class, which I was adapting from another project, was setting a buffer duration property in the audio session, like this:
Float32 preferredBufferSize = .001;
UInt32 size = sizeof(preferredBufferSize);
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, size, &preferredBufferSize);
When I added this code to the OpenAL version, or even to the AVAudioPlayer version, I got accuracy to within a few milliseconds, the same as with audio units. (System Sounds, however, were still not very accurate.) I can verify the connection by increasing the buffer size and watching the playback intervals get less accurate.
Of course I only figured this out after spending an entire day adapting my project to use audio units -- tweaking it to compile under C++, testing the interruption handlers, etc. I hope this can save someone else from the same trouble.
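For what it's worth, on later SDKs (where AudioSessionSetProperty is deprecated) the same preference can be expressed through AVAudioSession; a sketch of the equivalent call:

NSError *error = nil;
if (![[AVAudioSession sharedInstance] setPreferredIOBufferDuration:0.001 error:&error]) {
    NSLog(@"Could not set preferred IO buffer duration: %@", error);
}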
When does the AVAudioPlayer's currentTime method return a negative value? The audio file is playing (I am putting in a check before getting currentTime) but making a call to currentTime returns a negative value.
Any ideas? Thanks
if (thePlayer != nil && [thePlayer isPlaying]) {
    double playerTime = [thePlayer currentTime];
    NSLog(@"Player Time: %f", playerTime);
}
Output
Player Time: -0.019683
Are you testing this on the simulator? There are several bugs with AVAudioPlayer on the simulator. One is that the currentTime can be a very large positive or negative number. Even if you set the currentTime to a particular number, it will still often show something different. As far as I know this is only an issue on the simulator and not when running on a device.
Here is the code I use to set the currentTime property of an AVAudioPlayer instance:
- (void)safeSetCurentTime:(NSTimeInterval)newTime {
    self._player.currentTime = newTime;
    if (self._player.currentTime != newTime)
    {
        // code falls through to here all the time
        // the second attempt _usually_ works.
        [self prepareAudioForPlayback];
        self._player.currentTime = newTime;
        //NSLog(@"Set time failed");
    }
}
I believe this issue is fixed in the iOS 4 beta 2 SDK release, so you shouldn't see it on the iPhone. See here. However, I think we're stuck with the problem on the iPad until iOS 4 is available on that device.
Does anyone know of a workaround? Is there any way to predict how the current time will be incorrectly reported, so a correction factor can be applied? What I'm seeing is that the current time is reported to be a few seconds behind the actual playback time (which could be negative if you're near the start of the audio), and it tracks along with the correct position. So perhaps there's some offset that can be applied whenever the app is used on an earlier iOS version?
I found a workaround that fixes the problem on the iPad, which is workable until iOS4 is released for the iPad and fixes the issue.
Keep hold of the audio buffer, and when you're about to resume playback after pausing/stopping, reload the audio buffer into the AVAudioPlayer instance, set currentTime to where you want playback to resume from, and then resume playback.
Works perfectly for me, and reloading the audio buffer seems very fast.
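A minimal sketch of that reload-then-seek approach (audioData and resumeTime are hypothetical placeholders for whatever you keep hold of):

NSError *error = nil;
self.player = [[AVAudioPlayer alloc] initWithData:self.audioData error:&error];
if (self.player) {
    [self.player prepareToPlay];
    self.player.currentTime = resumeTime;   // where playback should resume from
    [self.player play];
} else {
    NSLog(@"Could not recreate the player: %@", error);
}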