For how long does AVPlayer continue to buffer from a URL after a pause? - iPhone

I was reading the AVPlayer class documentation and I couldn't find the answer to my question.
I'm playing streamed audio from the Internet in my iPhone app, and I'd like to know whether, after a [myAVPlayer pause]; invocation, myAVPlayer will keep downloading the audio file in the background for a long time.
If the user pushes the "Pause" button, invoking [myAVPlayer pause];, and then leaves the app, will myAVPlayer keep downloading a large amount of data?
I'm concerned about this when the user is on a 3G network.

I was faced with the same question and did some experimentation. My observations are only valid for video, but if they also hold for audio, then AVPlayer will try to buffer around 30 seconds of content when you press pause. If you have access to the web server, you could run tcpdump/wireshark and see how long the server keeps sending data after you press pause.

You can manage how long AVPlayer continues to buffer.
You need to set the preferredForwardBufferDuration of the player's currentItem. If you want to keep buffering to a minimum, set the value to 1; if you set it to 0, the player chooses the buffer duration automatically.
self.avPlayer.currentItem.preferredForwardBufferDuration = 1;
From the Apple documentation: "This property defines the preferred forward buffer duration in seconds. If set to 0, the player will choose an appropriate level of buffering for most use cases. Setting this property to a low value will increase the chance that playback will stall and re-buffer, while setting it to a high value will increase demand on system resources."
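For illustration, here is a minimal sketch of how you might tie this to the pause button (assuming a self.avPlayer property as above, iOS 10 or later, and hypothetical method names):
- (void)pauseTapped {
    [self.avPlayer pause];
    // Keep only about one second buffered ahead while paused.
    self.avPlayer.currentItem.preferredForwardBufferDuration = 1;
}

- (void)resumeTapped {
    // 0 lets the player choose an appropriate buffer level again.
    self.avPlayer.currentItem.preferredForwardBufferDuration = 0;
    [self.avPlayer play];
}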

Related

iPhone HTML5 video - how to start from a different time

What is the correct way to begin playback of a video from a specific time?
Currently, the approach we use is to check at an interval whether it's possible to seek via currentTime and then seek. The problem with this is, when the video fullscreen view pops up, it begins playback from the beginning for up to a second before seeking.
I've tried events such as loadedmetadata and canplay, but those seem to fire too early.
Added information:
It seems the very best I can do is to set a timer that tries to set currentTime repeatedly as soon as play() is called; however, this is not immediate enough. The video loads from the beginning and, after about a second depending on the device, jumps. This is a problem for me because it provides an unsatisfactory experience to the user.
It seems like there can be no solution which does better, but I'm trying to see if there is either:
a) something clever/undocumented which I have missed which allows you to either seek before loading or otherwise indicate that the video needs to start not from 00:00 but from an arbitrary point
b) something clever which allows you to hide the video while it's playing and not display it until it has seeked (So you would see a longer delay on the phone before the fullscreen video window pops up, but it would start immediately where I need it to instead of seeking)
Do something like this:
var video = document.getElementById("video");
video.currentTime = starttimeoffset;
More information can be found on this page dedicated to video time offset how-tos.
For desktop Chrome/Safari, you can append #t=starttimeoffsetinseconds to your video src URL to make it start from a certain position.
For iOS devices, the best we can do is to listen for the timeupdate event and do the seek in there. I guess this is the same as your original approach of using a timer.

How much video content is pre-buffered when using AVPlayer with HTTP Live Streaming, and can that value be changed?

I am writing a video app that plays streaming videos from the web, and I am using AVPlayer to do so. My question is: how do I find out how much video content is pre-buffered? In MPMoviePlayerController you can see the amount of buffered content on the UISlider. I would like to show the same using AVPlayer and also be able to change the amount of pre-buffered content.
My ideal situation is: the user is streaming a movie file with my app, and if he pauses playback, the movie keeps buffering, just like when you watch YouTube videos.
Please help!
Thank you.
You can see the amount of data that has been loaded and buffered ahead of the playhead by looking at the AVPlayerItem loadedTimeRanges property.
e.g.
AVPlayer *player;
NSArray *loadedTimeRanges = player.currentItem.loadedTimeRanges;
NSLog(@"LoadedTimeRanges: %@", loadedTimeRanges);
In the case of my app I can see:
LoadedTimeRanges: (
"CMTimeRange: {{338070700809/1000000000 = 338.071}, {54651145016/1000000000 = 54.651, rounded}}"
)
where the second value (54.651) appears to be the amount of buffering that exists in front of the playhead. In the case of a stall this value decreases as playback continues until reaching approximately 0.
Between 55 and 60 seconds of pre-buffered content is all I've seen – you can only examine this value; you cannot force the player to buffer more data. You could, however, use this value to show the user how much data has been buffered.
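If you want to reduce loadedTimeRanges to a single "seconds buffered ahead of the playhead" number for your slider, something like the following sketch should work (it assumes self.player is your configured AVPlayer):
#import <AVFoundation/AVFoundation.h>

- (NSTimeInterval)bufferedSecondsAheadOfPlayhead {
    AVPlayerItem *item = self.player.currentItem;
    NSTimeInterval now = CMTimeGetSeconds(item.currentTime);
    for (NSValue *value in item.loadedTimeRanges) {
        CMTimeRange range = [value CMTimeRangeValue];
        NSTimeInterval start = CMTimeGetSeconds(range.start);
        NSTimeInterval end = start + CMTimeGetSeconds(range.duration);
        // Find the range containing the playhead and measure what remains in it.
        if (start <= now && now <= end) {
            return end - now;
        }
    }
    return 0;
}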

Recording Chunked Audio in Background iOS

I am trying to figure out a way to record audio in the background of an iOS application and have it streamed to the server.
I have pretty much got this working while the app is in the foreground. I use AVAudioRecorder to record input for X seconds. Once I get the callback that this has finished, I record for another X seconds. Each recording session gets stored to a different file, and I send these files asynchronously to the server.
However, this doesn't seem to work when my app goes into background mode.
When going into the background, the current recording session continues until the X seconds are up; however, my app gets suspended before I can start another recording session.
Any ideas?
Here's the code for my callback:
- (void)audioRecorderDidFinishRecording:(AVAudioRecorder *)aRecorder successfully:(BOOL)flag {
    NSLog(@"hello");
    [self initRecorder];
    [recorder recordForDuration:5];
}
You can't restart recording in the background.
So use the Audio Queue or Audio Unit RemoteIO APIs instead, which will give you smaller "chunks" (callback buffer blocks) of audio without stopping the audio recording.
Concatenate small audio callback chunks into larger file chunks if needed for your network protocol.
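To make the "chunks" idea concrete, here is a rough sketch of an Audio Queue input callback (this is not the poster's code; the NSMutableData accumulator and the hand-off step are assumptions):
#import <AudioToolbox/AudioToolbox.h>

static void HandleInputBuffer(void *inUserData,
                              AudioQueueRef inAQ,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
{
    // The accumulator is whatever you passed as user data to AudioQueueNewInput.
    NSMutableData *chunk = (__bridge NSMutableData *)inUserData;
    [chunk appendBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize];

    // Once the chunk reaches the size your upload protocol expects, hand it off
    // to the networking code and start a fresh NSMutableData here.

    // Re-enqueue the buffer so capture continues without interruption.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
The queue itself is created once with AudioQueueNewInput and started with AudioQueueStart; it keeps delivering small buffers to this callback instead of stopping and restarting a recorder.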
Background audio playback is supported with multitasking, but it's not very clear that background audio recording is; I have not tried it, however. The Audio Unit API might let you continue to record audio while the application is in the background. However, this is kind of a trick, and I imagine it might get pulled at some point.

Multitasking: Stop Background Audio at Specific Time

I am developing an iPhone app which uses background audio (on an infinite loop) to continue playing after the app has entered the background.
My problem is that I want to implement a "sleep timer" which stops playback after a specified period of time.
Is this possible? I have spent an hour looking for a method to do this, to no avail.
EDIT: My current thought is to use a lower-level API, Audio Queue Services, and manually re-fill the queue with another instance of the loop during the AudioQueueOutputCallback. If the timer has expired, I do not re-fill the queue. I'm assuming this should work, since the documentation says audio callbacks are still fired when an app is playing multitasking background audio. Can anyone think of a better way, or a reason why this wouldn't work?
While you queue sound data in the background, your app remains fully functional and running as if it were in the foreground (well, almost), so yes, you should just write a timer that stops the playback at a given time, and it will fire as expected.
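A minimal sketch of such a sleep timer (assuming playback through a self.player AVAudioPlayer and a self.sleepTimer property; the names are only for illustration):
- (void)startSleepTimerWithInterval:(NSTimeInterval)interval {
    self.sleepTimer = [NSTimer scheduledTimerWithTimeInterval:interval
                                                       target:self
                                                     selector:@selector(sleepTimerFired:)
                                                     userInfo:nil
                                                      repeats:NO];
}

- (void)sleepTimerFired:(NSTimer *)timer {
    [self.player stop]; // playback ends; the app will then suspend as usual
    self.sleepTimer = nil;
}
The timer keeps working after the app is backgrounded only because audio is still playing, which is the point made above.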
Now to the second question: once you stop queueing things up, your app will be "frozen" until the user manually brings it to the foreground... So what you should do is start queueing audio data from the second file before the first one is done playing, and if you DO need to pause or stop, maybe a solution is to play 0 bytes (silence)?
I'm not actually sure this would be allowed in the App Store. An app is not allowed to execute at all in the background, with the exception of VoIP apps and push notifications.

AudioQueueNewInput decreases playback volume for AVAudioPlayer

I am using Stephen Celis' SCListener class to record iPhone microphone audio levels. I am also playing audio through AVAudioPlayer. For example, the user presses 'Play' to kick off a sound playing in the background and then has the option to blow into the microphone to play additional, shorter sounds. The code all works fine, playing all the sounds when they should be played; however, the AVAudioPlayer sound volume greatly decreases when you begin listening with SCListener. I have narrowed down the culprit to this line in the SCListener source code:
AudioQueueNewInput(&format, listeningCallback, self, NULL, NULL, 0, &queue);
I have racked my brain and cannot find out how to keep the playback volume at its highest level once this line has executed. I have spoken with Stephen Celis, too, and he does not know what is happening. It is possible, I suppose, that the iPhone turns down the output volume when the microphone is being used so that feedback isn't introduced, but it seems like there should be a way to disable that.
In summary:
1. Start playing a long audio file with AVAudioPlayer - 100% volume (loud).
2. Enable SCListener and begin listening (which calls AudioQueueNewInput).
3. The output volume of the AVAudioPlayer sound greatly decreases.
4. Call [[SCListener sharedListener] stop] to dispose of the queue.
5. The AVAudioPlayer sound resumes at a higher playback volume.
Has anyone seen anything like this, or have any ideas on how to keep the playback volume higher? I have explicitly set the volume parameter to 1.0f to ensure that the gain is at its highest level.
You can try this:
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
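If the route override alone doesn't help, a slightly fuller sketch using the same (old) AudioSession C API would be to put the session into the play-and-record category before overriding the route; exactly where this belongs relative to SCListener's setup is an assumption:
#include <AudioToolbox/AudioToolbox.h>

// Allow simultaneous playback and recording, then push output to the main
// speaker so playback doesn't drop to the quiet receiver route.
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(sessionCategory), &sessionCategory);

UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                        sizeof(audioRouteOverride), &audioRouteOverride);

AudioSessionSetActive(true);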
I'm not familiar with Stephen Celis' work, but if tposchel's suggestion does not work, it might be of some value to look at the input queue's values versus the output queue's audio values in debug mode (this is tricky, though, since the callbacks for these methods are real-time threads).
This may be informative in that it will tell you what the OS believes it is sending to your output device (headphones or built-in speaker).
The brute-force way to fix this problem is to manually normalize (or scale up, as it were) the values within your output queue's callback. This doesn't address your root problem, perhaps, but may serve as a hack until you find the answer.