How do I pause video recording with the iPhone SDK?

I see there is an app called iFile with a pause feature while recording video. How do they do this? I tried using the AVMutableComposition classes: when the user pauses, I cut a new video and then merge the clips at the end, but the processing time of the merge is not desirable.
Can someone give me other good ideas on how to do this? I noticed iFile's approach is very seamless.
Thanks

Here are some ideas. I have not tried either of these.
If you are using an AVAssetWriter to write your captured frames, then you can simply drop the frames while paused. You will need to keep track of the last presentation time stamp (PTS) that was used, and then calculate each new frame's PTS relative to that last time stamp when you start recording again. Doing this with audio as well might be a little trickier.
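A rough Swift sketch of that bookkeeping for the video track, assuming an AVCaptureVideoDataOutput delegate feeding an AVAssetWriterInput; isPaused, timeOffset, and the rest of the state are hypothetical names:

import AVFoundation

// Hypothetical state, e.g. properties on your capture delegate.
var isPaused = false
var needsOffsetUpdate = false    // set while frames are being dropped
var timeOffset = CMTime.zero     // total time spent paused so far
var lastRawPTS = CMTime.invalid  // raw PTS of the last frame we wrote

func write(_ sampleBuffer: CMSampleBuffer, to input: AVAssetWriterInput) {
    let rawPTS = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    if isPaused {
        needsOffsetUpdate = true
        return                   // drop frames while paused
    }
    if needsOffsetUpdate, lastRawPTS.isValid {
        // Grow the offset by the pause gap, minus one frame's duration so
        // the next adjusted PTS lands just after the last written one.
        let gap = CMTimeSubtract(rawPTS, lastRawPTS)
        timeOffset = CMTimeAdd(timeOffset,
                               CMTimeSubtract(gap, CMSampleBufferGetDuration(sampleBuffer)))
        needsOffsetUpdate = false
    }
    // Re-stamp the buffer with the shifted PTS before appending.
    var timing = CMSampleTimingInfo(duration: CMSampleBufferGetDuration(sampleBuffer),
                                    presentationTimeStamp: CMTimeSubtract(rawPTS, timeOffset),
                                    decodeTimeStamp: .invalid)
    var shifted: CMSampleBuffer?
    CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault,
                                          sampleBuffer: sampleBuffer,
                                          sampleTimingEntryCount: 1,
                                          sampleTimingArray: &timing,
                                          sampleBufferOut: &shifted)
    if let shifted = shifted, input.isReadyForMoreMediaData {
        input.append(shifted)
        lastRawPTS = rawPTS
    }
}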
An alternate method would be to use empty edits. I am not sure how you would insert an empty edit in the middle of a track using AVAssetWriter; I know you can insert them at the beginning and end. Using AVMutableCompositionTrack you could use insertEmptyTimeRange:, where the time range is constructed like this:
CMTime delta = CMTimeSubtract(new_sample_time, last_sample_time);
CMTimeRange range = CMTimeRangeMake(last_sample_time, delta);
Where new_sample_time is the time of the first sample after un-pausing, and last_sample_time is the time of the last sample before pausing. Again with audio this may be a little tricky as the buffer for audio generally contains 1024 samples. The CMTime returned by CMSampleBufferGetPresentationTimeStamp is the time of the first sample.
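For illustration, a Swift sketch of that empty edit on a composition track; the two sample times are made-up values:

import AVFoundation

let lastSampleTime = CMTime(seconds: 10.0, preferredTimescale: 600) // last sample before pausing
let newSampleTime = CMTime(seconds: 14.0, preferredTimescale: 600)  // first sample after un-pausing

let composition = AVMutableComposition()
if let track = composition.addMutableTrack(withMediaType: .video,
                                           preferredTrackID: kCMPersistentTrackID_Invalid) {
    let delta = CMTimeSubtract(newSampleTime, lastSampleTime)
    track.insertEmptyTimeRange(CMTimeRange(start: lastSampleTime, duration: delta))
}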
Hope this helps or leads you to a solution.

Related

AVFoundation: How to write video to file in real time instead of using exportAsync?

AVFoundation has been quite a struggle for me because most of the examples and documentation out there are in Obj-C. As my title states, I would like to write to file in real time instead of calling exportAsync once the user has finished recording their video.
If anyone can offer some advice or documentation on how to do this, it would be greatly appreciated!
It's not clear where your video is coming from, but exportAsync makes it sound like you're using AVAssetExportSession with an existing file or composition.
1. Capture your video (and audio?) frames:
   a. if from an existing composition or file, with an AVAssetReader
   b. if from the camera, with an AVCaptureSession etc.
2. Progressively write the frames to file using AVAssetWriter & AVAssetWriterInput (sketched below).
If you're expecting the writing to file to be interrupted for some reason, consider setting the AVAssetWriter's movieFragmentInterval property to something small.
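A minimal sketch of that writer setup for the camera case; the output path, the 1280x720 H.264 settings, and the commented-out delegate wiring are assumptions to adapt:

import AVFoundation

func makeWriter() throws -> (AVAssetWriter, AVAssetWriterInput) {
    let outputURL = URL(fileURLWithPath: NSTemporaryDirectory())
        .appendingPathComponent("capture.mov")
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    // Flush a movie fragment every second so an interruption
    // loses at most about one second of footage.
    writer.movieFragmentInterval = CMTime(seconds: 1, preferredTimescale: 600)

    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1280,
        AVVideoHeightKey: 720
    ])
    input.expectsMediaDataInRealTime = true  // important for live capture
    writer.add(input)
    return (writer, input)
}

// In your AVCaptureVideoDataOutput delegate, start the session at the
// first frame's PTS, then append frames as they arrive:
// writer.startWriting()
// writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(firstBuffer))
// if input.isReadyForMoreMediaData { input.append(sampleBuffer) }
// ...and when recording ends:
// input.markAsFinished()
// writer.finishWriting { /* movie is complete at outputURL */ }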

AVAudioPlayerNode - Get Player State?

In an iOS project I am using AVAudioPlayerNode in conjunction with AVAudioEngine and an AVAudioUnitTimePitch. Everything works peachy. However, I was wondering if there is a way to figure out the player's current state (e.g. isPlaying, isPaused), or at least the playback position.
While AVAudioPlayer at least allows you to get the currentTime property, I could not yet figure out how to get that information with AVAudioPlayerNode. I tried playing around with the nodeTimeForPlayerTime and playerTimeForNodeTime methods described in the Swift documentation, but I couldn't make any progress.
Any help would be highly appreciated.
Since AVAudioPlayerNode is designed around an audio stream, it doesn't necessarily keep track of the time within a particular file. However, it does keep a running total of how long it's been playing all audio. This timer doesn't reset with each file; to change it, you must explicitly tell it where you want to start counting from.
So to find out how long the player has been playing, you must do the following:
let seconds = Double(player.lastRenderTime?.sampleTime ?? 0) / file.fileFormat.sampleRate
Now in order to get the timer to reset after each file, you must explicitly reset the player's current time. To do this, use the player.playAtTime: method.
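For reference, a small sketch using the playerTimeForNodeTime conversion mentioned in the question, which maps the node's render clock into the player's own timeline (it returns 0 before any rendering has happened):

import AVFoundation

extension AVAudioPlayerNode {
    // Seconds the player has been playing, derived from the render clock.
    var currentSeconds: TimeInterval {
        guard let nodeTime = lastRenderTime,
              let playerTime = playerTime(forNodeTime: nodeTime) else { return 0 }
        return Double(playerTime.sampleTime) / playerTime.sampleRate
    }
}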
If you would like an example, check one out here: https://github.com/danielmj/AEAudioPlayer

iPhone HTML5 video - how to start from a different time

What is the correct way to begin playback of a video from a specific time?
Currently, the approach we use is to check at an interval whether it's possible to seek via currentTime, and then seek. The problem with this is that when the fullscreen video view pops up, it plays from the beginning for up to a second before seeking.
I've tried events such as loadedmetadata and canplay, but those seem to happen too early.
Added information:
It seems the very best I can do is to set a timer that tries to set currentTime repeatedly as soon as play() is called; however, this is not immediate enough. The video loads from the beginning and, after about a second depending on the device, jumps. This is a problem for me as it provides an unsatisfactory experience for the user.
It seems like there can be no solution which does better, but I'm trying to see if there is either:
a) something clever/undocumented that I have missed which allows you either to seek before loading, or to otherwise indicate that the video needs to start not from 00:00 but from an arbitrary point
b) something clever which allows you to hide the video while it's playing and not display it until it has seeked (So you would see a longer delay on the phone before the fullscreen video window pops up, but it would start immediately where I need it to instead of seeking)
Do something like this:
var video = document.getElementById("video");
video.currentTime = starttimeoffset;
More information can be found on this page dedicated to video time offset how-tos.
For desktop Chrome/Safari, you can append #t=starttimeoffsetinseconds to your video src URL to make it start from a certain position.
For iOS devices, the best we can do is listen for the timeupdate event and do the seek in there. I guess this is the same as your original approach of using a timer.

Best way to play a sound with an attack / sustain (loop) / decay with AVAudioPlayer

I am having a problem finding resources on playing an attack (start of sound) / sustain (looping sound) / decay (ending of sound) sequence with no transition breaks. Are there any good libraries for handling this, or should I roll my own with AVAudioPlayer? Is AudioQueue a better place to look? I used to use SoundEngine.cpp, but that's been gone for a long while. Is CAF still the best format to use for it?
Thanks!
From your description, it sounds as if you're trying to write a software synthesizer. The only way that you could use AVAudioPlayer for something like this would be to compose the entire duration of a note as a single WAV file and then play the whole thing with AVAudioPlayer.
To create a note sound of arbitrary duration, one that begins playing in response to a user action (like tapping a button) and then continues playing until a second user action (like tapping a "stop" button or lifting the finger off the first button) begins the process of ramping the looped region's volume down to zero (the "release" part), you will need to use AudioQueue (AVAudioPlayer can be used to play audio constructed entirely in memory, but the entire playback has to be constructed before play begins, meaning that you cannot change what is being played in response to user actions [other than to stop playback]).
Here's another question/answer that shows how to use AudioQueue. AudioQueue calls a callback method whenever it needs to load up more data to play - you would have to implement all the code that loops and envelope-wraps the original WAV file data.
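To illustrate, here is a rough Swift sketch of such a callback; NoteState and its fields are hypothetical, and the decoded PCM samples are assumed to be mono Int16:

import AudioToolbox

// Hypothetical per-note state handed to the callback via inUserData.
final class NoteState {
    var samples: [Int16] = []   // decoded PCM from the source file
    var position = 0            // current read index
    var loopStart = 0           // first sample of the sustain loop
    var loopEnd = 0             // one past the last loop sample
    var sustaining = true       // false once the note is released
    var gain: Float = 1.0       // envelope value, ramped elsewhere
}

let outputCallback: AudioQueueOutputCallback = { userData, queue, buffer in
    let state = Unmanaged<NoteState>.fromOpaque(userData!).takeUnretainedValue()
    let frames = Int(buffer.pointee.mAudioDataBytesCapacity) / MemoryLayout<Int16>.size
    let dst = buffer.pointee.mAudioData.bindMemory(to: Int16.self, capacity: frames)
    for i in 0..<frames {
        // Wrap back into the sustain loop until the note is released.
        if state.sustaining && state.position >= state.loopEnd {
            state.position = state.loopStart
        }
        let sample = state.position < state.samples.count ? state.samples[state.position] : 0
        dst[i] = Int16(Float(sample) * state.gain)   // envelope-wrap the data
        state.position += 1
    }
    buffer.pointee.mAudioDataByteSize = UInt32(frames * MemoryLayout<Int16>.size)
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}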
Creating your own envelope generator is very simple. The tough part will be updating your program to use lower-level audio services in order to alter the signal directly.
To do this, you will need:
- the audio file's samples
- an AudioQueue (that's one approach; I am going with it because it was mentioned in the OP, and it is a relatively high-level API for a user-provided sample buffer)
- to provide a signal to the queue
- to determine whether your program is best served by realtime or pre-rendered processing

Realtime
- allows live variations
- manage your loop points
- manage your render position
- be able to determine the amplitude to apply based on the sample position range you are reading

or

Prerendered
- may require more memory
- requires less CPU
- apply the envelope to your copy of the sample buffer (see the sketch below)
- manage your render position
I also assume that you need only slow/simple transitions. If you want a crazy/fast LFO without aliasing, you will have a lot more work to do. This approach should not produce audible aliasing unless your changes are too abrupt:
Writing a simple envelope generator (EG) is easy; check out Apple's SinSynth for a very basic EG if you need a push in that direction.
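As an illustration of the pre-rendered route mentioned above, a minimal linear attack/release ramp over a Float sample buffer might look like this (the sample rate and ramp times are assumed values):

import Foundation

// Applies a linear attack at the start and a linear release at the end
// of a buffer of Float samples; everything in between plays at full gain.
func applyEnvelope(to samples: inout [Float],
                   sampleRate: Double = 44_100,
                   attackTime: Double = 0.01,
                   releaseTime: Double = 0.05) {
    let attack = max(1, Int(attackTime * sampleRate))
    let release = max(1, Int(releaseTime * sampleRate))
    for i in samples.indices {
        var gain: Float = 1.0                               // sustain level
        if i < attack {
            gain = Float(i) / Float(attack)                 // ramp up
        }
        if i >= samples.count - release {
            gain = min(gain, Float(samples.count - i) / Float(release)) // ramp down
        }
        samples[i] *= gain
    }
}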

Set seek time in AVQueuePlayer on iPhone

I am developing an application in which I want to play two videos at the same time. That is not possible using MPMoviePlayer, so I have used AVQueuePlayer to play the videos. I have succeeded in playing video, but the problem is jumping to a particular time. For that there is the method seekToTime:, which takes a variable of the CMTime datatype.
I am able to jump to times of 1, 2, 3 seconds etc. My problem is that I want to jump to times of 1.2, 1.3, 1.4 seconds etc., but I am not able to move the video to those times.
If anyone knows the solution to this problem, please help me solve it.
The seekToTime: method, however, is tuned for performance rather than precision. If you need to move the playhead precisely, use seekToTime:toleranceBefore:toleranceAfter: instead (see the class reference).
toleranceBefore: the accuracy allowed before the requested time, i.e. the seek may land as early as [time - toleranceBefore].
toleranceAfter: the accuracy allowed after the requested time, i.e. the seek may land as late as [time + toleranceAfter].
Let's say both parameters represent the margin of inaccuracy around the requested time.
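For example, a zero-tolerance seek to 1.2 seconds might look like this in Swift (player stands in for your existing AVQueuePlayer):

import AVFoundation

let player = AVQueuePlayer()   // stand-in for your existing player
let target = CMTime(seconds: 1.2, preferredTimescale: 600)
player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)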
I have seen other questions on SO use self.player.currentItem.asset.duration to get the duration.
Good Luck.