I can use AVMutableCompositionTrack to merge two video tracks together by simply calling insertTimeRange:ofTrack:atTime:error: and giving it an AVAssetTrack.
However, I have some video tracks and a few still images (jpg, png, etc) that I'd like to insert as still frames (with each image having a duration of a few seconds). I'm completely lost on how to insert the still images.
Is there no way to convert still images into an AVAssetTrack and insert them into an AVMutableCompositionTrack? Am I going to be forced into using the lower-level/more cumbersome (albeit more powerful) AVAssetReader/AVAssetWriter? If there was some way to add an image to an AVMutableCompositionTrack (and specify its duration) I'd really like to know how.
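For reference, the AVAssetWriter route I'm trying to avoid looks roughly like this, a minimal and untested sketch: render the still image into a pixel buffer, write it out as a short movie of the desired duration, and then insert that movie into the composition like any other asset. writeStill and all its parameter names are my own.

import AVFoundation
import CoreGraphics

func writeStill(image: CGImage, seconds: Double, to url: URL) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
    let settings: [String: Any] = [AVVideoCodecKey: AVVideoCodecType.h264,
                                   AVVideoWidthKey: image.width,
                                   AVVideoHeightKey: image.height]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    guard writer.startWriting() else { return }
    writer.startSession(atSourceTime: .zero)

    // Draw the CGImage into a CVPixelBuffer.
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, image.width, image.height,
                        kCVPixelFormatType_32ARGB, nil, &pixelBuffer)
    guard let buffer = pixelBuffer else { return }
    CVPixelBufferLockBaseAddress(buffer, [])
    let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                            width: image.width, height: image.height,
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    context?.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    CVPixelBufferUnlockBaseAddress(buffer, [])

    // Two frames are enough for a still: one at t = 0 and one at the end.
    // (Real code should wait for input.isReadyForMoreMediaData.)
    _ = adaptor.append(buffer, withPresentationTime: .zero)
    _ = adaptor.append(buffer, withPresentationTime:
                        CMTime(seconds: seconds, preferredTimescale: 600))
    input.markAsFinished()
    writer.finishWriting { }
}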
Related
I would like to generate video content from text/images at a given content size. I have tried many options, from FFmpeg to H.264 encoders. I want to build a solution that takes one image, or an array of images, and generates a video stream, e.g. RTMP. The image also changes constantly: text is added, colors change, and so on. I have tried this with Golang.
I'm dealing with an app that should be able to mix multiple video and audio clips in a single movie file (.mp4).
At the moment the resulting movie has one video track, which is the concatenation of all imported video clips, and two audio tracks: one originating from the imported video clips and the other from the imported audio clips.
What I'm trying to do is merge those two audio tracks, because I would like to have only one audio track in the resulting movie file. While the layer-instruction technique lets you merge multiple video tracks together, I was not able to find something similar for audio.
I read about the audio mix object, but I think it only mixes the audio of the two tracks; instead I would like to end up with only one.
Obviously I tried adding the video's audio and the plain audio clips to the same track, but the resulting video stays black, which tells me that something has gone wrong in the asset-building process. Naturally, inserting different audio clips at the same time range is not a good thing :-)
Any suggestions?
OK, I finally found the solution. Using AVMutableAudioMix, the resulting movie file really does have only one audio track instead of two.
EDIT
Answering Justin's comment, here is the trick:
let audioMix = AVMutableAudioMix()
// Parameters for the audio embedded in the video clip.
let vip = AVMutableAudioMixInputParameters(track: self.videoAudioTrack!)
vip.trackID = self.videoAudioTrack!.trackID
vip.setVolume(self.videoAudioMixerVolume, at: .zero)
// Parameters for the separate audio track.
let aip = AVMutableAudioMixInputParameters(track: self.audioTrack!)
aip.trackID = self.audioTrack!.trackID
aip.setVolume(self.audioMixerVolume, at: .zero)
audioMix.inputParameters = [vip, aip]
// Attach the mix to the export session.
easset.audioMix = audioMix
Where videoAudioTrack is the audio track of the video clip, whereas audioTrack is another plain audio track. easset is the AVAssetExportSession object.
I see there is an app called iFile with a pause feature while recording video. How do they do this? I tried using the AVMutableComposition classes: when the user pauses, I cut a new video and then merge the videos at the end, but the processing time to merge the videos is not desirable.
Can someone give me other good ideas on how to do this? I noticed the iFile way is very seamless.
Thanks
Here are some ideas. I have not tried either of these.
If you are using an AVAssetWriter to write your captured frames, then you can simply drop the frames while paused. You will need to keep track of the last presentation time stamp (PTS) that was used. Then, when you start recording again, you need to calculate the next frame's PTS based on this last time stamp. Doing this with audio as well might be a little trickier.
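A rough Swift sketch of that first idea (untested; the names and the 30 fps assumption are mine). Buffers are dropped while paused; on resume, a running offset grows so the written timestamps stay contiguous:

import AVFoundation

var isPaused = false
var resumePending = false
var lastWrittenPTS = CMTime.invalid
var pauseOffset = CMTime.zero

func handle(_ sampleBuffer: CMSampleBuffer, writerInput: AVAssetWriterInput) {
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    if isPaused { resumePending = true; return }   // drop frames while paused
    if resumePending && lastWrittenPTS.isValid {
        // First buffer after un-pausing: place it one nominal frame after the
        // last written one and remember the accumulated offset.
        let oneFrame = CMTime(value: 1, timescale: 30)
        pauseOffset = CMTimeSubtract(pts, CMTimeAdd(lastWrittenPTS, oneFrame))
        resumePending = false
    }
    let newPTS = CMTimeSubtract(pts, pauseOffset)
    var timing = CMSampleTimingInfo(duration: CMSampleBufferGetDuration(sampleBuffer),
                                    presentationTimeStamp: newPTS,
                                    decodeTimeStamp: .invalid)
    var retimed: CMSampleBuffer?
    let status = CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault,
                                                       sampleBuffer: sampleBuffer,
                                                       sampleTimingEntryCount: 1,
                                                       sampleTimingArray: &timing,
                                                       sampleBufferOut: &retimed)
    if status == noErr, let retimed = retimed, writerInput.isReadyForMoreMediaData {
        if writerInput.append(retimed) {
            lastWrittenPTS = newPTS
        }
    }
}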
An alternate method would be to use empty edits. I am not sure how you would insert an empty edit in the middle of a track using AVAssetWriter. I know you can insert them at the beginning and end. Using AVMutableCompositionTrack you could use insertEmptyTimeRange: where the time range is constructed like
CMTime delta = CMTimeSubtract(new_sample_time, last_sample_time);
CMTimeRange range = CMTimeRangeMake(last_sample_time, delta);
Where new_sample_time is the time of the first sample after un-pausing, and last_sample_time is the time of the last sample before pausing. Again with audio this may be a little tricky as the buffer for audio generally contains 1024 samples. The CMTime returned by CMSampleBufferGetPresentationTimeStamp is the time of the first sample.
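The same empty-edit idea in Swift (a sketch; videoTrack, lastSampleTime and newSampleTime are assumed to already exist as the composition track and the two timestamps described above):

let delta = CMTimeSubtract(newSampleTime, lastSampleTime)
videoTrack.insertEmptyTimeRange(CMTimeRange(start: lastSampleTime, duration: delta))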
Hope this helps or leads you to a solution.
I have multiple AVAssets, and I create individual AVMutableCompositionTracks for each. I then create an AVMutableComposition and add each AVMutableCompositionTrack to it and then create an AVAssetExportSession, init with the AVMutableComposition and run the exporter. This allows me to create a single audio file made up of many overlapping audio sources.
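For context, that setup looks roughly like this (a trimmed-down sketch; assets, the 2-second stagger, and the preset are placeholders):

import AVFoundation

let composition = AVMutableComposition()
for (index, asset) in assets.enumerated() {
    guard let sourceTrack = asset.tracks(withMediaType: .audio).first,
          let compTrack = composition.addMutableTrack(withMediaType: .audio,
                                                      preferredTrackID: kCMPersistentTrackID_Invalid)
    else { continue }
    // Delay each source so the clips overlap instead of playing back-to-back.
    let start = CMTime(seconds: Double(index) * 2, preferredTimescale: 600)
    try? compTrack.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                   of: sourceTrack, at: start)
}
let exporter = AVAssetExportSession(asset: composition,
                                    presetName: AVAssetExportPresetAppleM4A)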
I can trim and delay each source audio file by setting the parameters when I insertTimeRange on each AVMutableCompositionTrack. What I can't figure out is how to fade in and out of each individual track. I can do a master fade on the export session by using setVolumeRampFromStartVolume via AVMutableAudioMixInputParameters, and I know how to do fades on AVPlayers using the same method, but I don't think AVMutableAudioMixInputParameters can be used on an AVMutableCompositionTrack, right?
So how can I add a fade to a AVMutableCompositionTrack?
Thanks!
AVMutableAudioMixInputParameters actually can be used with AVMutableCompositionTracks; I use them. The mix just isn't stored within the composition itself. Instead, you'll need to set the audioMix property of any AVPlayerItem or AVAssetExportSession you use.
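For example, something along these lines (a sketch, assuming a composition and exporter like the ones in the question; the 1-second fade length is arbitrary):

var params: [AVMutableAudioMixInputParameters] = []
for track in composition.tracks(withMediaType: .audio) {
    let p = AVMutableAudioMixInputParameters(track: track)
    let fade = CMTime(seconds: 1, preferredTimescale: 600)
    let range = track.timeRange
    // Fade in over the first second of the track...
    p.setVolumeRamp(fromStartVolume: 0, toEndVolume: 1,
                    timeRange: CMTimeRange(start: range.start, duration: fade))
    // ...and fade out over the last second.
    p.setVolumeRamp(fromStartVolume: 1, toEndVolume: 0,
                    timeRange: CMTimeRange(start: CMTimeSubtract(range.end, fade),
                                           duration: fade))
    params.append(p)
}
let audioMix = AVMutableAudioMix()
audioMix.inputParameters = params
exporter.audioMix = audioMix   // or playerItem.audioMix when previewing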
I have successfully composed an AVMutableComposition with multiple video clips and can view it and export it, and I would like to be able to transition between them using a cross-fade, so I want to use AVMutableVideoComposition. I can't find any examples on how to even arrange and play a couple AVAsset videos in succession. Does anyone have an example of how to add tracks to an AVMutableVideoComposition with the equivalent of AVMutableComposition's insertTimeRange, or how to set up a cross-fade?
[self.composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.avAsset.duration)
                          ofAsset:asset.avAsset
                           atTime:self.composition.duration
                            error:nil];
I found an example called AVEditDemo from Apple's WWDC 2010 Sample Code.
https://developer.apple.com/library/ios/samplecode/AVCustomEdit/Introduction/Intro.html
There is a lot of detail in the sample, but I'll summarize: You need to use both AVMutableComposition and AVMutableVideoComposition. Add tracks individually to AVMutableComposition instead of with the simpler insertTimeRange, as it allows you to set overlapping times on the tracks. The tracks also need to be added to the AVMutableVideoComposition as AVMutableVideoCompositionLayerInstructions with an opacity ramp. Finally, to play back in an AVPlayer, you need to create an AVPlayerItem using both the AVMutableComposition and AVMutableVideoComposition.
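Put together, a minimal two-clip cross-fade might look roughly like this (a sketch rather than the sample code itself; assetA, assetB and the 1-second overlap are my own, and the pass-through instructions needed for the non-overlapping ranges are omitted for brevity):

import AVFoundation

let composition = AVMutableComposition()
let overlap = CMTime(seconds: 1, preferredTimescale: 600)

// Two separate tracks so the clips can overlap in time.
let trackA = composition.addMutableTrack(withMediaType: .video,
                                         preferredTrackID: kCMPersistentTrackID_Invalid)!
let trackB = composition.addMutableTrack(withMediaType: .video,
                                         preferredTrackID: kCMPersistentTrackID_Invalid)!
try? trackA.insertTimeRange(CMTimeRange(start: .zero, duration: assetA.duration),
                            of: assetA.tracks(withMediaType: .video)[0], at: .zero)
// Start clip B one second before clip A ends.
let bStart = CMTimeSubtract(assetA.duration, overlap)
try? trackB.insertTimeRange(CMTimeRange(start: .zero, duration: assetB.duration),
                            of: assetB.tracks(withMediaType: .video)[0], at: bStart)

// One instruction covering the overlap, ramping A's opacity from 1 to 0
// so B shows through underneath.
let crossfade = AVMutableVideoCompositionInstruction()
crossfade.timeRange = CMTimeRange(start: bStart, duration: overlap)
let layerA = AVMutableVideoCompositionLayerInstruction(assetTrack: trackA)
layerA.setOpacityRamp(fromStartOpacity: 1, toEndOpacity: 0,
                      timeRange: crossfade.timeRange)
let layerB = AVMutableVideoCompositionLayerInstruction(assetTrack: trackB)
crossfade.layerInstructions = [layerA, layerB]

let videoComposition = AVMutableVideoComposition()
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
videoComposition.renderSize = assetA.tracks(withMediaType: .video)[0].naturalSize
videoComposition.instructions = [crossfade]  // plus pass-through instructions

// For playback, the player item needs both objects.
let playerItem = AVPlayerItem(asset: composition)
playerItem.videoComposition = videoComposition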
It seems like each level deeper you go in the API – in this case from MPMoviePlayer with an asset, to AVPlayer with an AVComposition, and finally to an AVVideoComposition – the necessary coding grows exponentially.