I'm working on an app that should be able to mix multiple video and audio clips into a single movie file (.mp4).
At the moment the resulting movie has one video track, which is the concatenation of all imported video clips, and two audio tracks: one originating from the imported video clips and the other from the imported audio clips.
What I'm trying to do is merge those two audio tracks, because I would like to have only one audio track in the resulting movie file. While the layer-instruction technique can be used to merge multiple video tracks together, I was not able to find something similar for audio.
I read about the audio mix object, but I think it only mixes the audio from both tracks; instead, I would like to end up with only one.
Obviously I tried to add the video's audio and the standalone audio to the same track, but the resulting video stays black, which tells me that something has gone wrong in the asset-building process. Naturally, inserting different audio at the same time range is not a good thing :-)
Any suggestions?
OK, I think I've finally found the solution: using AVMutableAudioMix, the resulting movie file has only one audio track instead of two.
EDIT
Answering Justin's comment, here is the trick:
let audioMix = AVMutableAudioMix()

// Parameters for the audio track that originates from the video clips.
let vip = AVMutableAudioMixInputParameters(track: self.videoAudioTrack!)
vip.trackID = self.videoAudioTrack!.trackID
vip.setVolume(self.videoAudioMixerVolume, at: .zero)

// Parameters for the standalone audio track.
let aip = AVMutableAudioMixInputParameters(track: self.audioTrack!)
aip.trackID = self.audioTrack!.trackID
aip.setVolume(self.audioMixerVolume, at: .zero)

audioMix.inputParameters = [vip, aip]

// Attach the mix to the export session so both tracks are rendered as one.
easset.audioMix = audioMix
Where videoAudioTrack is the audio track of the video clip, whereas audioTrack is another plain audio track, and easset is the AVAssetExportSession object.
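For completeness, a minimal sketch of the export step under the same assumptions (outputURL is a placeholder; easset is the AVAssetExportSession created elsewhere):

easset.outputURL = outputURL
easset.outputFileType = .mp4
easset.exportAsynchronously {
    if easset.status == .completed {
        // The exported .mp4 now contains a single, mixed audio track.
    }
}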
I am building a project that plays audio with just_audio.
I have a list of audio items put in an AudioSource playlist, and I need to create a control dash (a play button and a progress bar) for each audio in the list, instead of a common one for the whole playlist as usual.
But I have no idea how to get the duration (or duration stream) of each audio in the list, nor how to ensure that every time I click the play button of a specific audio, only that audio is played.
At the moment I can only get the duration of the current item, using player.durationStream, and when I click the play button, only the current audio of sequenceState is played.
Please help, thanks a lot!
This feature isn't yet supported, but there is an open feature request for it:
https://github.com/ryanheise/just_audio/issues/141
However, this may not be what you actually want. Be aware that querying the duration directly from the media file can be inefficient in some cases, and so if you actually know the duration in advance, it may be better for the app to maintain its own local database of metadata which includes the durations:
If the audio is coming from a podcast, note that the podcast feed should report a medium-fidelity duration for each item in the XML file, measured in seconds(*).
If the audio was recorded by your app, you could save the duration metadata into your database at the same time the recording is made.
If the audio is on the device, you can query it using flutter_audio_query.
If the audio is an asset packaged with the app, then the durations are known by implication and can also be packaged with the app (i.e. hard coded).
(*) If the podcast feed omitted the duration field, you can still query it by extracting just enough of the audio file to read its duration and then disposing of the temporary player:
// setAudioSource returns the duration once the media has been loaded.
final disposablePlayer = AudioPlayer();
final duration = await disposablePlayer.setAudioSource(...);
await disposablePlayer.dispose();
AVFoundation has been quite a struggle for me because most of the examples and documentation out there are in Obj-C. As my title states, I would like to write to file in real time instead of calling exportAsync once the user has finished recording their video.
If anyone can offer some advice or documentation on how to do this it would be greatly appreciated!
It's not clear where your video is coming from, but exportAsync makes it sound like you're using AVAssetExportSession with an existing file or composition.
1. Capture your video (and audio?) frames:
   a. if from an existing composition or file, with AVAssetReader
   b. if from the camera, with AVCaptureSession etc.
2. Progressively write the frames to file using AVAssetWriter & AVAssetWriterInput.
If you're expecting the writing to file to be interrupted for some reason, consider setting the AVAssetWriter's movieFragmentInterval property to something small.
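Here is a minimal Swift sketch of step 2, assuming sample buffers are already arriving from a capture output (outputURL, the video settings, and firstBufferTimestamp are placeholders, not part of your question):

import AVFoundation

// Hypothetical progressive writer; names and settings are illustrative.
let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1280,
    AVVideoHeightKey: 720
])
writerInput.expectsMediaDataInRealTime = true // important for live capture
writer.add(writerInput)

// Keeps the file playable even if writing is interrupted mid-recording.
writer.movieFragmentInterval = CMTime(seconds: 1, preferredTimescale: 600)

writer.startWriting()
writer.startSession(atSourceTime: firstBufferTimestamp)

// Call this for each buffer, e.g. from captureOutput(_:didOutput:from:).
func append(_ sampleBuffer: CMSampleBuffer) {
    if writerInput.isReadyForMoreMediaData {
        writerInput.append(sampleBuffer)
    }
}

// When the user stops recording:
// writerInput.markAsFinished()
// writer.finishWriting { /* file at outputURL is now complete */ }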
What I'm doing:
I need to play audio and video files that are not natively supported by Apple on iPhone/iPad, for example mkv/mka files, which may contain several audio channels.
I'm using libffmpeg to find the audio and video streams in the media file.
Video is decoded with avcodec_decode_video2 and audio with avcodec_decode_audio3. The results of each function are the following:

avcodec_decode_video2 - fills an AVFrame structure which encapsulates information about the video frame from the packet; specifically, it has a data field which is a pointer to the picture/channel planes.
avcodec_decode_audio3 - returns samples of type int16_t *, which I guess is the raw audio data.
So basically I've done all this and am successfully decoding the media content.
What I have to do:
I have to play the audio and video accordingly using Apple's services. The playback I need to perform should support mixing of audio channels while playing video; i.e. let's say an mkv file contains two audio channels and a video channel. So I would like to know which service would be the appropriate choice for me. My research showed that the AudioQueue service might be useful for audio playback, and probably AVFoundation for video.
Please help me find the right technology for my case, i.e. video playback + audio playback with possible audio channel mixing.
You are on the right path. If you are only playing audio (not recording at all) then I would use AudioQueues; they will do the mixing for you. If you are recording then you should use AudioUnits; take a look at the MixerHost example project from Apple. For video I recommend using OpenGL. Assuming the image buffer is in YUV420, you can render it with a simple two-pass shader setup. I believe there is an Apple example project showing how to do this. In any case, you could render any pixel format using OpenGL and a shader to convert the pixel format to RGBA. Hope this helps.
Is it possible to play some audio files while my movie is playing? I don't want it to interrupt or pause the movie; I just want the audio to play at the same time, at various predetermined points.
Many Thanks,
-Code
You shouldn't have any problems with it.
If you want to synchronize or delay these players, you should use the code provided in the AVAudioPlayer Class Reference, specifically the playAtTime: method.
Sadly, the MPMediaPlayback protocol doesn't provide the same method, so exact synchronization with video is a bit harder.
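For illustration, a minimal Swift sketch of the play(atTime:) approach, assuming a couple of short sound files bundled with the app (the file names are placeholders):

import AVFoundation

// Build one AVAudioPlayer per sound; prepareToPlay() preloads buffers
// so playback can start exactly on the shared deadline.
let players: [AVAudioPlayer] = try ["ding", "whoosh"].map { name in
    let url = Bundle.main.url(forResource: name, withExtension: "caf")!
    let player = try AVAudioPlayer(contentsOf: url)
    player.prepareToPlay()
    return player
}

// deviceCurrentTime is a clock shared by all AVAudioPlayer instances,
// so scheduling every player at the same deadline keeps them in sync.
let startTime = players[0].deviceCurrentTime + 0.5 // half a second from now
for player in players {
    player.play(atTime: startTime)
}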
EDIT: RoLYroLLs mentions using MPMoviePlayerLoadStateDidChangeNotification to achieve this here and this approach seems promising.
Use two or three AVAudioPlayers at the same time, and play the audio when you need it.
I have successfully composed an AVMutableComposition with multiple video clips and can view it and export it, and I would like to be able to transition between them using a cross-fade, so I want to use AVMutableVideoComposition. I can't find any examples on how to even arrange and play a couple AVAsset videos in succession. Does anyone have an example of how to add tracks to an AVMutableVideoComposition with the equivalent of AVMutableComposition's insertTimeRange, or how to set up a cross-fade?
[self.composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.avAsset.duration)
                          ofAsset:asset.avAsset
                           atTime:self.composition.frameDuration
                            error:nil];
I found an example called AVEditDemo from Apple's WWDC 2010 Sample Code.
https://developer.apple.com/library/ios/samplecode/AVCustomEdit/Introduction/Intro.html
There is a lot of detail in the sample, but I'll summarize: You need to use both AVMutableComposition and AVMutableVideoComposition. Add tracks individually to AVMutableComposition instead of with the simpler insertTimeRange, as it allows you to set overlapping times on the tracks. The tracks also need to be added to the AVMutableVideoComposition as AVMutableVideoCompositionLayerInstructions with an opacity ramp. Finally, to play back in an AVPlayer, you need to create an AVPlayerItem using both the AVMutableComposition and AVMutableVideoComposition.
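To make that concrete, here is a hedged Swift sketch of the overlapping-tracks-plus-opacity-ramp idea (assetA, assetB, and the one-second fade are placeholders, and a complete implementation would also need passthrough instructions for the non-overlapping ranges):

import AVFoundation

let fade = CMTime(seconds: 1, preferredTimescale: 600)
let composition = AVMutableComposition()
let trackA = composition.addMutableTrack(withMediaType: .video,
                                         preferredTrackID: kCMPersistentTrackID_Invalid)!
let trackB = composition.addMutableTrack(withMediaType: .video,
                                         preferredTrackID: kCMPersistentTrackID_Invalid)!

// Insert clip A at time zero, then start clip B before A ends so the
// two tracks overlap for the duration of the fade.
try trackA.insertTimeRange(CMTimeRange(start: .zero, duration: assetA.duration),
                           of: assetA.tracks(withMediaType: .video)[0], at: .zero)
let overlapStart = assetA.duration - fade
try trackB.insertTimeRange(CMTimeRange(start: .zero, duration: assetB.duration),
                           of: assetB.tracks(withMediaType: .video)[0], at: overlapStart)

// One instruction spanning both tracks: A fades out while B shows through.
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRange(start: .zero, duration: overlapStart + assetB.duration)
let layerA = AVMutableVideoCompositionLayerInstruction(assetTrack: trackA)
layerA.setOpacityRamp(fromStartOpacity: 1, toEndOpacity: 0,
                      timeRange: CMTimeRange(start: overlapStart, duration: fade))
let layerB = AVMutableVideoCompositionLayerInstruction(assetTrack: trackB)
instruction.layerInstructions = [layerA, layerB]

let videoComposition = AVMutableVideoComposition()
videoComposition.instructions = [instruction]
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
videoComposition.renderSize = trackA.naturalSize

// Playback needs both objects on the player item.
let item = AVPlayerItem(asset: composition)
item.videoComposition = videoComposition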
It seems like going each level deeper in the API – in this case from MPMoviePlayer with an asset, to AVPlayer with an AVComposition, and finally to an AVVideoComposition – increases the necessary coding exponentially.