I have an iOS app whose visual output I want to record into a video. It looks like the way to create a video on iOS is to use AVMutableComposition and feed AVAssets to it via insertTimeRange.
All the documentation and examples that I can find only add video and audio assets to an AVMutableComposition. Is there a way to add image data to it (i.e. add an image for each frame of the video)? I can get this image data as straight RGB, PNG, JPG, UIImage, or whatever is easiest to feed to AV Foundation (if it's even possible).
If it's not possible to feed images into an AVMutableComposition for the video frames, is there another way to generate an .mp4 file from frames on iOS?
To generate movies from frames you can use AVAssetWriter; here is a question on SO that sort of covers that: question
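A minimal sketch of that approach, assuming you already have your frames as CVPixelBuffers and a constant frame rate (the function name `writeFrames` and the synchronous readiness loop are illustrative, not the only way to drive the writer):

```swift
import Foundation
import AVFoundation
import CoreVideo

// Hypothetical helper: writes a sequence of pixel buffers to an .mp4 file.
func writeFrames(to url: URL, size: CGSize, fps: Int32,
                 frames: [CVPixelBuffer]) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: size.width,
        AVVideoHeightKey: size.height
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input, sourcePixelBufferAttributes: nil)
    writer.add(input)

    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for (i, buffer) in frames.enumerated() {
        // Busy-wait for simplicity; a real app would use
        // requestMediaDataWhenReady(on:using:) with a dispatch queue.
        while !input.isReadyForMoreMediaData { usleep(10_000) }
        let time = CMTime(value: CMTimeValue(i), timescale: fps)
        _ = adaptor.append(buffer, withPresentationTime: time)
    }

    input.markAsFinished()
    writer.finishWriting { /* file at `url` is now complete */ }
}
```

If your frames start out as UIImage/PNG/JPG, you would first draw each one into a CVPixelBuffer (e.g. via a CGContext backed by the buffer's base address) before appending it.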
What I'm doing:
I need to play audio and video files that are not supported by Apple on iPhone/iPad, for example mkv/mka files, which may contain several audio channels.
I'm using libffmpeg to find audio and video streams in media file.
Video is being decoded with avcodec_decode_video2 and audio with avcodec_decode_audio3
The return values for each function are as follows:
avcodec_decode_video2 - produces an AVFrame structure which encapsulates information about the video frame from the packet; specifically, it has a data field which is a pointer to the picture/channel planes.
avcodec_decode_audio3 - produces samples of type int16_t *, which I guess is the raw audio data.
So basically I've done all this and am successfully decoding the media content.
What I have to do:
I have to play the audio and video accordingly using Apple's services. The playback I need to perform should support mixing of audio channels while playing video, i.e. say an mkv file contains two audio channels and a video channel. So I would like to know which service would be the appropriate choice for me. My research showed that the AudioQueue service might be useful for audio playback, and probably AVFoundation for video.
Please help me find the right technology for my case, i.e. video playback + audio playback with possible audio channel mixing.
You are on the right path. If you are only playing audio (not recording at all) then I would use Audio Queues; they will do the mixing for you. If you are recording then you should use Audio Units. Take a look at the MixerHost example project from Apple. For video I recommend using OpenGL. Assuming the image buffer is in YUV420, you can render it with a simple two-pass shader setup. I believe there is an Apple example project showing how to do this. In any case you could render any pixel format using OpenGL and a shader to convert the pixel format to RGBA. Hope this helps.
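For the YUV-to-RGB step, one possible fragment shader looks like the following (shown here as a single-pass, three-texture variant rather than the two-pass setup mentioned above; the uniform/varying names and BT.601 coefficients are illustrative assumptions):

```swift
// GLSL ES fragment shader, embedded as a Swift string, that converts
// planar YUV420 to RGB. Assumes the Y, Cb, and Cr planes have each been
// uploaded as a separate single-channel texture.
let yuvToRGBFragmentShader = """
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_textureY;   // luma plane
uniform sampler2D u_textureU;   // Cb plane
uniform sampler2D u_textureV;   // Cr plane

void main() {
    float y = texture2D(u_textureY, v_texCoord).r;
    float u = texture2D(u_textureU, v_texCoord).r - 0.5;
    float v = texture2D(u_textureV, v_texCoord).r - 0.5;
    // Approximate BT.601 conversion coefficients.
    float r = y + 1.402 * v;
    float g = y - 0.344 * u - 0.714 * v;
    float b = y + 1.772 * u;
    gl_FragColor = vec4(r, g, b, 1.0);
}
"""
```

Each decoded AVFrame's three data planes would be uploaded with glTexSubImage2D into the three textures before drawing a fullscreen quad with this shader.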
I am using the AVFoundation framework to get video camera frames in real time and then modifying those frames using an algorithm (which produces a new, modified image).
Now I want all the modified frames to be saved as a video to the iPhone library. I found a way to save a video for the input (original) frames using AVCaptureMovieFileOutput, but not for the modified frames.
Is there any way to save the modified frames to the iPhone library as a video?
UISaveVideoAtPathToSavedPhotosAlbum
Adds the movie at the specified path to the user’s Camera Roll album.
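Usage is essentially a one-liner once the finished movie file is on disk (`outputPath` is assumed to be the path your writer produced; the completion callback parameters are left nil here for brevity):

```swift
import UIKit

// After the movie file at `outputPath` has been fully written,
// copy it into the user's Camera Roll.
if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputPath) {
    UISaveVideoAtPathToSavedPhotosAlbum(outputPath, nil, nil, nil)
}
```

Note this only saves an existing movie file; to produce that file from modified frames in the first place, you still need something like AVAssetWriter (see below).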
I know that I can access raw video images from the iPhone's camera with AVCaptureVideoDataOutput. I also know that I can record video to a file with AVCaptureMovieFileOutput. But how can I first access the raw video images, manipulate them, and then write the manipulated ones into the video file? I've already seen apps in the App Store which do this, so it must be possible.
Ok, I now know that it's done with AVAssetWriter.
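A rough sketch of the capture-modify-write pipeline, assuming `writerInput`/`adaptor` belong to an AVAssetWriter whose session has already been started, and `modify` is your per-frame processing function (a hypothetical name):

```swift
import AVFoundation

// Receives camera frames from AVCaptureVideoDataOutput, runs them
// through `modify`, and appends the result to an AVAssetWriter.
class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let writerInput: AVAssetWriterInput
    let adaptor: AVAssetWriterInputPixelBufferAdaptor
    let modify: (CVPixelBuffer) -> CVPixelBuffer

    init(writerInput: AVAssetWriterInput,
         adaptor: AVAssetWriterInputPixelBufferAdaptor,
         modify: @escaping (CVPixelBuffer) -> CVPixelBuffer) {
        self.writerInput = writerInput
        self.adaptor = adaptor
        self.modify = modify
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
              writerInput.isReadyForMoreMediaData else { return }
        // Preserve the original capture timestamp for the written frame.
        let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        _ = adaptor.append(modify(pixelBuffer), withPresentationTime: time)
    }
}
```

An instance of this class would be set as the sample buffer delegate of the AVCaptureVideoDataOutput attached to your capture session.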
I have to make an iPhone project that can process video data in realtime. This app has to be able to recognize the color of the object in the video frame. After looking for information relating to video processing in iOS, I found that I can use the AVFoundation framework to achieve this task, but I don't know which APIs or functions of AVFoundation are able to do this video processing task.
Can anyone suggest which function to use to get image frames or raw image data out of a video stream in real time? I'd appreciate it if you could give me some example code.
Thank you very much for helping me...
You can first of all make use of the AVAsset class by initializing it with your video file's URL.
You can then use an AVAssetReader object to obtain the media data of that asset.
This will help you obtain video frames, which you can read using an AVAssetReaderVideoCompositionOutput class object. Accessing RGB channel data from these frames can be done with the CGImage classes and their methods.
Hope this helps you to get started
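The steps above can be sketched like this, using AVAssetReaderTrackOutput to read a single video track as BGRA buffers (AVAssetReaderVideoCompositionOutput, mentioned above, is the variant for composed/multi-track output); error handling is elided:

```swift
import AVFoundation

// Sketch: decode every frame of a video file into BGRA pixel buffers.
func readFrames(from videoURL: URL) throws {
    let asset = AVAsset(url: videoURL)
    let reader = try AVAssetReader(asset: asset)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    // Ask for decoded frames in 32-bit BGRA so the bytes are easy to inspect.
    let settings: [String: Any] =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
    reader.add(output)
    reader.startReading()

    while let sampleBuffer = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            // Lock the base address and inspect the BGRA bytes here,
            // or wrap the buffer in a CIImage/CGImage for color analysis.
            _ = pixelBuffer
        }
    }
}
```

For realtime camera input (rather than a file), the equivalent entry point is AVCaptureVideoDataOutput's sample buffer delegate, which hands you the same kind of CMSampleBuffer per frame.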
I want to display a video frame buffer on an OpenGL ES texture.
I have downloaded and read the GLVideoFrame sample from Apple.
It's great code, but I don't understand how it's possible to modify this code to use a movie file instead of the video device.
You can use AVAssetReader to read frames from a file.
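Once AVAssetReader hands you a BGRA pixel buffer per frame, uploading it to a texture might look roughly like this (a sketch assuming a BGRA-capable GL ES context such as iOS provides via its BGRA texture extension, and a `textureID` you created and configured elsewhere):

```swift
import CoreVideo
import OpenGLES

// Upload one decoded BGRA frame into an existing OpenGL ES texture.
func upload(pixelBuffer: CVPixelBuffer, to textureID: GLuint) {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    glBindTexture(GLenum(GL_TEXTURE_2D), textureID)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA,
                 GLsizei(width), GLsizei(height), 0,
                 GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE),
                 CVPixelBufferGetBaseAddress(pixelBuffer))
    CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
}
```

In other words, you replace the camera callback in GLVideoFrame with a loop that pulls sample buffers from the asset reader's output and uploads each one before drawing.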