I need to convert a video into PNG images. I did that using ffmpeg, but it is taking a lot of time, and I need to reduce the conversion time. I searched a lot, but all I found as a solution was the command "ffmpeg -i video.mpg image%d.jpg". Please teach me how to use commands like this.
1. shoot and save the video with AVCaptureSession + AVCaptureMovieFileOutput
2. use AVAssetReader to extract the individual frames from the video as BGRA CVImageBufferRefs
3. save as PNG: CVImageBufferRef -> UIImage -> UIImagePNGRepresentation
This should be faster than ffmpeg because step 2 is hardware accelerated, and it also has the benefit of letting you discard a cumbersome LGPL'd 3rd-party library.
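For step 3, a minimal sketch of getting from a BGRA CVImageBufferRef to a PNG on disk might look like the following (imageBuffer and outputPath are assumed to exist; error handling omitted). Keep in mind that UIImagePNGRepresentation itself is not hardware accelerated, so PNG encoding will still take its share of the time.

CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
void  *baseAddress  = CVPixelBufferGetBaseAddress(imageBuffer);
size_t width        = CVPixelBufferGetWidth(imageBuffer);
size_t height       = CVPixelBufferGetHeight(imageBuffer);
size_t bytesPerRow  = CVPixelBufferGetBytesPerRow(imageBuffer);

// Wrap the BGRA bytes in a bitmap context and turn it into a UIImage
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
    colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage];

// Step 3: UIImage -> PNG data -> file
[UIImagePNGRepresentation(image) writeToFile:outputPath atomically:YES];

CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);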
Enjoy!
With ffmpeg you can split a video frame by frame (the command you found, ffmpeg -i video.mpg image%d.jpg, simply writes every frame of video.mpg out to a numbered JPEG file), and you can also mix audio with video. Also check this
I am working on a project in which I am receiving raw frames from some input video devices. I am trying to write those frames to a video file using the FFmpeg library.
I have no control over the frame rate I am getting from my input sources, and this frame rate also varies at run time.
Now my problem is how to keep the recorded video in sync with the incoming video. Depending on the frame rate I set in FFmpeg versus the actual frame rate I am receiving, playback of the recorded video is either faster or slower than the input video.
I tried to add timestamps (as numOfFrames) to the encoded video as per the following link:
ffmpeg speed encoding problem
but that didn't help.
Please tell me a way to synchronize both. This is my first time with FFmpeg or any multimedia library, so any examples will be highly appreciated.
I am using the DirectShow ISampleGrabber interface to capture those frames.
Thank You
So finally I figured out how to do this. Here is how.
First, I was taking frames from the PREVIEW pin of the source filter, which does not attach timestamps to the frames, so one should take frames from the capture pin of the source filter instead. Then, in the SampleCB callback function, we can get the time using IMediaSample::GetTime(). However, this function returns time in units of 100 ns, whereas FFmpeg requires it in units of 1/time_base, where time_base here is the desired frame rate.
So the DirectShow timestamp needs to be converted into FFmpeg units first; then we can set it in FFmpeg's AVFrame::pts field. One more thing to consider is that the first frame of the video should have a timestamp of 0 in FFmpeg, so that needs to be taken care of when converting from the DirectShow timestamp to the FFmpeg one.
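For illustration, the conversion could look roughly like this (a sketch, not my exact code; sample_time and first_sample_time come from IMediaSample::GetTime(), and time_base is the one set on the codec context):

#include <stdint.h>
#include <libavutil/mathematics.h>   // av_rescale_q, AVRational

// Convert a DirectShow sample time (100 ns units) into an FFmpeg pts expressed
// in units of the encoder's time_base. first_sample_time is the DirectShow time
// of the very first captured frame, so that the first frame ends up with pts 0.
static int64_t dshow_time_to_pts(int64_t sample_time,
                                 int64_t first_sample_time,
                                 AVRational time_base)
{
    int64_t relative = sample_time - first_sample_time;    // still in 100 ns units
    AVRational dshow_base = { 1, 10000000 };                // 100 ns = 1/10,000,000 s
    return av_rescale_q(relative, dshow_base, time_base);  // -> units of time_base
}

// e.g. before encoding each frame:
//   frame->pts = dshow_time_to_pts(sampleStart, firstSampleStart, codecCtx->time_base);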
Thank You
What I'm doing:
I need to play audio and video files that are not supported by Apple on iPhone/iPad, for example mkv/mka files, which may contain several audio channels.
I'm using libffmpeg to find the audio and video streams in the media file.
Video is being decoded with avcodec_decode_video2 and audio with avcodec_decode_audio3
The return values for each function are as follows:
avcodec_decode_video2 - fills in an AVFrame structure which encapsulates information about the decoded video frame from the packet; specifically, it has a data field which is a pointer to the picture/channel planes.
avcodec_decode_audio3 - returns samples of type int16_t *, which I guess is the raw audio data
So basically I've done all this and am successfully decoding the media content.
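For context, the decode calls look roughly like this in my code (a simplified sketch with renamed variables; stream setup, the av_read_frame loop and error handling are omitted):

AVPacket packet;   // filled by av_read_frame(formatCtx, &packet)

if (packet.stream_index == videoStreamIndex) {
    AVFrame *frame = avcodec_alloc_frame();
    int gotPicture = 0;
    avcodec_decode_video2(videoCodecCtx, frame, &gotPicture, &packet);
    if (gotPicture) {
        // frame->data[0..2] hold the picture/channel planes,
        // frame->linesize[i] is the stride of each plane
    }
    av_free(frame);
} else if (packet.stream_index == audioStreamIndex) {
    int16_t *samples = (int16_t *)av_malloc(AVCODEC_MAX_AUDIO_FRAME_SIZE);
    int sampleBytes = AVCODEC_MAX_AUDIO_FRAME_SIZE;   // in: buffer size, out: bytes decoded
    avcodec_decode_audio3(audioCodecCtx, samples, &sampleBytes, &packet);
    // samples now holds sampleBytes bytes of raw interleaved PCM
    av_free(samples);
}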
What I have to do:
I have to play the audio and video accordingly using Apple's services. The playback I need to perform should support mixing of audio channels while playing video; i.e., say an mkv file contains two audio channels and a video channel. So I would like to know which service would be the appropriate choice for me. My research showed that the AudioQueue service might be useful for audio playback, and probably AVFoundation for video.
Please help me find the right technology for my case, i.e. video playback + audio playback with possible audio channel mixing.
You are on the right path. If you are only playing audio (not recording at all) then I would use AudioQueues; it will do the mixing for you. If you are recording then you should use AudioUnits. Take a look at the MixerHost example project from Apple. For video I recommend using OpenGL. Assuming the image buffer is in YUV420, you can render this with a simple two-pass shader setup. I do believe there is an Apple example project showing how to do this. In any case you could render any pixel format using OpenGL and a shader to convert the pixel format to RGBA. Hope this helps.
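Not the Apple sample, just a sketch of the idea: a fragment shader that converts planar YUV420 to RGB, embedded here as an Objective-C string constant. It assumes the Y, U and V planes have been uploaded as three single-channel (GL_LUMINANCE) textures, so sampling .r gives the plane value.

// Rough BT.601-style YUV -> RGB conversion; exact coefficients depend on your source
static NSString *const kYUV420FragmentShader =
    @"varying highp vec2 vTexCoord;\n"
     "uniform sampler2D yTexture;\n"
     "uniform sampler2D uTexture;\n"
     "uniform sampler2D vTexture;\n"
     "void main() {\n"
     "    highp float y = texture2D(yTexture, vTexCoord).r;\n"
     "    highp float u = texture2D(uTexture, vTexCoord).r - 0.5;\n"
     "    highp float v = texture2D(vTexture, vTexCoord).r - 0.5;\n"
     "    gl_FragColor = vec4(y + 1.402 * v,\n"
     "                        y - 0.344 * u - 0.714 * v,\n"
     "                        y + 1.772 * u,\n"
     "                        1.0);\n"
     "}";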
I want to display a video frame buffer on an OpenGL ES texture.
I have downloaded and read the GLVideoFrame sample from Apple.
It's great code, but I don't understand how to modify it to use a movie file instead of the video device.
You can use AVAssetReader to read frames from a file.
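A minimal sketch of that, assuming movieURL points at your file and you want BGRA frames (error handling omitted):

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

NSDictionary *settings = [NSDictionary dictionaryWithObject:
    [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                               outputSettings:settings];

AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:nil];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [output copyNextSampleBuffer])) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // upload imageBuffer's BGRA bytes to your GL texture (e.g. with glTexImage2D)
    CFRelease(sampleBuffer);
}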
I am using an AVCaptureSession with the preset AVCaptureSessionPresetMedium to capture video, and I am applying an effect to this video with OpenGL using shaders.
I use an AVAssetWriter to write the video to an mp4 file. The problem is that the resulting video is slow, especially when I add the audio output.
This is how my code works:

In the - (void)captureOutput:(AVCaptureOutput *)captureOutput... callback I apply the OpenGL filter to the captured frames.

Then I check whether the capture output is video or audio. If it's video, I use glReadPixels to create a CVPixelBufferRef that I send to an AVAssetWriterInputPixelBufferAdaptor to write it.

If it's audio, I write the CMSampleBufferRef directly.
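Roughly, the video branch looks like this (simplified, with variable names changed):

CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// glReadPixels forces a CPU/GPU sync, and this assumes bytesPerRow == width * 4
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddress(pixelBuffer));
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

if (videoWriterInput.readyForMoreMediaData) {
    [pixelBufferAdaptor appendPixelBuffer:pixelBuffer
                     withPresentationTime:presentationTime];
}
CVPixelBufferRelease(pixelBuffer);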
Can someone tell me what's wrong with my approach, or which part is likely making the resulting video slow?
I have an iOS app, and I want to record some of its visual output into a video. It looks like the way to create a video on iOS is to use AVMutableComposition and feed AVAssets to it via insertTimeRange.
All the documentation and examples that I can find only add video and audio assets to an AVMutableComposition. Is there a way to add image data to it (i.e. add an image for each frame of the video)? I can get this image data as straight RGB, PNG, JPG, UIImage, or whatever is easiest to feed to AV Foundation (if it's even possible).
If it's not possible to feed images into an AVMutableComposition for the video frames, is there another way to generate an .mp4 file from frames on iOS?
To generate movies from frames you can use AVAssetWriter; here is a question here on SO that sort of covers that: question
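A bare-bones sketch of that route (the output URL, size, frame rate and the pixelBufferFromImage: helper are placeholders; error handling and the asynchronous readyForMoreMediaData handling are omitted):

AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                 fileType:AVFileTypeMPEG4
                                                    error:nil];
NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                            AVVideoWidthKey  : @640,
                            AVVideoHeightKey : @480 };
AVAssetWriterInput *input =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:settings];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                   sourcePixelBufferAttributes:nil];
[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

// Append one CVPixelBufferRef per frame; pixelBufferFromImage: stands in for
// whatever converts your UIImage/RGB data into a pixel buffer.
for (NSUInteger i = 0; i < frameCount; i++) {
    CVPixelBufferRef buffer = [self pixelBufferFromImage:[images objectAtIndex:i]];
    if (input.readyForMoreMediaData) {   // real code should use requestMediaDataWhenReadyOnQueue:usingBlock:
        [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(i, 30)];   // 30 fps assumed
    }
    CVPixelBufferRelease(buffer);
}
[input markAsFinished];
[writer finishWriting];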