ffmpeg-x264: encode BGRA to AVFrame (ffmpeg) and vice versa on iOS (iPhone)

I am working on video processing in iOS (iPhone/iPod/iPad) using Objective-C. I am using the AVFoundation framework to capture video, and I want to encode/decode those video frames using ffmpeg with libx264. I have compiled the ffmpeg-x264 libraries for iOS, and I get kCVPixelFormatType_32BGRA frames from AVFoundation.
My problems are:
1. How do I convert kCVPixelFormatType_32BGRA to an AVFrame for encoding with avcodec_encode_video?
2. How do I convert an AVFrame back to kCVPixelFormatType_32BGRA on the decode side, after avcodec_decode_video2?
Please help me get started with this process, or point me to a working tutorial. Thanks in advance.

If you're trying to use FFmpeg, you'll need to request kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange rather than kCVPixelFormatType_32BGRA, and then you can pack it into an AVFrame. You'll probably also want to convert what you're getting from the iOS camera (NV12) to YUV420P so the stream can be decoded on devices that aren't iOS. If you are only targeting iOS devices and that's all you care about, you can skip this side of the color conversion and just pack it into the AVFrame.
Since you're already in a YUV format, you can just use CVPixelBufferGetBaseAddressOfPlane(buf, 0) and encode starting from that address.
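For the NV12-to-YUV420P repacking mentioned above, the Y plane is identical in both layouts; only the second plane changes, from interleaved UV to separate U and V planes. A minimal sketch in plain C, assuming a tightly packed 8-bit buffer (a real CVPixelBuffer has a per-plane stride you must honor via CVPixelBufferGetBytesPerRowOfPlane):

```c
#include <stddef.h>

/* Split NV12's interleaved UV plane into the separate U and V planes
 * that YUV420P (I420) expects. The Y plane is identical in both layouts.
 * Assumes tightly packed rows (no stride padding) for clarity. */
static void nv12_uv_to_i420(const unsigned char *uv_interleaved,
                            unsigned char *u_plane,
                            unsigned char *v_plane,
                            size_t width, size_t height)
{
    size_t chroma_pixels = (width / 2) * (height / 2);
    for (size_t i = 0; i < chroma_pixels; i++) {
        u_plane[i] = uv_interleaved[2 * i];     /* U comes first in NV12 */
        v_plane[i] = uv_interleaved[2 * i + 1];
    }
}
```

On the encode side you would then point the AVFrame's data[1] and data[2] at the two output planes.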
Once you decode the image, you'll need to convert the colors back from YUV420P to BGRA. If you didn't do the NV12-to-YUV420P swap before encoding, convert from NV12 to BGRA instead.
Hope this helps a bit. You can find the proper color conversion algorithms online.
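As a concrete instance of the color conversion referred to above, here is the common BT.601 (video-range) integer approximation for one YUV sample to BGRA, in plain C. This is a per-pixel sketch for clarity; in practice sws_scale or the Accelerate framework does the same job over whole frames much faster:

```c
/* Clamp an intermediate result into the valid 8-bit range. */
static unsigned char clamp_u8(int v)
{
    return (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Convert one YUV (ITU-R BT.601, video range) sample to a BGRA pixel,
 * using the widely used fixed-point approximation. */
static void yuv_to_bgra(unsigned char y, unsigned char u, unsigned char v,
                        unsigned char bgra[4])
{
    int c = (int)y - 16;
    int d = (int)u - 128;
    int e = (int)v - 128;
    bgra[0] = clamp_u8((298 * c + 516 * d + 128) >> 8);            /* B */
    bgra[1] = clamp_u8((298 * c - 100 * d - 208 * e + 128) >> 8);  /* G */
    bgra[2] = clamp_u8((298 * c + 409 * e + 128) >> 8);            /* R */
    bgra[3] = 255;                                                 /* A */
}
```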

Related

Recording Audio and Video using AVFoundation frame by frame

How to record audio and video using AVFoundation frame by frame in iOS4?
The AVCamDemo you mention is close to what you need to do, and you should be able to use it as a reference. These are the classes you need in order to achieve what you are trying to do; all of them are part of AVFoundation:
AVCaptureVideoDataOutput and AVCaptureAudioDataOutput - use these classes to get raw samples from the video camera and the microphone.
Use AVAssetWriter and AVAssetWriterInput to encode the raw samples into a file - the following Mac OS X sample project shows how to use these classes (the sample should work for iOS too); however, it uses an AVAssetReader for input (it re-encodes a movie file) instead of the camera and microphone. In your case you can use the outputs mentioned above as the input and write what you want.
That should be all you need in order to achieve what you want to do...
Here's a link showing how to use AVCaptureVideoDataOutput.
Hope it helps.
If you are a registered developer, look at the videos from the 2011 WWDC (which you can find by searching in the developer portal). There are two sessions relating to AVFoundation. There was also some sample code from one of the WWDC sessions, which was extremely useful.

FFmpeg decoding H264

I am decoding an H264 stream using FFmpeg on the iPhone. I know the H264 stream is valid and the SPS/PPS are correct, since VLC, QuickTime, and Flash all decode the stream properly. The issue I am having on the iPhone is best shown by this picture.
It is as if the motion vectors are being drawn. This picture was snapped while there was a lot of motion in the image. If the scene is static then there are dots in the corners. This always occurs with predictive frames. The blocky colors are also an issue.
I have tried various build settings for FFmpeg, such as turning off optimizations, asm, NEON, and many other combinations. Nothing seems to alter the behavior of the decoder. I have also tried the "Works with HTML5", "Love", and "Peace" releases, as well as the latest Git sources. Is there maybe a setting I am missing, or have I inadvertently enabled some debug setting in the decoder?
Edit
I am using sws_scale to convert the image to RGBA. I have tried various different pixel formats with the same results.
sws_scale(convertCtx, (const uint8_t**)srcFrame->data, srcFrame->linesize, 0, codecCtx->height, dstFrame->data, dstFrame->linesize);
I am using PIX_FMT_YUV420P as the source format when setting up my codec context.
What you're looking at is ffmpeg's motion vector visualization. Make sure that none of the following debug flags are set:
avctx->debug & FF_DEBUG_VIS_QP
avctx->debug & FF_DEBUG_VIS_MB_TYPE
avctx->debug_mv
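To be explicit about clearing those flags: the real FF_DEBUG_* constants and the debug/debug_mv fields come from libavcodec/avcodec.h. In the sketch below the numeric values are placeholders for illustration only, and the struct is a stand-in for the two AVCodecContext fields involved:

```c
/* Illustrative placeholder values -- in a real build these come from
 * libavcodec/avcodec.h; do not rely on the numbers used here. */
#ifndef FF_DEBUG_VIS_QP
#define FF_DEBUG_VIS_QP      0x2000
#endif
#ifndef FF_DEBUG_VIS_MB_TYPE
#define FF_DEBUG_VIS_MB_TYPE 0x4000
#endif

/* Minimal stand-in for the two AVCodecContext fields we care about. */
struct codec_ctx {
    int debug;     /* bitmask of FF_DEBUG_* flags */
    int debug_mv;  /* motion-vector visualization switch */
};

/* Clear every visualization flag so the decoder emits clean frames. */
static void disable_debug_visualization(struct codec_ctx *avctx)
{
    avctx->debug &= ~(FF_DEBUG_VIS_QP | FF_DEBUG_VIS_MB_TYPE);
    avctx->debug_mv = 0;
}
```

In real code this is simply the same two statements applied to your AVCodecContext before opening the codec.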
Also, keep in mind that decoding H264 video using the CPU will be MUCH slower and less power-efficient on iOS than using the hardware decoder.

How to use ffmpeg commands on iPhone

I need to convert a video into PNG images. I did that using ffmpeg, but it is taking a lot of time. To reduce the conversion time, I searched a lot, but all I found as a solution was the command "ffmpeg -i video.mpg image%d.jpg". Please teach me how to use commands like this.
shoot and save the video with AVCaptureSession + AVCaptureMovieFileOutput
use AVAssetReader to extract the individual frames from the video as BGRA CVImageBufferRefs
save as PNG: CVImageBufferRef -> UIImage -> UIImagePNGRepresentation
This should be faster than ffmpeg, because step 2 is hardware accelerated, and it also has the benefit of letting you discard a cumbersome LGPL'd 3rd-party library.
Enjoy!
With ffmpeg you can split a video frame by frame, and you can mix audio with video; also check this.

How can I detect the codec in MPMoviePlayerController in the iPhone SDK

When a video is made with the Sorenson codec, MPMoviePlayerController just plays the audio (and not the video); I want to show my custom error message at that point instead. How can I programmatically detect which codec is used by a particular file?
EDIT: I am not using QuickTime in my code, so that solution won't work.
Thanks
Check this documentation to understand the QuickTime file format:
http://developer.apple.com/library/mac/documentation/QuickTime/QTFF/qtff.pdf
The field you are looking for is the "vfmt" code containing the video fourcc (there is one for each video track in your file, so take care if your file contains several video tracks). The fourcc codes for the Sorenson codec are "SVQ1" and "SVQ3".
Now you'll have to write some code to parse the QT file to find the correct atom, extract the "vfmt" value, and compare it to SVQ1/SVQ3!
Apple provides some classes to easily parse QuickTime files, but they are only available on Mac OS, not on iOS!
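Since no Apple parsing classes are available on iOS, a hand-rolled atom walker is not much code. Here is a minimal sketch in plain C of the box layout (a 32-bit big-endian size, including the 8-byte header, followed by a 4-character type); a real parser would recurse from moov down through trak/mdia/minf/stbl to the sample-description atom, and also handle 64-bit sizes (size == 1), which this sketch omits:

```c
#include <string.h>
#include <stddef.h>

/* Read a 32-bit big-endian integer, as used throughout the QT format. */
static unsigned int read_be32(const unsigned char *p)
{
    return ((unsigned int)p[0] << 24) | ((unsigned int)p[1] << 16) |
           ((unsigned int)p[2] << 8)  |  (unsigned int)p[3];
}

/* Walk sibling atoms in buf and return the byte offset of the first atom
 * whose type matches fourcc, or -1 if it is absent or the data is
 * malformed. */
static long find_atom(const unsigned char *buf, size_t len, const char *fourcc)
{
    size_t off = 0;
    while (off + 8 <= len) {
        unsigned int size = read_be32(buf + off);
        if (memcmp(buf + off + 4, fourcc, 4) == 0)
            return (long)off;
        if (size < 8 || off + size > len)  /* malformed or truncated */
            break;
        off += size;
    }
    return -1;
}
```

Once you reach the sample description, the same read/compare technique lets you match the fourcc against "SVQ1" or "SVQ3".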

ffmpeg to extract a frame from the iPhone video camera

I am trying to extract an image frame from a video taken with the iPhone camera using ffmpeg, but it usually throws an EXC_BAD_ACCESS, and the stack trace points into method calls that are never made (I know my code didn't call them).
I am using the ffmpeg built from the instructions on the iFrameExtractor website. If anybody has done this successfully, please help me or, if possible, send me some code. I don't know why it crashes, although it works well on the simulator (into which I manually imported a video). My guess is that ffmpeg cannot decode the iPhone camera video correctly.
I already tried all three sets of library files (armv6, armv7, and i386), but it doesn't work. My iPhone is a 3GS, and my iPhone SDK is 3.1.3.
I think it was my fault in calling the VideoFrameExtractor. The example code doesn't work well; I had to change videoExtractor.currentImage to [videoExtractor currentImage].
Why would you use ffmpeg? You can extract frames using the AVFoundation framework in iOS4. It's faster and easier to use.
Can you paste in your stack trace and possibly the code you are using to read the frames?