FFmpeg decoding H264 - iPhone

I am decoding an H264 stream using FFmpeg on the iPhone. I know the H264 stream is valid and the SPS/PPS are correct, as VLC, QuickTime, and Flash all decode the stream properly. The issue I am having on the iPhone is best shown by this picture.
It is as if the motion vectors are being drawn. This picture was snapped while there was a lot of motion in the image. If the scene is static then there are dots in the corners. This always occurs with predictive frames. The blocky colors are also an issue.
I have tried various build settings for FFmpeg, such as turning off optimizations, asm, NEON, and many other combinations. Nothing seems to alter the behavior of the decoder. I have also tried the "Works with HTML5", "Love", and "Peace" releases, and also the latest Git sources. Is there maybe a setting I am missing, or have I inadvertently enabled some debug setting in the decoder?
Edit
I am using sws_scale to convert the image to RGBA. I have tried various pixel formats, with the same results.
sws_scale(convertCtx, (const uint8_t**)srcFrame->data, srcFrame->linesize, 0, codecCtx->height, dstFrame->data, dstFrame->linesize);
I am using PIX_FMT_YUV420P as the source format when setting up my codec context.
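For reference, here is a rough sketch of how the conversion context is created (illustrative only; it assumes dstFrame's buffers have already been allocated for the RGBA destination, and reuses the names from the snippet above):

struct SwsContext *convertCtx = sws_getContext(
    codecCtx->width, codecCtx->height, codecCtx->pix_fmt,   // source: decoder output (YUV420P here)
    codecCtx->width, codecCtx->height, PIX_FMT_RGBA,        // destination: RGBA at the same size
    SWS_BILINEAR, NULL, NULL, NULL);                        // SWS_BILINEAR is an arbitrary scaler choice

sws_scale(convertCtx, (const uint8_t**)srcFrame->data, srcFrame->linesize,
          0, codecCtx->height, dstFrame->data, dstFrame->linesize);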

What you're looking at is ffmpeg's motion vector visualization. Make sure that none of the following debug flags are set:
avctx->debug & FF_DEBUG_VIS_QP
avctx->debug & FF_DEBUG_VIS_MB_TYPE
avctx->debug_mv
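A quick way to rule these out (a sketch, assuming you have access to the codec context after opening the decoder):

// Clear any visualization/debug options on the codec context.
codecCtx->debug   &= ~(FF_DEBUG_VIS_QP | FF_DEBUG_VIS_MB_TYPE);
codecCtx->debug_mv = 0;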
Also, keep in mind that decoding H264 video using the CPU will be MUCH slower and less power-efficient on iOS than using the hardware decoder.

Related

Agora's custom video source example code gives error

I'm trying to set up a custom video source for a video stream in Agora for Unity, following the instructions from Agora's developer center here (and particularly, the example code at the bottom):
https://docs.agora.io/en/Video/custom_video_unity?platform=Unity
THIS CODE WORKS. I can successfully send a video stream and watch it on another device and it looks correct.
However, the Unity console is reporting an error on every single frame, saying:
D3D11 unsupported ReadPixels destination texture format (14)
Unity's documentation for Texture2D.ReadPixels says that it works on RGBA32, ARGB32 and RGB24 texture formats, but Agora's example is using a texture in BGRA32 format.
If I alter the example to set the texture to RGBA32 format instead, then the program still works, except the colors are wrong: red and blue are swapped (unsurprisingly).
I tried to adjust the expected texture on Agora's end by modifying this line of the example:
externalVideoFrame.format = ExternalVideoFrame.VIDEO_PIXEL_FORMAT.VIDEO_PIXEL_BGRA;
But there is no corresponding value for VIDEO_PIXEL_RGBA. The available options are VIDEO_PIXEL_UNKNOWN, VIDEO_PIXEL_I420, VIDEO_PIXEL_BGRA, VIDEO_PIXEL_NV12, and VIDEO_PIXEL_I422.
So my app is functioning correctly, but I'm drowning in error messages of dubious significance, which seems like it will cause headaches for development and debugging down the road.
What can I do?
For the inverted color issue, make sure you have the same encoding format on the receiver side. If you are using the SDK script VideoSurface.cs, change the line where it instantiates the Texture (around line 172) so that it reads:
nativeTexture = new Texture2D((int)defWidth, (int)defHeight, TextureFormat.BGRA32, false);
(It was RGBA32 in the stock SDK code).
Update: This format issue has been resolved in version 3.0.1. If it hasn't been released in the Asset Store yet, you can grab the beta to try it out. Check with the Slack channel here: https://agoraiodev.slack.com/messages/unity-help-me

How to sync between recording and input live video stream?

I am working on a project in which I am receiving raw frames from some input video devices. I am trying to write those frames to a video file using the FFmpeg library.
I have no control over the frame rate I am getting from my input sources, and it also varies at run-time.
My problem is how to sync the recorded video with the incoming video. Depending on the frame rate I set in FFmpeg and the actual frame rate I am receiving, playback of the recorded video is either faster or slower than the input video.
I tried to add timestamps (as numOfFrames) to the encoded video as described in the following link, but that didn't help:
ffmpeg speed encoding problem
Please tell me a way to synchronize the two. This is my first time with FFmpeg or any multimedia library, so any examples will be highly appreciated.
I am using the DirectShow ISampleGrabber interface to capture the frames.
Thank You
So finally I figured out how to do this. Here is how.
First, I was taking the preview from the PREVIEW pin of the source filter, which does not give timestamps to frames, so one should take frames from the capture pin of the source filter instead. Then, in the SampleCB callback function, we can get the time using IMediaSample::GetTime(). This function returns the time in units of 100 ns, while FFmpeg requires it in units of 1/time_base, where time_base is the desired frame rate.
So the DirectShow timestamp needs to be converted to FFmpeg units first; then we can set the pts in FFmpeg's AVFrame::pts field. One more thing to consider: the first frame of the video should have a timestamp of 0 in FFmpeg, so that needs to be taken care of while converting from the DirectShow timestamp to the FFmpeg one.
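In case it helps someone else, the conversion boils down to something like this (a sketch; sampleStart is the value from IMediaSample::GetTime(), firstTimestamp is the time of the first captured frame so the pts starts at 0, and fps is the desired frame rate):

#include <libavutil/mathematics.h>   // for av_rescale()

// DirectShow sample times are REFERENCE_TIME values in 100 ns units.
// With an encoder time_base of 1/fps, one pts tick equals one frame interval.
int64_t dshowTime = sampleStart - firstTimestamp;    // zero-based, still in 100 ns units
frame->pts = av_rescale(dshowTime, fps, 10000000);   // convert to 1/time_base units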
Thank You

ffmpeg-x264 encode: BGRA to AVFrame (ffmpeg) and vice versa, for iOS

I am working on video processing in iOS (iPhone/iPod/iPad) using Objective-C. I am using the AVFoundation framework to capture video, and I want to encode/decode those video frames using ffmpeg-libx264. I have compiled the ffmpeg-x264 libraries for iOS. I get kCVPixelFormatType_32BGRA buffers from AVFoundation.
My problem is:
1. How do I convert kCVPixelFormatType_32BGRA to an AVFrame for encoding with avcodec_encode_video?
2. How do I convert an AVFrame back to kCVPixelFormatType_32BGRA on the decode side, after avcodec_decode_video2?
Please help me get started with the above, or point me to a working tutorial. Thanks in advance.
If you're trying to use FFmpeg, you'll need to use kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange rather than kCVPixelFormatType_32BGRA, and then you can shove it into an AVFrame. You'll probably also want to convert what you're getting from the iOS camera (NV12) to YUV420P so you can receive it on other devices that aren't iOS. If you are just using iOS devices and that's all you care about, you can skip this side of the color conversion and just pack it into the AVFrame.
Since you're already putting it into a YUV format, you can just use CVPixelBufferGetBaseAddressOfPlane(buf,0) and encode that address.
Once you decode the image, you'll need to change the colors to BGRA from YUV420P. If you didn't swap the colors properly in the first place before you encoded it, you'll just change NV12 to BGRA.
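As a rough sketch of the BGRA-to-YUV420P side with swscale (illustrative only; baseAddress and bytesPerRow come from the locked CVPixelBuffer, and yuvFrame is an AVFrame whose YUV420P buffers are already allocated):

const uint8_t *srcData[4]   = { baseAddress, NULL, NULL, NULL };   // one packed BGRA plane
int            srcStride[4] = { (int)bytesPerRow, 0, 0, 0 };

struct SwsContext *ctx = sws_getContext(width, height, PIX_FMT_BGRA,
                                        width, height, PIX_FMT_YUV420P,
                                        SWS_BILINEAR, NULL, NULL, NULL);
sws_scale(ctx, srcData, srcStride, 0, height,
          yuvFrame->data, yuvFrame->linesize);
// For the decode side, swap the formats: PIX_FMT_YUV420P in, PIX_FMT_BGRA out.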
Hope this helps a bit. You can find the proper color conversion algorithms online.

AVCaptureVideoOutput and compression

As of iOS 4.x, is AVCaptureVideoDataOutput configurable to return you compressed frames?
The documentation for AVCaptureVideoDataOutput says:
AVCaptureVideoDataOutput is a concrete sub-class of AVCaptureOutput you use, via its delegate, to process uncompressed frames from the video being captured, or to access compressed frames.
One of the properties is 'videoSettings', which according to the SDK holds the compression settings for the output, and it says the compression setting keys can be found in AVVideoSettings.h. But it also says that only CVPixelBufferPixelFormatTypeKey is supported.
Based on this, can I assume that all of the frames returned by AVCaptureVideoDataOutput to the sampleBufferDelegate method are uncompressed? Is there a way to get to the compressed frames?
The current version of the SDK only returns uncompressed frames from the camera. Maybe it'll change in the future, but it's like this right now. Therefore, the only way to get compressed images is to do it yourself. How to do it depends on your needs.
If you need to write them to disk, compressing individual frames will be slow and inefficient (a movie would be better). If you need to send compressed images over the air, it's doable with a JPEG or PNG library, for example.
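For the JPEG route, a minimal sketch with libjpeg (assuming you already have a tightly packed 24-bit RGB buffer; error handling is omitted):

#include <stdio.h>
#include <jpeglib.h>

// Compress a packed RGB buffer (width * height * 3 bytes) to a JPEG file.
static void write_jpeg(const char *path, const unsigned char *rgb,
                       int width, int height, int quality)
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE *out = fopen(path, "wb");

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, out);

    cinfo.image_width      = width;
    cinfo.image_height     = height;
    cinfo.input_components = 3;
    cinfo.in_color_space   = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, quality, TRUE);

    jpeg_start_compress(&cinfo, TRUE);
    while (cinfo.next_scanline < cinfo.image_height) {
        JSAMPROW row = (JSAMPROW)&rgb[cinfo.next_scanline * width * 3];
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
    fclose(out);
}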

How to play video in OpenGL ES

I am having a problem with MPMoviePlayer (I want to customize MPMoviePlayer).
Can anyone tell me how to play video using OpenGL ES on the iPhone?
I want to do buffer-level handling of the video streams.
Thanks in advance
The built-in frameworks do not provide support for that sort of customization; they expect you to use MPFramework as is.
If you want to decompress your video into an OpenGL texture in a supported way, you need to include your own decoder, decode the buffers, and blit them into a texture.
As Ben mentioned, this will bypass the built-in H.264 hardware, which will result in substantially higher power use and reduced battery life. It may also make maintaining your target frame rate difficult, depending on the size of your video and what else you are doing with the CPU.
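For the blit-into-a-texture step mentioned above, the upload itself is just a texture update once you have RGBA pixels for the decoded frame (a sketch using OpenGL ES 2.0; drawing the textured quad is up to you):

// One-time texture setup; clamp-to-edge is required for non-power-of-two video sizes in ES 2.0.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Per decoded frame: upload the new pixels, then draw a textured quad.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);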