How to play video in OpenGL ES - iPhone

I am having a problem with MPMoviePlayer (I want to customize it).
Can anyone tell me how to play video using OpenGL ES on the iPhone?
I want to do buffer-level handling of the video streams.
Thanks in advance.

The built-in frameworks do not provide support for that sort of customization; they expect you to use the MediaPlayer framework as is.
If you want to decompress your video into an OpenGL texture in a supported way, you need to include your own decoder, decode the buffers, and blit them into a texture.
As Ben mentioned, this will bypass the built-in H.264 decoding hardware, which will result in substantially higher power use and reduced battery life. It may also make maintaining your target framerate difficult, depending on the size of your video and what else you are doing with the CPU.
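To make the "decode yourself, then blit" idea concrete, here is a bare-bones sketch assuming OpenGL ES and a decoder that hands you tightly packed RGBA frames; decodedPixels, frameWidth and frameHeight are placeholder names, not from any framework:

// One-time setup: create and configure the texture the frames will go into.
// (Clamp wrapping is required for non-power-of-two video sizes in ES 2.0.)
GLuint videoTexture;
glGenTextures(1, &videoTexture);
glBindTexture(GL_TEXTURE_2D, videoTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Per decoded frame: re-upload the pixels and draw a textured quad.
glBindTexture(GL_TEXTURE_2D, videoTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth, frameHeight,
                GL_RGBA, GL_UNSIGNED_BYTE, decodedPixels);

The per-frame glTexSubImage2D copy is exactly the CPU cost complained about later in this thread, so expect it to show up prominently in profiles.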

Related

What can be the substitute for SDL to direct ffmpeg-decoded videos to the screen in iOS?

I am making an iOS video player using ffmpeg; the flow looks like this:
Video file ---> [FFmpeg decoder] --> decoded frames --> [a media director] --> iPhone screen (full and partial)
The media director handles rendering decoded video frames to the iOS UI (UIView, UIWindow, etc.), outputting audio samples to the iOS speaker, and thread management.
SDL is one such library, but SDL is mainly made for game development and does not seem very mature on iOS.
What can be the substitute for SDL?
On Mac OS X I used CoreImage/CoreVideo for this, decoding frames into CVImageBuffers and rendering them into a CoreImage context. I'm not sure CoreImage contexts are supported on iOS, though. Maybe this thread will help: How to turn a CVPixelBuffer into a UIImage?
A better way on iOS might be to draw your frames with OpenGLES.
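For reference, the CVPixelBuffer-to-UIImage route mentioned above can be done with Core Image on iOS 5 and later; a rough sketch, where pixelBuffer is assumed to hold one decoded frame in a Core-Image-compatible format:

CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *ciContext = [CIContext contextWithOptions:nil];   // reuse this; creating it is expensive
CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:ciImage.extent];
UIImage *frameImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);

Going through UIImage on every frame is convenient but not cheap; for sustained playback the OpenGL ES path is the more realistic option, as noted above.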
SDL uses OpenGL and FFmpeg; you can come pretty close using ffmpeg and Apple's native APIs. We've done it with several video players.
This certainly will get you started.
https://github.com/mooncatventures-group

Real-time processing of video frames in AVPlayerLayer

I need to process the video frames from a remote video in real-time and present the processed frames on screen.
I have tried using AVAssetReader, but because the AVURLAsset is accessing a remote URL, calling AVAssetReader's initWithAsset: results in a crash.
AVCaptureSession seems good, but it works with the camera and not a video file (much less a remote one).
As such, I am now exploring this: Display the remote video in an AVPlayerLayer, and then use GL ES to access what is displayed.
Questions:
How do I convert AVPlayerLayer (or a CALayer in general) to a CAEAGLLayer and read in the pixels using CVOpenGLESTextureCacheCreateTextureFromImage()?
Or is there some other better way?
Note: Performance is an important consideration, otherwise a simple screen capture technique would suffice.
As far as I know, Apple does not provide direct access to the H.264 decoder, and there is no way around that. One API you can use is the asset interface, where you give it a URL and the file on disk is read as CoreVideo pixel buffers. What you could try is downloading from your URL and writing a new asset (a file in the tmp dir) one video frame at a time. Then, once the download has completed and the new H.264 file is fully written, close the writing session and open the file with an asset reader. You would not be able to do streaming with this approach; the entire file would need to be downloaded first. Otherwise, you could try the AVPlayerLayer approach to see if that supports streaming directly.
Be aware that the texture cache logic is not easy to implement; you need an OpenGL view that is already configured properly, so you would be better off looking at an existing implementation that already does the rendering instead of trying to start from scratch.
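Once the whole file is on disk, the asset-reader part of that approach looks roughly like this (localPath is a hypothetical path to the fully downloaded file in the tmp dir; error handling is omitted):

NSURL *fileURL = [NSURL fileURLWithPath:localPath];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
NSError *error = nil;
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];

AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
NSDictionary *settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
AVAssetReaderTrackOutput *output = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack
                                                                    outputSettings:settings];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sample = NULL;
while ((sample = [output copyNextSampleBuffer])) {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sample);
    // ... hand pixelBuffer to your texture cache / renderer here ...
    CFRelease(sample);
}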
This is now possible on modern iOS. If you're able to represent your real-time processing with Core Image—and you should be able to given Core Image's extensive support for custom filters nowadays—you can make use of AVAsynchronousCIImageFilteringRequest to pass into an AVPlayerItem per the documentation.
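A condensed sketch of that Core Image route (asset is your AVAsset; CIPhotoEffectNoir is just an example filter):

AVVideoComposition *composition =
    [AVVideoComposition videoCompositionWithAsset:asset
                     applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest *request) {
        CIFilter *filter = [CIFilter filterWithName:@"CIPhotoEffectNoir"];
        [filter setValue:request.sourceImage forKey:kCIInputImageKey];
        // Hand the filtered image back; passing nil uses a default CIContext.
        [request finishWithImage:filter.outputImage context:nil];
    }];

AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
item.videoComposition = composition;
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];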
If you'd rather process things totally manually, you can check out AVPlayerItemVideoOutput and CVMetalTextureCache. With these, you can read sample buffers directly from a video and convert them into Metal textures from a texture buffer pool. From there, you can do whatever you want with the textures. Note with this approach, you are responsible for displaying the resultant textures (inside your own Metal or SceneKit rendering pipeline).
Here's a blog post demonstrating this technique.
Alternately, if you'd rather not manage your own render pipeline, you can still use AVPlayerItemVideoOutput to grab sample buffers, process them with something like vImage and Core Image (ideally using a basic Metal-backed CIContext for maximum performance!), and send them to AVSampleBufferDisplayLayer to display directly in a layer tree. That way you can process the frames to your liking and still let AVFoundation manage the display of the layer.
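For the manual route, a condensed sketch (it assumes an AVPlayerItem named item already attached to a playing AVPlayer, a CADisplayLink driving the per-frame block, and the AVFoundation, CoreVideo and Metal frameworks linked):

// One-time setup: ask the item for BGRA pixel buffers and create a Metal texture cache.
AVPlayerItemVideoOutput *videoOutput = [[AVPlayerItemVideoOutput alloc]
    initWithPixelBufferAttributes:@{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }];
[item addOutput:videoOutput];

CVMetalTextureCacheRef textureCache = NULL;
CVMetalTextureCacheCreate(kCFAllocatorDefault, NULL, MTLCreateSystemDefaultDevice(), NULL, &textureCache);

// Per display-link tick: pull the current frame and wrap it in an MTLTexture.
CMTime itemTime = [videoOutput itemTimeForHostTime:CACurrentMediaTime()];
if ([videoOutput hasNewPixelBufferForItemTime:itemTime]) {
    CVPixelBufferRef pixelBuffer = [videoOutput copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
    CVMetalTextureRef cvTexture = NULL;
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
                                              MTLPixelFormatBGRA8Unorm,
                                              CVPixelBufferGetWidth(pixelBuffer),
                                              CVPixelBufferGetHeight(pixelBuffer),
                                              0, &cvTexture);
    id<MTLTexture> texture = CVMetalTextureGetTexture(cvTexture);
    // ... process and draw `texture` in your own Metal / SceneKit pipeline ...
    CFRelease(cvTexture);
    CVPixelBufferRelease(pixelBuffer);
}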

FFmpeg decoding H264

I am decoding an H.264 stream using FFmpeg on the iPhone. I know the H.264 stream is valid and the SPS/PPS are correct, since VLC, QuickTime, and Flash all decode the stream properly. The issue I am having on the iPhone is best shown by this picture.
It is as if the motion vectors are being drawn. This picture was snapped while there was a lot of motion in the image. If the scene is static then there are dots in the corners. This always occurs with predictive frames. The blocky colors are also an issue.
I have tried various build settings for FFmpeg, such as turning off optimizations, asm, NEON, and many other combinations. Nothing seems to alter the behavior of the decoder. I have also tried the "Works with HTML5", "Love", and "Peace" releases, as well as the latest Git sources. Is there maybe a setting I am missing, or have I inadvertently enabled some debug setting in the decoder?
Edit
I am using sws_scale to convert the image to RGBA. I have tried various different pixel formats with the same results.
sws_scale(convertCtx, (const uint8_t**)srcFrame->data, srcFrame->linesize, 0, codecCtx->height, dstFrame->data, dstFrame->linesize);
I am using PIX_FMT_YUV420P as the source format when setting up my codec context.
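For completeness, the conversion context used above is typically created along these lines (the RGBA destination format and the SWS_BILINEAR flag are assumptions based on the description, not the asker's actual values):

struct SwsContext *convertCtx = sws_getContext(codecCtx->width, codecCtx->height, PIX_FMT_YUV420P,
                                               codecCtx->width, codecCtx->height, PIX_FMT_RGBA,
                                               SWS_BILINEAR, NULL, NULL, NULL);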
What you're looking at is ffmpeg's motion vector visualization. Make sure that none of the following debug flags are set:
avctx->debug & FF_DEBUG_VIS_QP
avctx->debug & FF_DEBUG_VIS_MB_TYPE
avctx->debug_mv
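A short sketch of clearing those before you start decoding (avctx is your AVCodecContext; debug_mv was a separate field in the FFmpeg versions of that era):

avctx->debug = 0;      // clears FF_DEBUG_VIS_QP, FF_DEBUG_VIS_MB_TYPE and friends
avctx->debug_mv = 0;   // turns off motion-vector visualization entirely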
Also, keep in mind that decoding H264 video using the CPU will be MUCH slower and less power-efficient on iOS than using the hardware decoder.

How to play video without AVPlayer or Movie Player on iOS

I need to play a custom-format video on iOS, with all the rendering done by myself.
My current choice is OpenGL ES, but profiling shows it takes too much CPU (mostly in glTexImage2D).
Are there any faster alternatives for my needs?
Thanks!
AVPlayer is going to be fast because the code has been optimized to decompress using GPU acceleration and llvm optimizations. If you want to use OpenGL ES, you will probably end up using EAGLContext, creating texture shaders and doing other low-level optimizations. Our app can composite multiple layers on top of video at a high frame rate with very low CPU load.
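If the glTexImage2D upload itself is what is eating the CPU, one option worth noting (the same CVOpenGLESTextureCache API mentioned earlier in this thread) is to map CVPixelBuffers straight onto textures instead of copying them. A rough sketch, assuming your decode path can produce IOSurface-backed BGRA CVPixelBuffers and eaglContext is your current EAGLContext:

// One-time setup.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// Per frame: wrap the pixel buffer in a texture without a glTexImage2D copy.
CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
                                             GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                                             (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
// ... draw your textured quad, then release `texture` and call
// CVOpenGLESTextureCacheFlush(textureCache, 0) once per frame ...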
I guess you can use OpenMAX AL (application layer) integration with EGL, which will stream the video into OpenGL ES 2.0 via EGL so it can be stored or displayed on an OpenGL ES surface. I can help you if I can see a snippet. :)

How to apply "filters" to AVCaptureVideoPreviewLayer

My app is currently using AVFoundation to take the raw camera data from the rear camera of an iPhone and display it on an AVCaptureVideoPreviewLayer in real time.
My goal is to conditionally apply simple image filters to the preview layer. The images aren't saved, so I do not need to capture the output. For example, I would like to toggle a setting that converts the video coming in on the preview layer to black & white.
I found a question here that seems to accomplish something similar by capturing the individual video frames in a buffer, applying the desired transformations, then displaying each frame as a UIImage. For several reasons, this seems like overkill for my project and I'd like to avoid any performance issues it may cause.
Is this the only way to accomplish my goal?
As I mentioned, I am not looking to capture any of the AVCaptureSession's video, merely preview it.
Probably the most performant way of handling this would be to use OpenGL ES for filtering and display of these video frames. You won't be able to do much with an AVCaptureVideoPreviewLayer directly, aside from adjusting its opacity when overlaid with another view or layer.
I have a sample application here where I grab frames from the camera and apply OpenGL ES 2.0 shaders to process the video in realtime for display. In this application (explained in detail here), I was using color-based filtering to track objects in the camera view, but others have modified this code to do some neat video processing effects. All GPU-based filters in this application that display to the screen run at 60 FPS on my iPhone 4.
The only iOS device out there that supports video, yet doesn't have an OpenGL ES 2.0 capable GPU, is the iPhone 3G. If you need to target that device as well, you might be able to take the base code for video capture and generation of OpenGL ES textures, and then use the filter code from Apple's GLImageProcessing sample application. That application is built around OpenGL ES 1.1, support for which is present on all iOS devices.
However, I highly encourage looking at the use of OpenGL ES 2.0 for this, because you can pull off many more kinds of effect using shaders than you can with the fixed function OpenGL ES 1.1 pipeline.
(Edit: 2/13/2012) As an update on the above, I've now created an open source framework called GPUImage that encapsulates this kind of custom image filtering. It also handles capturing video and displaying it to the screen after being filtered, requiring as few as six lines of code to set all of this up. For more on the framework, you can read my more detailed announcement.
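For the black & white toggle in the question, the GPUImage setup is only a few lines; a minimal sketch (the class names are GPUImage's, and the code is assumed to live in a view controller):

GPUImageVideoCamera *camera = [[GPUImageVideoCamera alloc]
    initWithSessionPreset:AVCaptureSessionPreset640x480
           cameraPosition:AVCaptureDevicePositionBack];
camera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageGrayscaleFilter *grayscale = [[GPUImageGrayscaleFilter alloc] init];
GPUImageView *previewView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:previewView];

[camera addTarget:grayscale];          // camera -> grayscale filter
[grayscale addTarget:previewView];     // filter -> on-screen preview
[camera startCameraCapture];

Toggling the effect is then just a matter of re-routing the targets, for example pointing the camera directly at the view when the filter is switched off.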
I would recommend looking at the RosyWriter example from the iOS Developer Library. Brad Larson's GPUImage library is pretty awesome, but it seems a little overkill for this question.
If you are just interested in adding OpenGL shaders (a.k.a. filters) to an AVCaptureVideoPreviewLayer, the workflow is to send the output of the capture session to an OpenGL view for rendering.
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
videoOut.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(_renderer.inputPixelFormat) };
[videoOut setSampleBufferDelegate:self queue:_videoDataOutputQueue];
Then, in the captureOutput: delegate method, send the sample buffer to the OpenGL renderer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Pull the pixel buffer out of the sample buffer and hand it to the renderer.
    CVPixelBufferRef sourcePixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [_renderer copyRenderedPixelBuffer:sourcePixelBuffer];
}
In the OpenGL renderer, attach the sourcePixelBuffer to a texture, and you can then filter it in your OpenGL shaders. A shader is a program that runs on a per-pixel basis. The RosyWriter example also shows filtering techniques other than OpenGL.
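For instance, the black & white conversion asked about above fits in a tiny per-pixel fragment shader; here it is as an Objective-C string constant you would compile with glCreateShader / glShaderSource (the textureCoordinate varying and videoFrame uniform are placeholder names that must match your own vertex shader and setup code):

// Converts each sampled pixel to its luminance, giving a black & white preview.
static NSString *const kGrayscaleFragmentShader =
    @"precision mediump float;\n"
    @"varying highp vec2 textureCoordinate;\n"
    @"uniform sampler2D videoFrame;\n"
    @"void main() {\n"
    @"    lowp vec4 color = texture2D(videoFrame, textureCoordinate);\n"
    @"    float luma = dot(color.rgb, vec3(0.299, 0.587, 0.114));\n"
    @"    gl_FragColor = vec4(vec3(luma), color.a);\n"
    @"}";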
Apple's AVCamFilter sample does it all.