As of iOS 4.x, is AVCaptureVideoDataOutput configurable to return you compressed frames?
The documentation for AVCaptureVideoDataOutput says:
AVCaptureVideoDataOutput is a concrete sub-class of AVCaptureOutput you use, via its delegate, to process uncompressed frames from the video being captured, or to access compressed frames.
One of the properties is 'videoSettings', which according to the SDK holds the compression settings for the output; it says the compression setting keys can be found in AVVideoSettings.h, but also that only CVPixelBufferPixelFormatTypeKey is supported.
Based on this, can I assume that all of the frames returned by AVCaptureVideoDataOutput to the sampleBufferDelegate method are uncompressed? Is there a way to get to the compressed frames?
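For reference, here is roughly the setup I mean (a minimal sketch in today's Swift; the class name, queue label, and pixel format are just placeholder choices):

    import AVFoundation

    final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let output = AVCaptureVideoDataOutput()

        func attach(to session: AVCaptureSession) {
            // The only supported videoSettings key is the pixel format.
            output.videoSettings = [
                kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
            ]
            output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video.frames"))
            if session.canAddOutput(output) { session.addOutput(output) }
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            // Frames arrive here as uncompressed CVPixelBuffers.
        }
    }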
The current version of the SDK only returns uncompressed frames from the camera. Maybe that will change in the future, but this is how it is right now. Therefore, the only way to get compressed images is to compress them yourself; how to do that depends on your needs.
If you need to write them to disk, doing it frame by frame will be slow and inefficient (recording a movie would be better). If you need to send compressed images over the air, it's doable with a JPEG or PNG library, for example.
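For instance, if JPEG is acceptable, a minimal sketch of compressing one delegate frame yourself (modern Swift, using Core Image and UIKit; the quality value is arbitrary):

    import AVFoundation
    import CoreImage
    import UIKit

    // Reuse one CIContext; creating a new one per frame is expensive.
    let ciContext = CIContext()

    // Compress a single captured frame (CMSampleBuffer) to JPEG data.
    func jpegData(from sampleBuffer: CMSampleBuffer, quality: CGFloat = 0.7) -> Data? {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return nil }
        return UIImage(cgImage: cgImage).jpegData(compressionQuality: quality)
    }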
Related
I am working on a project in which I am receiving raw frames from some input video devices. I am trying to write those frames to a video file using the FFmpeg library.
I have no control over the frame rate I am getting from my input sources. This frame rate also varies at run time.
Now my problem is how to sync the recorded video with the incoming video. Depending on the frame rate I set in FFmpeg and the actual frame rate I am receiving, playback of the recorded video is either faster or slower than the input video.
I tried to add timestamps (as numOfFrames) to the encoded video as per the following link, but that didn't help:
ffmpeg speed encoding problem
Please tell me a way to synchronize the two. This is my first time with FFmpeg or any multimedia library, so any examples would be highly appreciated.
I am using the DirectShow ISampleGrabber interface to capture those frames.
Thank You
So finally I figured out how to do this. Here is how:
First, I was taking frames from the PREVIEW pin of the source filter, which does not attach timestamps to them, so one should take frames from the CAPTURE pin of the source filter instead. Then, in the SampleCB callback function, we can get the sample time using IMediaSample::GetTime(). This function returns the time in units of 100 ns, whereas FFmpeg requires it in units of 1/time_base, where time_base is the desired frame rate.
So the DirectShow timestamp needs to be converted into FFmpeg units first; then we can set the result in FFmpeg's AVFrame::pts field. One more thing to keep in mind: the first frame of the video should have a timestamp of 0 in FFmpeg, so that needs to be taken care of while converting from the DirectShow timestamp to the FFmpeg one.
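To make the conversion concrete, here is a small sketch of just the math (written in Swift purely for illustration; the names are mine and this is not FFmpeg API code):

    // DirectShow sample times are in 100 ns units; FFmpeg pts is in 1/time_base units,
    // where time_base here is 1/fps (fps = the desired frame rate).
    let hundredNsPerSecond: Int64 = 10_000_000

    func ffmpegPTS(directshowTime: Int64, firstFrameTime: Int64, fps: Int64) -> Int64 {
        // Rebase so the first frame gets pts 0, then rescale 100 ns -> 1/fps units.
        let rebased = directshowTime - firstFrameTime
        return (rebased * fps) / hundredNsPerSecond
    }

    // Example: a frame arriving 400_000 * 100 ns = 40 ms after the first frame of a
    // 25 fps stream gets pts 1:
    // ffmpegPTS(directshowTime: 400_000, firstFrameTime: 0, fps: 25) == 1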
Thank You
I need to process the video frames from a remote video in real time and present the processed frames on screen.
I have tried using AVAssetReader, but because the AVURLAsset points to a remote URL, calling AVAssetReader initWithAsset: results in a crash.
AVCaptureSession seems good, but it works with the camera, not a video file (much less a remote one).
As such, I am now exploring this: Display the remote video in an AVPlayerLayer, and then use GL ES to access what is displayed.
Questions:
How do I convert AVPlayerLayer (or a CALayer in general) to a CAEAGLLayer and read in the pixels using CVOpenGLESTextureCacheCreateTextureFromImage()?
Or is there some other better way?
Note: Performance is an important consideration, otherwise a simple screen capture technique would suffice.
As far as I know, Apple does not provide direct access to the H.264 decoder, and there is no way around that. One API you can use is the asset interface: you give it a URL, and the file on disk at that URL is read back as CoreVideo pixel buffers. What you could try is downloading from your URL and writing a new asset (a file in the tmp dir) one video frame at a time. Then, once the download has completed and the new H.264 file has been fully written, close the writing session and open the file with an asset reader. You would not be able to do streaming with this approach; the entire file would need to be downloaded first. Otherwise, you could try the AVPlayerLayer approach to see if that supports streaming directly. Be aware that the texture cache logic is not easy to implement: you need an OpenGL view that is already configured properly, and you would be better off looking at an existing implementation that already does the rendering instead of trying to start from scratch.
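For the reader side of that approach, once the complete file is sitting on disk, a rough sketch (Swift; the pixel format and names are placeholder choices):

    import AVFoundation

    // Read decoded frames (CVPixelBuffers) out of a fully downloaded local file.
    func readFrames(from localURL: URL, handle: (CVPixelBuffer) -> Void) throws {
        let asset = AVAsset(url: localURL)
        guard let track = asset.tracks(withMediaType: .video).first else { return }

        let reader = try AVAssetReader(asset: asset)
        let output = AVAssetReaderTrackOutput(
            track: track,
            outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        )
        reader.add(output)
        reader.startReading()

        while let sample = output.copyNextSampleBuffer() {
            if let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
                handle(pixelBuffer)  // process / upload to a texture here
            }
        }
    }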
This is now possible on modern iOS. If you're able to represent your real-time processing with Core Image (and you should be able to, given Core Image's extensive support for custom filters nowadays), you can make use of AVAsynchronousCIImageFilteringRequest, applied to an AVPlayerItem through a video composition, per the documentation.
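For example, a minimal sketch (the Gaussian blur is just a stand-in for your own Core Image processing, and the URL is a placeholder):

    import AVFoundation
    import CoreImage

    let remoteURL = URL(string: "https://example.com/video.mp4")!  // placeholder
    let asset = AVAsset(url: remoteURL)

    // Core Image processing applied per frame via AVAsynchronousCIImageFilteringRequest.
    let composition = AVVideoComposition(asset: asset) { request in
        let processed = request.sourceImage
            .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 6.0])
            .cropped(to: request.sourceImage.extent)
        request.finish(with: processed, context: nil)
    }

    let item = AVPlayerItem(asset: asset)
    item.videoComposition = composition
    let player = AVPlayer(playerItem: item)
    // Attach `player` to an AVPlayerLayer to display the filtered video.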
If you'd rather process things totally manually, you can check out AVPlayerItemVideoOutput and CVMetalTextureCache. With these, you can read sample buffers directly from a video and convert them into Metal textures from a texture buffer pool. From there, you can do whatever you want with the textures. Note that with this approach you are responsible for displaying the resulting textures (inside your own Metal or SceneKit rendering pipeline).
Here's a blog post demonstrating this technique.
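In rough outline, the output-plus-texture-cache side might look something like this (a sketch, not a drop-in implementation; driving it from a display link and rendering the texture are up to you):

    import AVFoundation
    import Metal
    import CoreVideo

    final class PlayerTextureReader {
        let output = AVPlayerItemVideoOutput(pixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferMetalCompatibilityKey as String: true
        ])
        private var textureCache: CVMetalTextureCache?

        init(device: MTLDevice, item: AVPlayerItem) {
            CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
            item.add(output)
        }

        // Call once per display refresh (e.g. from a CADisplayLink or MTKView draw).
        func currentTexture(atHostTime hostTime: CFTimeInterval) -> MTLTexture? {
            let itemTime = output.itemTime(forHostTime: hostTime)
            guard output.hasNewPixelBuffer(forItemTime: itemTime),
                  let pixelBuffer = output.copyPixelBuffer(forItemTime: itemTime,
                                                           itemTimeForDisplay: nil),
                  let cache = textureCache else { return nil }

            var cvTexture: CVMetalTexture?
            CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer, nil,
                                                      .bgra8Unorm,
                                                      CVPixelBufferGetWidth(pixelBuffer),
                                                      CVPixelBufferGetHeight(pixelBuffer),
                                                      0, &cvTexture)
            return cvTexture.flatMap(CVMetalTextureGetTexture)
        }
    }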
Alternately, if you'd rather not manage your own render pipeline, you can still use AVPlayerItemVideoOutput to grab sample buffers, process them with something like vImage and Core Image (ideally using a basic Metal-backed CIContext for maximum performance!), and send them to AVSampleBufferDisplayLayer to display directly in a layer tree. That way you can process the frames to your liking and still let AVFoundation manage the display of the layer.
I know that I can access raw video images from the iPhone's camera with AVCaptureVideoDataOutput. I also know that I can record video to a file with AVCaptureMovieFileOutput. But how can I first access the raw video images, manipulate them, and then write the manipulated ones into the video file? I've already seen apps in the App Store that do this, so it must be possible.
OK, I now know that it's done with AVAssetWriter.
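Roughly, the writing side can look like this (a minimal sketch, assuming you feed it pixel buffers and presentation timestamps from your AVCaptureVideoDataOutput delegate after manipulating them; the H.264/QuickTime settings are just example choices):

    import AVFoundation

    final class FrameWriter {
        private let writer: AVAssetWriter
        private let input: AVAssetWriterInput
        private let adaptor: AVAssetWriterInputPixelBufferAdaptor
        private var sessionStarted = false

        init(outputURL: URL, width: Int, height: Int) throws {
            writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
            input = AVAssetWriterInput(mediaType: .video, outputSettings: [
                AVVideoCodecKey: AVVideoCodecType.h264,
                AVVideoWidthKey: width,
                AVVideoHeightKey: height
            ])
            input.expectsMediaDataInRealTime = true
            adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                           sourcePixelBufferAttributes: nil)
            writer.add(input)
            writer.startWriting()
        }

        // Call from captureOutput(_:didOutput:from:) after manipulating the pixel buffer;
        // `time` is the sample buffer's presentation timestamp.
        func append(_ pixelBuffer: CVPixelBuffer, at time: CMTime) {
            if !sessionStarted {
                writer.startSession(atSourceTime: time)
                sessionStarted = true
            }
            if input.isReadyForMoreMediaData {
                adaptor.append(pixelBuffer, withPresentationTime: time)
            }
        }

        func finish(completion: @escaping () -> Void) {
            input.markAsFinished()
            writer.finishWriting(completionHandler: completion)
        }
    }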
I am curious about the new APIs for iPhone iOS: AVCapture...
Does this include a documented way to grab a screenshot of the camera preview? The doc seems a bit confusing to me, and since it is out of NDA now, I thought I would post my question here.
Many thanks,
Brett
With AVFoundation you can grab photos from the camera session. The way it works is you use one of the subclasses of AVCaptureOutput to get what you need. For still images you will want the AVCaptureStillImageOutput subclass; here is a link: AVCaptureStillImageOutput ref. Besides that you also have AVCaptureMovieFileOutput, which is used to record a QuickTime movie from the capture session to a file, and AVCaptureVideoDataOutput, which allows you to intercept uncompressed individual frames from the capture session. There are audio outputs you can use as well... hope this helps.
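For the still-image case, a minimal sketch of the capture call (Swift; note that AVCaptureStillImageOutput is deprecated on current iOS in favor of AVCapturePhotoOutput, but it is the class discussed here):

    import AVFoundation

    final class StillGrabber {
        let stillOutput = AVCaptureStillImageOutput()

        func attach(to session: AVCaptureSession) {
            stillOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
            if session.canAddOutput(stillOutput) { session.addOutput(stillOutput) }
        }

        func captureStill(completion: @escaping (Data?) -> Void) {
            guard let connection = stillOutput.connection(with: .video) else { return completion(nil) }
            stillOutput.captureStillImageAsynchronously(from: connection) { sampleBuffer, _ in
                guard let sampleBuffer = sampleBuffer else { return completion(nil) }
                // The returned sample buffer already contains JPEG data.
                completion(AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer))
            }
        }
    }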
I am having a problem with MPMoviePlayer (I want to customize the MPMoviePlayer).
Can anyone tell me how to play video using OpenGL ES on the iPhone?
I want to do buffer-level handling of the video streams...
Thanks in advance
The built-in frameworks do not provide support for that sort of customization; they expect you to use the MP framework (MPMoviePlayer) as is.
If you want to decompress your video into an OpenGL texture in a supported way, you need to include your own decoder, decode the buffers, and blit them into a texture.
As Ben mentioned, this will bypass the built-in H.264 hardware, which will result in substantially higher power use and reduced battery life. It may also make maintaining your target frame rate difficult, depending on the size of your video and what else you are doing with the CPU.
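For what it's worth, the "blit them into a texture" step is the simpler part once your own decoder hands you an RGBA buffer; a minimal OpenGL ES sketch (Swift; the decoder itself, which is the hard part, is not shown):

    import OpenGLES

    // Upload one decoded RGBA frame (from your own software decoder) into a GL ES texture.
    func makeTexture(fromRGBA pixels: UnsafeRawPointer, width: GLsizei, height: GLsizei) -> GLuint {
        var texture: GLuint = 0
        glGenTextures(1, &texture)
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
        // Draw the returned texture on a fullscreen quad in your EAGL-backed view each frame.
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                     GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)
        return texture
    }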