AVAssetWriter change of quality on the fly - iPhone

Hey I am currently working with the AVAssetWriter and the AVAssetWriterInput.
In my project I would like to use the hardware acceleration of the iPhone.
Is it possible to use AVAssetWriter to create compressed frames and change the quality on the fly, that is, after I have initialized the AVAssetWriterInput instance?
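For reference, the compression settings are normally supplied through the input's outputSettings dictionary when it is created, roughly like this (a minimal sketch; the codec, dimensions and bitrate below are placeholder values, and as far as I know the outputSettings of an existing input cannot be changed afterwards, which is the crux of the question):

    // Hypothetical settings for illustration; the keys are standard AVFoundation constants.
    NSDictionary *compressionProps = @{ AVVideoAverageBitRateKey : @(2000000) };
    NSDictionary *outputSettings = @{ AVVideoCodecKey                 : AVVideoCodecH264,
                                      AVVideoWidthKey                 : @(1280),
                                      AVVideoHeightKey                : @(720),
                                      AVVideoCompressionPropertiesKey : compressionProps };
    AVAssetWriterInput *writerInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:outputSettings];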

Related

Real-time processing of video frames in AVPlayerLayer

I need to process the video frames from a remote video in real-time and present the processed frames on screen.
I have tried using AVAssetReader but because the AVURLAsset is accessing a remote URL, calling AVAssetReader:initWithAsset will result in a crash.
AVCaptureSession seems good, but it works with the camera and not a video file (much less a remote one).
As such, I am now exploring this: Display the remote video in an AVPlayerLayer, and then use GL ES to access what is displayed.
Questions:
How do I convert AVPlayerLayer (or a CALayer in general) to a CAEAGLLayer and read in the pixels using CVOpenGLESTextureCacheCreateTextureFromImage()?
Or is there some other better way?
Note: Performance is an important consideration, otherwise a simple screen capture technique would suffice.
As far as I know, Apple does not provide direct access to the h.264 decoder, and there is no way around that. One API you can use is the asset interface, where you give it a URL and the file on disk is read as CoreVideo pixel buffers.
What you could try is to download from your URL and write a new asset (a file in the tmp dir) one video frame at a time. Once the download has completed and the new h.264 file has been fully written, close the writing session and then open the file with an asset reader. You would not be able to stream with this approach; the entire file would need to be downloaded first. Otherwise, you could try the AVPlayerLayer approach to see whether it supports streaming directly.
Be aware that the texture cache logic is not easy to implement; you need an OpenGL view that is already configured properly, so you would be better off looking at an existing implementation that already does the rendering instead of trying to start from scratch.
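For the asset-reader part of that suggestion, the basic shape looks roughly like this (a sketch, assuming the download has already been written to a local file URL named localFileURL; error handling omitted):

    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:localFileURL options:nil];
    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

    // Ask for decoded BGRA pixel buffers from the video track.
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    NSDictionary *settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:settings];
    [reader addOutput:output];
    [reader startReading];

    CMSampleBufferRef sample = NULL;
    while ((sample = [output copyNextSampleBuffer]) != NULL) {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sample);
        // ... hand the CoreVideo pixel buffer to your texture cache / rendering code here ...
        CFRelease(sample);
    }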
This is now possible on modern iOS. If you're able to represent your real-time processing with Core Image—and you should be able to given Core Image's extensive support for custom filters nowadays—you can make use of AVAsynchronousCIImageFilteringRequest to pass into an AVPlayerItem per the documentation.
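A minimal sketch of that route (iOS 9+); "CISepiaTone" is just a stand-in for whatever real-time filter you actually need, and remoteURL is your video URL:

    AVAsset *asset = [AVAsset assetWithURL:remoteURL];
    AVVideoComposition *composition =
        [AVVideoComposition videoCompositionWithAsset:asset
                         applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest *request) {
            // Filter each frame as it is about to be displayed.
            CIImage *filtered = [request.sourceImage imageByApplyingFilter:@"CISepiaTone"
                                                       withInputParameters:@{ kCIInputIntensityKey : @0.8 }];
            [request finishWithImage:filtered context:nil];
        }];

    AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
    item.videoComposition = composition;
    AVPlayer *player = [AVPlayer playerWithPlayerItem:item];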
If you'd rather process things totally manually, you can check out AVPlayerItemVideoOutput and CVMetalTextureCache. With these, you can read sample buffers directly from a video and convert them into Metal textures from a texture buffer pool. From there, you can do whatever you want with the textures. Note with this approach, you are responsible for displaying the resultant textures (inside your own Metal or SceneKit rendering pipeline).
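Roughly, that manual route looks like this (a sketch only: playerItem and the MTLDevice device are assumed to exist, and the display-link plumbing and the Metal render pass themselves are omitted):

    AVPlayerItemVideoOutput *videoOutput = [[AVPlayerItemVideoOutput alloc]
        initWithPixelBufferAttributes:@{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }];
    [playerItem addOutput:videoOutput];

    CVMetalTextureCacheRef textureCache = NULL;
    CVMetalTextureCacheCreate(kCFAllocatorDefault, NULL, device, NULL, &textureCache);

    // Inside your CADisplayLink callback:
    CMTime itemTime = [videoOutput itemTimeForHostTime:CACurrentMediaTime()];
    if ([videoOutput hasNewPixelBufferForItemTime:itemTime]) {
        CVPixelBufferRef pixelBuffer = [videoOutput copyPixelBufferForItemTime:itemTime
                                                            itemTimeForDisplay:NULL];
        // Wrap the pixel buffer as a Metal texture via the texture cache.
        CVMetalTextureRef cvTexture = NULL;
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
                                                  MTLPixelFormatBGRA8Unorm,
                                                  CVPixelBufferGetWidth(pixelBuffer),
                                                  CVPixelBufferGetHeight(pixelBuffer),
                                                  0, &cvTexture);
        id<MTLTexture> texture = CVMetalTextureGetTexture(cvTexture);
        // ... process/draw with `texture` in your own render pipeline, then clean up ...
        CFRelease(cvTexture);
        CVPixelBufferRelease(pixelBuffer);
    }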
Here's a blog post demonstrating this technique.
Alternately, if you'd rather not manage your own render pipeline, you can still use AVPlayerItemVideoOutput to grab sample buffers, process them with something like vImage and Core Image (ideally using a basic Metal-backed CIContext for maximum performance!), and send them to AVSampleBufferDisplayLayer to display directly in a layer tree. That way you can process the frames to your liking and still let AVFoundation manage the display of the layer.
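The display side of that variant is small as well: once you have a processed CVPixelBufferRef you wrap it in a CMSampleBuffer and enqueue it. A sketch, where processedPixelBuffer, presentationTime and the AVSampleBufferDisplayLayer called displayLayer are assumed to come from your own pipeline:

    CMVideoFormatDescriptionRef formatDesc = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, processedPixelBuffer, &formatDesc);

    CMSampleTimingInfo timing = { .duration              = kCMTimeInvalid,
                                  .presentationTimeStamp = presentationTime,
                                  .decodeTimeStamp       = kCMTimeInvalid };
    CMSampleBufferRef sampleBuffer = NULL;
    CMSampleBufferCreateReadyWithImageBuffer(kCFAllocatorDefault, processedPixelBuffer,
                                             formatDesc, &timing, &sampleBuffer);

    // displayLayer is an AVSampleBufferDisplayLayer already installed in your layer tree.
    [displayLayer enqueueSampleBuffer:sampleBuffer];
    CFRelease(sampleBuffer);
    CFRelease(formatDesc);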

Recording video with option of manipulating the pixels before writing to file

I know that I can access raw video images from the iPhone's camera with AVCaptureVideoDataOutput. I also know that I can record video to a file with AVCaptureMovieFileOutput. But how can I first access the raw video images, manipulate them and then write the manipulated ones into the video file? I've already seen apps in the app store, which do this, so it must be possible.
OK, I now know that it's done with AVAssetWriter.
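In outline, the pipeline is an AVCaptureVideoDataOutput delegate that manipulates each pixel buffer and then appends it through an AVAssetWriterInputPixelBufferAdaptor. A sketch, assuming writerInput and pixelBufferAdaptor properties have been set up elsewhere with matching settings:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        // ... manipulate the raw pixel bytes here (or render a processed copy) ...
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

        if (self.writerInput.readyForMoreMediaData) {
            [self.pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:timestamp];
        }
    }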

How to use ffmpeg commands on the iPhone

I need to convert a video into PNG images. I did that using ffmpeg, but I need it to be quick; right now converting a video into images takes a lot of time. I searched a lot, but all I found as a solution was "ffmpeg -i video.mpg image%d.jpg". Please teach me how to use this kind of command.
1. Shoot and save the video with AVCaptureSession + AVCaptureMovieFileOutput.
2. Use AVAssetReader to extract the individual frames from the video as BGRA CVImageBufferRefs.
3. Save as PNG: CVImageBufferRef -> UIImage -> UIImagePNGRepresentation.
This should be faster than ffmpeg because step 2 is hardware accelerated, and it also has the benefit of letting you drop a cumbersome LGPL'd 3rd-party library.
Enjoy!
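If it helps, steps 2 and 3 roughly look like this once the asset reader hands you a sample buffer (a sketch only; ciContext is a reusable CIContext you create once, and outputPath is wherever you want the PNG written):

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];

    // Render the pixel buffer into a CGImage, wrap it as a UIImage, then encode as PNG.
    CGRect extent = CGRectMake(0, 0,
                               CVPixelBufferGetWidth(imageBuffer),
                               CVPixelBufferGetHeight(imageBuffer));
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:extent];
    UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
    NSData *pngData = UIImagePNGRepresentation(uiImage);
    [pngData writeToFile:outputPath atomically:YES];
    CGImageRelease(cgImage);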
With ffmpeg you can split a video frame by frame, and you can also mix audio with the video.

iPhone saving a slow video with OpenGL filter

I am using AVCaptureSession with the AVCaptureSessionPresetMedium preset to capture video, and I am applying an effect to this video with OpenGL shaders.
I use an AVAssetWriter to write the video to an mp4 file. The problem is that the resulting video is slow, especially when I add the audio output.
This is how my code works:
In the -(void)captureOutput:(AVCaptureOutput *)captureOutput... callback I apply the OpenGL filter to the captured frames.
Then I check whether the capture output is video or audio. If it's video, I use glReadPixels to create a CVPixelBufferRef that I send to an AVAssetWriterInputPixelBufferAdaptor to write it.
If it's audio, I write the CMSampleBufferRef directly.
Can someone tell me what's wrong with my approach, or which part is making the resulting video slow?
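For reference, the video branch described above is essentially this (a sketch only: writerInput, pixelBufferAdaptor, frameTime and the GL setup are assumed to exist elsewhere; note that glReadPixels is a synchronous read-back and is a common bottleneck in this kind of pipeline):

    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                       self.pixelBufferAdaptor.pixelBufferPool,
                                       &pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    // ... glReadPixels the rendered frame into CVPixelBufferGetBaseAddress(pixelBuffer),
    //     making sure the GL read format matches the buffer's pixel format ...
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    if (self.writerInput.readyForMoreMediaData) {
        [self.pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
    }
    CVPixelBufferRelease(pixelBuffer);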

possible to create a video file from RGB frames using AV Foundation

I have an iOS app that I want to record some of its visual output into a video. It looks like the way to create a video on iOS is to use AVMutableComposition and feed AVAssets to it via insertTimeRange.
All the documentation and examples that I can find only add video and audio assets to an AVMutableComposition. Is there a way to add image data to it (i.e. add an image for each frame of the video)? I can get this image data as straight RGB, PNG, JPG, UIImage, or whatever is easiest to feed to AV Foundation (if it's even possible).
If it's not possible to feed images into an AVMutableComposition for the video frames, is there another way to generate an .mp4 file from frames in iOS?
To generate movies from frames you can use AVAssetWriter; there is a question here on SO that sort of covers that.
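For completeness, a minimal sketch of that writer setup (illustrative only: outputURL, frameCount and the hypothetical pixelBufferFromRGBFrame() helper that turns your RGB/UIImage data into a CVPixelBufferRef are yours to supply; error handling omitted):

    NSError *error = nil;
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                     fileType:AVFileTypeMPEG4
                                                        error:&error];
    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @(640),
                                AVVideoHeightKey : @(480) };
    AVAssetWriterInput *input =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:settings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                                         sourcePixelBufferAttributes:nil];
    [writer addInput:input];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    for (NSUInteger i = 0; i < frameCount; i++) {
        CVPixelBufferRef buffer = pixelBufferFromRGBFrame(i);   // hypothetical helper for your RGB data
        while (!input.readyForMoreMediaData) { /* wait, or drive this with requestMediaDataWhenReadyOnQueue: */ }
        [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(i, 30)];  // 30 fps timeline
        CVPixelBufferRelease(buffer);
    }
    [input markAsFinished];
    [writer finishWritingWithCompletionHandler:^{ /* movie is ready at outputURL */ }];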