Is there any solution to create video stream from image/images? - encoding

I would like to generate video content from text/images at a given content size. I have tried many options, from FFmpeg to H.264 encoders. I want to build a solution that takes one image, or an array of images, and generates a video stream, for example RTMP. The image is also changing all the time: text is added, colors change, and so on. I have tried this with Golang.

Related

Flutter Multiple Image to Video Slideshow with Audio converter Using FFmpegKit

I want to render a video file from a set of images, with animation.
I have tried some of the solutions out there and none of them worked for me; I also don't know how to write the FFmpegKit command. I tried to first combine a single image with an MP3, but that did not work either.

Real-time processing of video frames in AVPlayerLayer

I need to process the video frames from a remote video in real-time and present the processed frames on screen.
I have tried using AVAssetReader but because the AVURLAsset is accessing a remote URL, calling AVAssetReader:initWithAsset will result in a crash.
AVCaptureSession seems good, but it works with the camera and not a video file (much less a remote one).
As such, I am now exploring this: Display the remote video in an AVPlayerLayer, and then use GL ES to access what is displayed.
Questions:
How do I convert AVPlayerLayer (or a CALayer in general) to a CAEAGLLayer and read in the pixels using CVOpenGLESTextureCacheCreateTextureFromImage()?
Or is there some other better way?
Note: Performance is an important consideration, otherwise a simple screen capture technique would suffice.
As far as I know, Apple does not provide direct access to the h.264 decoder, and there is no way around that. One API you can use is the asset interface, where you give it a URL and that file on disk is read as CoreVideo pixel buffers. What you could try is downloading from your URL and writing a new asset (a file in the tmp dir) one video frame at a time. Then, once the download has completed and the new h.264 file is fully written, close the writing session and open the file with an asset reader. You would not be able to do streaming with this approach; the entire file would need to be downloaded first. Otherwise, you could try the AVPlayerLayer approach to see whether it supports streaming directly. Be aware that the texture cache logic is not easy to implement: you need an OpenGL view that is already configured properly, so you would be better off looking at an existing implementation that already does the rendering instead of trying to start from scratch.
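For the reading half of that approach, here is a minimal sketch, assuming the download has already finished and the file has been fully written to a local URL; it opens the file with AVAssetReader and walks its decoded frames as CVPixelBuffers:

import AVFoundation

// Sketch: read decoded frames from a fully downloaded local file.
// `localURL` is assumed to point at the finished .mp4/.mov in the tmp dir.
func readFrames(from localURL: URL) throws {
    let asset = AVURLAsset(url: localURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    )
    reader.add(output)
    guard reader.startReading() else { return }

    // Pull sample buffers until the reader runs out of frames.
    while let sampleBuffer = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            // Process the CVPixelBuffer here (upload to a texture, run a filter, etc.).
            _ = pixelBuffer
        }
    }
}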
This is now possible on modern iOS. If you're able to represent your real-time processing with Core Image (and you should be able to, given Core Image's extensive support for custom filters nowadays), you can use AVAsynchronousCIImageFilteringRequest: you build an AVVideoComposition whose handler receives these requests per frame and attach it to an AVPlayerItem, per the documentation.
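A minimal sketch of that route, assuming a playable asset URL; the sepia filter is only a stand-in for your own processing:

import AVFoundation
import CoreImage

// Sketch: filter every frame of a player item with Core Image.
func makeFilteredItem(url: URL) -> AVPlayerItem {
    let asset = AVURLAsset(url: url)
    let item = AVPlayerItem(asset: asset)

    // The handler is called once per frame with an AVAsynchronousCIImageFilteringRequest.
    item.videoComposition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        let filter = CIFilter(name: "CISepiaTone")!
        filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
        filter.setValue(0.8, forKey: kCIInputIntensityKey)

        if let output = filter.outputImage {
            request.finish(with: output, context: nil)
        } else {
            request.finish(with: NSError(domain: "FilterError", code: -1, userInfo: nil))
        }
    })
    return item
}

The returned item can be handed to an AVPlayer backing an AVPlayerLayer as usual; the filtered frames are what end up on screen.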
If you'd rather process things totally manually, you can check out AVPlayerItemVideoOutput and CVMetalTextureCache. With these, you can read sample buffers directly from a video and convert them into Metal textures from a texture buffer pool. From there, you can do whatever you want with the textures. Note with this approach, you are responsible for displaying the resultant textures (inside your own Metal or SceneKit rendering pipeline).
Here's a blog post demonstrating this technique.
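Here is a minimal sketch of that manual path, assuming you already have an AVPlayerItem and an MTLDevice; the class name and the call site are placeholders, and copyFrame would typically be driven by a CADisplayLink tied to your render loop:

import AVFoundation
import Metal
import CoreVideo

// Sketch: pull decoded frames out of a player item with AVPlayerItemVideoOutput
// and wrap them in Metal textures via a CVMetalTextureCache.
final class FrameTapper {
    let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    private var textureCache: CVMetalTextureCache?

    init(device: MTLDevice, item: AVPlayerItem) {
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
        item.add(videoOutput)
    }

    // Call this from your display-link callback.
    func copyFrame(at hostTime: CFTimeInterval) -> MTLTexture? {
        let itemTime = videoOutput.itemTime(forHostTime: hostTime)
        guard videoOutput.hasNewPixelBuffer(forItemTime: itemTime),
              let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime,
                                                            itemTimeForDisplay: nil),
              let cache = textureCache else { return nil }

        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil, .bgra8Unorm,
            CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
            0, &cvTexture)
        return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
    }
}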
Alternately, if you'd rather not manage your own render pipeline, you can still use AVPlayerItemVideoOutput to grab sample buffers, process them with something like vImage and Core Image (ideally using a basic Metal-backed CIContext for maximum performance!), and send them to AVSampleBufferDisplayLayer to display directly in a layer tree. That way you can process the frames to your liking and still let AVFoundation manage the display of the layer.
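And a minimal sketch of the display-layer variant, assuming the same AVPlayerItemVideoOutput setup as above; the Gaussian blur is just an example processing step:

import AVFoundation
import CoreImage

// Ideally CIContext(mtlDevice:) for a Metal-backed context.
let ciContext = CIContext()

// Sketch: process each frame with Core Image, wrap it in a CMSampleBuffer,
// and hand it to an AVSampleBufferDisplayLayer living in your layer tree.
func displayProcessedFrame(from videoOutput: AVPlayerItemVideoOutput,
                           on displayLayer: AVSampleBufferDisplayLayer,
                           at itemTime: CMTime) {
    guard videoOutput.hasNewPixelBuffer(forItemTime: itemTime),
          let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime,
                                                        itemTimeForDisplay: nil) else { return }

    // Example processing: blur the frame and render it back into the buffer.
    // For production use, render into a separate buffer from a CVPixelBufferPool
    // rather than back into the source buffer.
    let processed = CIImage(cvPixelBuffer: pixelBuffer)
        .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 4.0])
    ciContext.render(processed, to: pixelBuffer)

    // Wrap the pixel buffer in a sample buffer so the layer can enqueue it.
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescriptionOut: &formatDescription)
    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: itemTime,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                             imageBuffer: pixelBuffer,
                                             formatDescription: formatDescription!,
                                             sampleTiming: &timing,
                                             sampleBufferOut: &sampleBuffer)
    // Note: the layer's control timebase governs when enqueued buffers appear.
    if let sampleBuffer = sampleBuffer {
        displayLayer.enqueue(sampleBuffer)
    }
}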

Recording video with option of manipulating the pixels before writing to file

I know that I can access raw video frames from the iPhone's camera with AVCaptureVideoDataOutput. I also know that I can record video to a file with AVCaptureMovieFileOutput. But how can I first access the raw video frames, manipulate them, and then write the manipulated frames to the video file? I've already seen apps in the App Store that do this, so it must be possible.
OK, I now know that it's done with AVAssetWriter.
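A minimal sketch of that pipeline, assuming the AVAssetWriter, its video input, and the pixel buffer adaptor are configured elsewhere and the writing session has already been started; the class and property names are placeholders:

import AVFoundation

// Sketch: receive raw frames from AVCaptureVideoDataOutput, manipulate the
// pixels, then append them to an AVAssetWriter via a pixel buffer adaptor.
final class ProcessedRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    var writerInput: AVAssetWriterInput!                 // configured elsewhere
    var adaptor: AVAssetWriterInputPixelBufferAdaptor!   // configured elsewhere

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
              writerInput.isReadyForMoreMediaData else { return }

        // Manipulate the raw pixels here: lock the base address and rewrite
        // bytes, or render a CIFilter result back into the buffer.

        let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        _ = adaptor.append(pixelBuffer, withPresentationTime: time)
    }
}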

Possible to create a video file from RGB frames using AV Foundation?

I have an iOS app that I want to record some of its visual output into a video. It looks like the way to create a video on iOS is to use AVMutableComposition and feed AVAssets to it via insertTimeRange.
All the documentation and examples that I can find only add video and audio assets to an AVMutableComposition. Is there a way to add image data to it (i.e. add an image for each frame of the video)? I can get this image data as straight RGB, PNG, JPG, UIImage, or whatever is easiest to feed to AV Foundation (if it's even possible).
If it's not possible to feed images into an AVMutableComposition for the video frames, is there another way to generate an .mp4 file from frames on iOS?
To generate movies from frames you can use AVAssetWriter; there is a question here on SO that sort of covers that.
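A minimal sketch of the AVAssetWriter route, assuming a fixed frame rate and an array of UIImages already matching the output size; error handling is kept deliberately thin:

import AVFoundation
import UIKit

// Sketch: turn an array of UIImages into an .mp4 at a fixed frame rate.
func writeMovie(from images: [UIImage], size: CGSize, fps: Int32, to url: URL) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(size.width),
        AVVideoHeightKey: Int(size.height)
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB,
            kCVPixelBufferWidthKey as String: Int(size.width),
            kCVPixelBufferHeightKey as String: Int(size.height)
        ])
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for (index, image) in images.enumerated() {
        // Wait until the input is ready to accept another frame.
        while !input.isReadyForMoreMediaData { Thread.sleep(forTimeInterval: 0.01) }

        guard let pool = adaptor.pixelBufferPool else { continue }
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
        guard let buffer = pixelBuffer else { continue }

        // Draw the UIImage into the pixel buffer with Core Graphics.
        CVPixelBufferLockBaseAddress(buffer, [])
        if let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                   width: Int(size.width), height: Int(size.height),
                                   bitsPerComponent: 8,
                                   bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                   space: CGColorSpaceCreateDeviceRGB(),
                                   bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue),
           let cgImage = image.cgImage {
            context.draw(cgImage, in: CGRect(origin: .zero, size: size))
        }
        CVPixelBufferUnlockBaseAddress(buffer, [])

        let time = CMTime(value: CMTimeValue(index), timescale: fps)
        _ = adaptor.append(buffer, withPresentationTime: time)
    }

    input.markAsFinished()
    writer.finishWriting { /* finishing is asynchronous; inspect writer.status / writer.error here */ }
}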

Creating a video from the visible window or view

I have a requirement to record video from the visible view, i.e. not from the camera, like in the Talking Tom application. Can anyone suggest a solution?
You can take screenshots (for example with UIGetScreenImage) and store the images in an array.
Then convert them into a video with an MPEG encoder.
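A minimal sketch of the capture half using public API (UIGraphicsImageRenderer and drawHierarchy(in:afterScreenUpdates:) rather than UIGetScreenImage); the collected frames can then be written out with AVAssetWriter as sketched in the earlier answers:

import UIKit

// Sketch: snapshot a view on each display refresh and collect the frames.
final class ViewCapturer: NSObject {
    private(set) var frames: [UIImage] = []
    private var displayLink: CADisplayLink?
    private let view: UIView

    init(view: UIView) {
        self.view = view
        super.init()
    }

    func start() {
        displayLink = CADisplayLink(target: self, selector: #selector(captureFrame))
        displayLink?.add(to: .main, forMode: .common)
    }

    func stop() { displayLink?.invalidate() }

    @objc private func captureFrame() {
        let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
        let image = renderer.image { _ in
            _ = view.drawHierarchy(in: view.bounds, afterScreenUpdates: false)
        }
        // In a real recorder, stream each frame to the writer instead of
        // buffering them all in memory.
        frames.append(image)
    }
}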