Approach for recording grayscale video on iPhone?

I am building an iPhone app that needs to record grayscale video and save it to the camera roll. I'm stumped about how best to approach this.
I am thinking along the following lines:
Use a shader and OpenGL ES to transform the video to grayscale
Use AVFoundation (AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor) to write the video to a file.
My questions are:
Is this the right approach (simplest, best performance)?
If so, what would be the best way to go from the OpenGL ES output to a CVPixelBufferRef input for the AVAssetWriterInputPixelBufferAdaptor?
If not, what would be a better approach?
Any nudge in the right direction is much appreciated!

In general, I'd agree with this approach. Doing your processing in an OpenGL ES 2.0 shader should be the most performant way of doing video frame alteration like this, but it won't be very simple. Fortunately, you can start from a pre-existing template that already does this.
You can use the sample application I wrote here (and explained here) as a base. I use custom shaders in this example to track colors in an image, but you could easily alter this to convert the video frames to grayscale (I even saw someone do this once). The code for feeding camera video into a texture and processing it could be used verbatim from that sample.
In one of the display options within that application, I render the processed image first to a framebuffer object, then use glReadPixels() to pull the resulting image back into bytes that I can work with on the CPU. You could use this to get the raw image data back after the GPU has processed a frame, then feed those bytes into CVPixelBufferCreateWithBytes() to generate your CVPixelBufferRef for writing to disk.
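A minimal Swift sketch of that hand-off, under a few assumptions: a 640x480 framebuffer is currently bound, the scene was rendered through the color-swizzling shader described in the next paragraph (so the RGBA-ordered read actually contains BGRA bytes), and the backing buffer outlives the pixel buffer, since no release callback is set:

    import OpenGLES
    import CoreVideo

    let width = 640, height = 480
    let bytesPerRow = width * 4

    // This buffer backs the CVPixelBuffer, so it must stay alive while the buffer is in use.
    let rawBytes = UnsafeMutableRawPointer.allocate(byteCount: bytesPerRow * height,
                                                    alignment: MemoryLayout<UInt8>.alignment)

    // Pull the rendered frame back from the GPU.
    glReadPixels(0, 0, GLsizei(width), GLsizei(height),
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), rawBytes)

    // Wrap the bytes (no copy) for the AVAssetWriterInputPixelBufferAdaptor. The swizzling
    // shader means these bytes are BGRA-ordered, matching the buffer format below.
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height,
                                              kCVPixelFormatType_32BGRA, rawBytes, bytesPerRow,
                                              nil, nil, nil, &pixelBuffer)
    assert(status == kCVReturnSuccess)
    // adaptor.append(pixelBuffer!, withPresentationTime: frameTime) then writes the frame.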
(Edit: 2/29/2012) As an update to this, I just implemented this kind of video recording in my open source GPUImage framework, so I can comment on the specific performance for the encoding part of this. It turns out that you can capture video from the camera, perform live filtering on it, grab it from OpenGL ES using glReadPixels(), and write that out as live H.264 video in 640x480 frames on an iPhone 4 at 30 FPS (the maximum camera framerate).
There were a few things that I needed to do in order to get this recording speed. You need to make sure that you set your AVAssetWriterInputPixelBufferAdaptor to use kCVPixelFormatType_32BGRA as its color format for input pixel buffers. Then, you'll need to re-render your RGBA scene using a color-swizzling shader to provide BGRA output when using glReadPixels(). Without this color setting, your video recording framerates drop to 5-8 FPS on an iPhone 4, whereas with it they easily hit 30 FPS. You can look at the GPUImageMovieWriter class source code to see more about how I did this.
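For reference, a rough Swift sketch of the corresponding writer setup (the output path and 640x480 size are placeholders):

    import AVFoundation

    let outputURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("out.mov")
    let writer = try! AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: 640,
        AVVideoHeightKey: 480
    ])
    writerInput.expectsMediaDataInRealTime = true

    // The adaptor must be told to expect 32BGRA pixel buffers for the fast path described above.
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: writerInput,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: 640,
            kCVPixelBufferHeightKey as String: 480
        ])
    writer.add(writerInput)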
Using the GPUImage framework, your above filtering and encoding task can be handled by simply creating a GPUImageVideoCamera, attaching a target of a GPUImageSaturationFilter with the saturation set to 0, and then attaching a GPUImageMovieWriter as a target of that. The framework will handle the OpenGL ES interactions for you. I've done this, and it works well on all iOS devices I've tested.
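In Swift, that GPUImage pipeline looks roughly like this (a sketch assuming the Objective-C GPUImage framework is bridged in; the output path and 640x480 size are placeholders):

    import UIKit
    import AVFoundation
    import GPUImage

    let camera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.vga640x480.rawValue,
                                     cameraPosition: .back)

    // A saturation of 0 produces a grayscale image.
    let grayscale = GPUImageSaturationFilter()
    grayscale.saturation = 0.0

    let movieURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("grayscale.m4v")
    let movieWriter = GPUImageMovieWriter(movieURL: movieURL, size: CGSize(width: 640, height: 480))

    // camera -> saturation filter -> movie writer; the framework handles the OpenGL ES work.
    camera.addTarget(grayscale)
    grayscale.addTarget(movieWriter)

    camera.audioEncodingTarget = movieWriter
    movieWriter.startRecording()
    camera.startCameraCapture()
    // Later: movieWriter.finishRecording(), then save the resulting file to the camera roll.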

Related

How can I load a Gigapixel image as a material in SceneKit?

I’m trying to create an AR image to project on a wall from a Gigapixel image. Obviously Xcode crashes if I try to load the image as a material. Is there an efficient way to load only parts of the image that the user is looking at?
I'm using Swift 4.
This may not do exactly what you want, and you might need to roll your own way of parsing and passing data between Core Animation and SceneKit, but CATiledLayer is native, is designed to handle large images and texture data sources, and can feed them out asynchronously and/or on demand:
https://developer.apple.com/documentation/quartzcore/catiledlayer
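If it helps, here is a rough Swift sketch of that idea, assuming SCNMaterialProperty.contents will accept the layer on your OS version (CALayer is listed among its supported content types); the sizes, tile delegate, and node geometry are all placeholders you would replace:

    import SceneKit
    import QuartzCore

    let tiledLayer = CATiledLayer()
    tiledLayer.frame = CGRect(x: 0, y: 0, width: 4096, height: 4096) // backing size, not the full gigapixel image
    tiledLayer.tileSize = CGSize(width: 512, height: 512)
    tiledLayer.levelsOfDetail = 4
    // tiledLayer.delegate = tileDelegate  // a hypothetical object that loads and draws the right tile per rect

    let wallMaterial = SCNMaterial()
    // SceneKit samples the layer's rendered output; if this path gives you trouble,
    // render the visible tiles into an image yourself and assign that instead.
    wallMaterial.diffuse.contents = tiledLayer

    let wall = SCNNode(geometry: SCNPlane(width: 3.0, height: 2.0))
    wall.geometry?.firstMaterial = wallMaterial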

Apply custom camera filters on live camera preview - Swift

I'm looking to make a native iPhone iOS application in Swift 3/4 which uses the live preview of the back-facing camera and allows users to apply filters like in the built-in Camera app. The idea was for me to create my own filters by adjusting hue/RGB/brightness levels, etc. Eventually I want to create a HUE slider which allows users to filter for specific colours in the live preview.
All of the answers I came across for a similar problem were posted > 2 years ago and I'm not even sure if they provide me with the relevant, up-to-date solution I am looking for.
I'm not looking to take a photo and then apply a filter afterwards. I'm looking for the same functionality as the native Camera app. To apply the filter live as you are seeing the camera preview.
How can I create this functionality? Can this be achieved using AVFoundation? AVKit? Can this functionality be achieved with ARKit perhaps?
Yes, you can apply image filters to the camera feed by capturing video with the AVFoundation Capture system and using your own renderer to process and display video frames.
Apple has a sample code project called AVCamPhotoFilter that does just this, and shows multiple approaches to the process, using Metal or Core Image. The key points are to:
Use AVCaptureVideoDataOutput to get live video frames.
Use CVMetalTextureCache or CVPixelBufferPool to make the video pixel buffers accessible to your favorite rendering technology.
Draw the textures using Metal (or OpenGL, or whatever), with a Metal shader or Core Image filter doing the pixel processing on the GPU during your render pass (a capture-setup sketch follows this list).
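As a rough Swift sketch of the capture side only (the class and queue names here are illustrative, not taken from the sample project):

    import AVFoundation

    final class CameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()
        private let videoQueue = DispatchQueue(label: "camera.video.queue")

        func start() throws {
            guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                       for: .video, position: .back) else { return }
            session.addInput(try AVCaptureDeviceInput(device: device))

            let output = AVCaptureVideoDataOutput()
            // BGRA is convenient for both Metal textures and Core Image.
            output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                        kCVPixelFormatType_32BGRA]
            output.setSampleBufferDelegate(self, queue: videoQueue)
            session.addOutput(output)
            session.startRunning()
        }

        // Called once per frame; hand the pixel buffer to your Metal or Core Image renderer here.
        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            // e.g. CIImage(cvPixelBuffer: pixelBuffer) -> apply a CIFilter -> draw with a CIContext/MTKView
            _ = pixelBuffer
        }
    }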
BTW, ARKit is overkill if all you want to do is apply image processing to the camera feed. ARKit is for when you want to know about the camera’s relationship to real-world space, primarily for purposes like drawing 3D content that appears to inhabit the real world.

Fast Texture Data Update on iOS

I need to update the texture data for one texture on every frame. Is there any way of doing this very fast? The best option for me would be something similar to GL_APPLE_client_storage, but this is not supported on iOS. The obvious solution is to call glTexImage2D every frame, but this copies the data, and I would also have to keep the same texture in memory twice.
Texture caches (added in iOS 5.0) provide the equivalent functionality to GL_APPLE_client_storage on iOS. On initial creation, they do seem to trigger glTexImage2D(), but I believe that subsequent updates behave in the manner of the Mac's GL_APPLE_client_storage.
They provide a particular performance boost when dealing with camera frames, as AV Foundation is optimized for this case. I describe how this works in detail within this answer for the camera. For raw data, you can create your own CVPixelBufferRef to be used for this, and then write to its internal contents to update the texture directly.
You have to be a little careful with this, as you can be overwriting texture data while a scene is being rendered, leading to tearing and other artifacts.
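To give a feel for the API, here is a rough Swift sketch of that texture-cache path for raw data (assuming an OpenGL ES 2.0 context and a 640x480 BGRA buffer; error handling omitted):

    import OpenGLES
    import CoreVideo

    let eaglContext = EAGLContext(api: .openGLES2)!
    EAGLContext.setCurrent(eaglContext)

    var textureCache: CVOpenGLESTextureCache?
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, eaglContext, nil, &textureCache)

    // A pixel buffer you own; IOSurface backing lets the GPU see your CPU-side writes.
    var pixelBuffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferIOSurfacePropertiesKey as String: [:] as [String: Any]] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault, 640, 480, kCVPixelFormatType_32BGRA, attrs, &pixelBuffer)

    // Bind the pixel buffer's contents as a GL texture; no per-frame glTexImage2D() copy.
    let glBGRA = GLenum(0x80E1) // GL_BGRA_EXT from <OpenGLES/ES2/glext.h>
    var cvTexture: CVOpenGLESTexture?
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache!, pixelBuffer!,
                                                 nil, GLenum(GL_TEXTURE_2D), GL_RGBA, 640, 480,
                                                 glBGRA, GLenum(GL_UNSIGNED_BYTE), 0, &cvTexture)
    glBindTexture(CVOpenGLESTextureGetTarget(cvTexture!), CVOpenGLESTextureGetName(cvTexture!))

    // Per frame: lock the buffer, write your new bytes into its base address, unlock.
    CVPixelBufferLockBaseAddress(pixelBuffer!, [])
    // memcpy(CVPixelBufferGetBaseAddress(pixelBuffer!), newFrameBytes, 640 * 480 * 4)
    CVPixelBufferUnlockBaseAddress(pixelBuffer!, [])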
Maybe you should use glTexSubImage2D to update the texture. See here
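For example (a sketch; textureID and newPixelBytes are hypothetical names for a texture you created earlier with glTexImage2D and this frame's RGBA data at the same dimensions):

    import OpenGLES

    func updateTexture(_ textureID: GLuint, width: Int, height: Int, newPixelBytes: [UInt8]) {
        glBindTexture(GLenum(GL_TEXTURE_2D), textureID)
        // glTexSubImage2D overwrites the existing storage instead of reallocating the
        // texture, unlike calling glTexImage2D every frame.
        glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0, 0, 0,
                        GLsizei(width), GLsizei(height),
                        GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), newPixelBytes)
    }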

Apply a filter algorithm for each frame of the camera

I am working on an iPhone application.
I need to do the following: when the user taps the "Camera" tab, the camera opens inside the view with circle overlays.
I want to apply a filtering algorithm on the camera.
I am looking for the best way to do this. Is there a library that can help?
What I am doing currently:
I am using the OpenCV library.
I define a timer.
On each timer tick I call the cvCaptureFromCam() method from the OpenCV framework (this captures a picture from the camera and returns it).
I apply the algorithm to the captured image.
I display the image in a UIImageView.
The idea is that on each timer tick I get the image, filter it, and put it in the UIImageView. If the timer ticks fast enough, the result will appear continuous.
However, cvCaptureFromCam is a little slow and this whole process takes too much memory.
Any suggestions for a better way are greatly appreciated. Thanks
Anything that's based on CPU-bound processing, such as OpenCV, is probably going to be too slow for live video filtering on current iOS devices. As I state in this answer, I highly recommend looking to OpenGL ES for this.
As mentioned by CSmith, I've written an open source framework called GPUImage for doing this style of GPU-based filtering without having to worry about the underlying OpenGL ES involved. Most of the filters in this framework can be applied to live video at 640x480 at well over the 30 FPS framerate of the iOS camera. I've been gradually adding filters with the goal of replacing all of those present in Core Image, as well as most of the image processing functions of OpenCV. If there's something I'm missing from OpenCV that you need, let me know on the issues page for the project.
Build and run the FilterShowcase example application to see a full listing of the available filters and how they perform on live video sources, and look at the SimplePhotoFilter example to see how you can apply those filters to preview video and photos taken by the camera.
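A minimal Swift sketch of the live-preview path with GPUImage (bridged from Objective-C; the sepia filter stands in for whatever per-frame algorithm you need, and the view controller name is illustrative):

    import UIKit
    import AVFoundation
    import GPUImage

    class FilterViewController: UIViewController {
        var camera: GPUImageVideoCamera!   // keep a strong reference so capture continues

        override func viewDidLoad() {
            super.viewDidLoad()

            camera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.vga640x480.rawValue,
                                         cameraPosition: .back)

            let filter = GPUImageSepiaFilter()
            let previewView = GPUImageView(frame: view.bounds)
            view.addSubview(previewView)

            // Camera frames flow through the filter and straight to the on-screen view.
            camera.addTarget(filter)
            filter.addTarget(previewView)
            camera.startCameraCapture()
        }
    }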

How to create textures from images?

I am using many fruit and vegetable images in an iPhone game. How can I turn such images into textures? From what I've read, every image used in OpenGL ES has to be a texture, so what should I do?
I have developed 6 iOS applications, but this is my first game, so please point me in the right direction so that I can get the idea.
You use glTexImage2D to upload raw pixel data to OpenGL in order to populate a texture. You can use Core Graphics, and particularly CGBitmapContextCreate, to get the raw pixel data of (or convert to raw pixel data) anything else Core Graphics can draw, which for you probably means a CGImageRef, obtained either through a C API load of a PNG or JPG, or just by using the result of [someUIImage CGImage].
Apple's GLSprite sample (you'll need to be logged in, and I'm not sure those links work externally, but do a search in the Developer Library if necessary) is probably a good starting point. I'm not 100% behind the class structure, but if you look into EAGLView.m, lines 272 to 305, the code there loads a PNG from disk then does the necessary steps to post it off to OpenGL, with a decent amount of commenting.
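A rough Swift sketch of that pipeline, drawing a UIImage into a bitmap context and uploading the resulting bytes with glTexImage2D ("apple.png" in the usage comment is a placeholder asset name):

    import UIKit
    import OpenGLES

    func makeTexture(from image: UIImage) -> GLuint {
        guard let cgImage = image.cgImage else { return 0 }
        let width = cgImage.width, height = cgImage.height

        // Draw the image into an RGBA bitmap to get raw pixel data. Note that Core Graphics'
        // origin differs from OpenGL's, so you may want to flip the context first.
        var pixels = [UInt8](repeating: 0, count: width * height * 4)
        let context = CGContext(data: &pixels, width: width, height: height,
                                bitsPerComponent: 8, bytesPerRow: width * 4,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

        // Upload the bytes to a new OpenGL ES texture.
        var texture: GLuint = 0
        glGenTextures(1, &texture)
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA,
                     GLsizei(width), GLsizei(height), 0,
                     GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)
        return texture
    }

    // Usage: let textureID = makeTexture(from: UIImage(named: "apple.png")!)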