How can I load a Gigapixel image as a material in SceneKit? - swift

I’m trying to create an AR image to project on a wall from a gigapixel image. Unsurprisingly, the app crashes if I try to load the whole image as a material. Is there an efficient way to load only the parts of the image that the user is looking at?
I'm using Swift 4.

This may not do exactly what you want, and you might need to roll your own way of parsing and passing data between Core Animation and SceneKit, but it is native, it is designed to handle large images and texture data sources, and it can feed them out asynchronously and/or on demand:
https://developer.apple.com/documentation/quartzcore/catiledlayer
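For illustration, here is a minimal Swift sketch of how that could be wired up, assuming you have already sliced the gigapixel image into tile images on disk. The TileDelegate class and the loadTile(for:) helper are hypothetical names, not Core Animation or SceneKit API, and whether SceneKit actually drives tile loading from what the user is looking at is exactly the part you may have to roll yourself:

```swift
import SceneKit
import QuartzCore

// Hypothetical tile provider – back this with the pre-sliced tiles of your image.
final class TileDelegate: NSObject, CALayerDelegate {
    func draw(_ layer: CALayer, in ctx: CGContext) {
        // CATiledLayer calls this once per visible tile, on background threads.
        let tileRect = ctx.boundingBoxOfClipPath
        if let tile = loadTile(for: tileRect) {        // assumed helper, see below
            ctx.draw(tile, in: tileRect)
        }
    }

    private func loadTile(for rect: CGRect) -> CGImage? {
        // Look up and decode the pre-sliced tile covering `rect` – implementation omitted.
        return nil
    }
}

let tileDelegate = TileDelegate()

let tiledLayer = CATiledLayer()
tiledLayer.frame = CGRect(x: 0, y: 0, width: 2048, height: 2048) // backing size, not the full image size
tiledLayer.tileSize = CGSize(width: 512, height: 512)
tiledLayer.levelsOfDetail = 8              // however many zoom levels you sliced
tiledLayer.delegate = tileDelegate
tiledLayer.setNeedsDisplay()

// SceneKit accepts a CALayer as material contents.
let wallMaterial = SCNMaterial()
wallMaterial.diffuse.contents = tiledLayer
```

Keep a strong reference to the delegate for as long as the layer exists; CALayer only holds its delegate weakly.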

Related

Apply custom camera filters on live camera preview - Swift

I'm looking to make a native iPhone iOS application in Swift 3/4 which uses the live preview of the back-facing camera and allows users to apply filters like in the built-in Camera app. The idea is to create my own filters by adjusting hue/RGB/brightness levels, etc. Eventually I want to create a hue slider which allows users to filter for specific colours in the live preview.
All of the answers I came across for a similar problem were posted > 2 years ago and I'm not even sure if they provide me with the relevant, up-to-date solution I am looking for.
I'm not looking to take a photo and then apply a filter afterwards. I'm looking for the same functionality as the native Camera app: applying the filter live as you watch the camera preview.
How can I create this functionality? Can this be achieved using AVFoundation? AVKit? Can this functionality be achieved with ARKit perhaps?
Yes, you can apply image filters to the camera feed by capturing video with the AVFoundation Capture system and using your own renderer to process and display video frames.
Apple has a sample code project called AVCamPhotoFilter that does just this, and shows multiple approaches to the process, using Metal or Core Image. The key points are to:
Use AVCaptureVideoDataOutput to get live video frames.
Use CVMetalTextureCache or CVPixelBufferPool to get the video pixel buffers accessible to your favorite rendering technology.
Draw the textures using Metal (or OpenGL, or whatever you prefer), with a Metal shader or Core Image filter doing the pixel processing on the GPU during your render pass (see the sketch after this list).
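For what it's worth, here is a minimal Core Image sketch of those steps in Swift 4. The class and property names are illustrative (not AVFoundation API), error handling and permission checks are skipped, and the actual on-screen rendering (MTKView, GLKView, or similar) is left to you:

```swift
import AVFoundation
import CoreImage

final class FilteredCameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let hueFilter = CIFilter(name: "CIHueAdjust")!

    /// Called with each filtered frame (on the capture queue); render it in a MTKView, GLKView, etc.
    var onFilteredFrame: ((CIImage) -> Void)?

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        hueFilter.setValue(CIImage(cvPixelBuffer: pixelBuffer), forKey: kCIInputImageKey)
        hueFilter.setValue(1.0, forKey: kCIInputAngleKey)   // radians; wire this to your hue slider
        if let filtered = hueFilter.outputImage {
            onFilteredFrame?(filtered)
        }
    }
}
```

The Metal approach in the sample works the same way at the capture end; the difference is that the pixel buffer gets wrapped as a Metal texture via CVMetalTextureCache instead of a CIImage.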
BTW, ARKit is overkill if all you want to do is apply image processing to the camera feed. ARKit is for when you want to know about the camera’s relationship to real-world space, primarily for purposes like drawing 3D content that appears to inhabit the real world.

Loading in a tileset with C#?

I've got a very large tileset with over 200 different 64x64 tiles. It would be practical to load them in a C# script. I've tried the following (inside a for loop):
tileset[x] = Resources.Load<Sprite>("Sprites/Tileset/Tileset_" + x);
In my Assets folder, Sprites is the folder containing Tileset.png. Tileset.png is sliced into a grid of 64x64 tiles, and I can see that Unity has sliced all of them correctly.
Is there a way I can load them, and put them in a Sprite array (Sprite[]) in code? What would be the correct path?
Looking for performance? Then batching is caring...
If loaded individually, each tile will cost you a draw call. I would recommend that you look at creating spritesheets to batch the loading and rendering of these tiles. We have been using TexturePacker to build spritesheets for our apps, displaying 200+ images at once with one load call, one draw call, and great performance. Alongside the desktop app that creates the spritesheets, there's a Unity plugin on the Asset Store to load and manage them.
Note: we use the pro version.
Otherwise, you can just load all the resources at a given path with Resources.LoadAll.

Fast Texture Data Update on iOS

I need to update the data for one texture every frame. Is there any way of doing this very fast? The best option for me would be something similar to GL_APPLE_client_storage, but that is not supported on iOS. The obvious solution is to call glTexImage2D every frame, but that copies the data, and I would also have to keep the same texture in memory twice.
Texture caches (added in iOS 5.0) provide the equivalent functionality to GL_APPLE_client_storage on iOS. On initial creation, they do seem to trigger glTexImage2D(), but I believe that subsequent updates behave in the manner of the Mac's GL_APPLE_client_storage.
They provide a particular performance boost when dealing with camera frames, as AV Foundation is optimized for this case. I describe how this works in detail within this answer for the camera. For raw data, you can create your own CVPixelBufferRef to be used for this, and then write to its internal contents to update the texture directly.
You have to be a little careful with this, as you can be overwriting texture data while a scene is being rendered, leading to tearing and other artifacts.
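As a rough Swift sketch of that raw-data path (the context, dimensions, and the commented-out memcpy are assumptions, and all error checking is omitted):

```swift
import CoreVideo
import OpenGLES

let eaglContext = EAGLContext(api: .openGLES2)!    // or your existing rendering context
let width = 1024, height = 1024                    // your texture dimensions

// One-time setup: a texture cache and a pixel buffer whose memory you can write into directly.
var textureCache: CVOpenGLESTextureCache?
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, eaglContext, nil, &textureCache)

var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferOpenGLESCompatibilityKey as String: true] as CFDictionary
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA, attrs, &pixelBuffer)

// Wrap the pixel buffer in a GL texture backed by the same memory.
var cvTexture: CVOpenGLESTexture?
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache!, pixelBuffer!, nil,
                                             GLenum(GL_TEXTURE_2D), GL_RGBA,
                                             GLsizei(width), GLsizei(height),
                                             GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE),
                                             0, &cvTexture)
let textureID = CVOpenGLESTextureGetName(cvTexture!)   // bind this when drawing

// Per frame: lock, overwrite the bytes, unlock – no glTexImage2D copy.
CVPixelBufferLockBaseAddress(pixelBuffer!, [])
// memcpy(CVPixelBufferGetBaseAddress(pixelBuffer!), newFrameBytes, bytesPerFrame)
CVPixelBufferUnlockBaseAddress(pixelBuffer!, [])
```

Because the texture and the pixel buffer share memory, the lock/write/unlock cycle is all that is needed per frame; just keep the tearing caveat above in mind.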
Maybe you should use glTexSubImage2D to update the texture. See here.
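A bare-bones sketch of that approach, assuming the texture was allocated once up front with glTexImage2D at its final size:

```swift
import OpenGLES

// `textureID` must already exist at the right size; `frameBytes` points at this frame's RGBA data.
func updateTexture(_ textureID: GLuint, width: Int, height: Int, frameBytes: UnsafeRawPointer) {
    glBindTexture(GLenum(GL_TEXTURE_2D), textureID)
    // Re-uploads into the existing allocation instead of creating a new texture each frame.
    glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0, 0, 0,
                    GLsizei(width), GLsizei(height),
                    GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), frameBytes)
}
```

Note that this still copies the bytes; it just avoids reallocating the texture storage every frame.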

Apply a filter algorithm to each frame of the camera

I am working on an iPhone application.
I need to do the following: when the user taps the "Camera" tab, the camera opens inside the view with circle overlays.
I want to apply a filtering algorithm to the camera feed.
I am looking for the best way to do this. Is there a library that can help?
What I am doing currently:
I am using the OpenCV library.
I define a timer.
On each timer tick I call the cvCaptureFromCAM() method from the OpenCV framework (this captures a picture from the camera and returns it).
I apply the algorithm to the captured image.
I display the image in a UIImageView.
The idea is that on each timer tick I get the image, filter it, and put it in the UIImageView. If the timer ticks fast enough, the result appears continuous.
However, cvCaptureFromCAM() is a little slow, and the whole process takes too much memory.
Any suggestions for a better way are greatly appreciated. Thanks!
Anything that's based on CPU-bound processing, such as OpenCV, is probably going to be too slow for live video filtering on current iOS devices. As I state in this answer, I highly recommend looking to OpenGL ES for this.
As mentioned by CSmith, I've written an open source framework called GPUImage for doing this style of GPU-based filtering without having to worry about the underlying OpenGL ES involved. Most of the filters in this framework can be applied to live video at 640x480 at well over the 30 FPS framerate of the iOS camera. I've been gradually adding filters with the goal of replacing all of those present in Core Image, as well as most of the image processing functions of OpenCV. If there's something I'm missing from OpenCV that you need, let me know on the issues page for the project.
Build and run the FilterShowcase example application to see a full listing of the available filters and how they perform on live video sources, and look at the SimplePhotoFilter example to see how you can apply those filters to preview video and photos taken by the camera.
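As a rough, Swift-flavored sketch of that chain (GPUImage itself is Objective-C, assumed here to be integrated as an importable framework; the sepia filter is just a stand-in for whatever processing you want):

```swift
import GPUImage
import AVFoundation
import UIKit

// Camera → filter → on-screen view, with the framework handling the OpenGL ES work.
let camera: GPUImageVideoCamera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.vga640x480.rawValue,
                                                      cameraPosition: .back)
camera.outputImageOrientation = .portrait

let filter = GPUImageSepiaFilter()                    // any GPUImage filter subclass works here
let previewView = GPUImageView(frame: UIScreen.main.bounds)

camera.addTarget(filter)
filter.addTarget(previewView)
camera.startCameraCapture()
```

Add previewView to your view hierarchy (for example as the camera tab's view) and the filtered feed renders into it directly, with no timers or UIImageView round trips.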

Approach for recording grayscale video on iPhone?

I am building an iPhone app that needs to record grayscale video and save it to the camera roll. I'm stumped about how best to approach this.
I am thinking along the following lines:
Use a shader and OpenGL ES to transform the video to grayscale.
Use AVFoundation (AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor) to write the video to the file.
My questions are:
Is this the right approach (simplest, best performance)?
If so, what would be the best way to go from the OpenGL output to a CVPixelBufferRef input for the AVAssetWriterInputPixelBufferAdaptor?
If not, what would be a better approach?
Any nudge in the right direction is much appreciated!
In general, I'd agree with this approach. Doing your processing in an OpenGL ES 2.0 shader should be the most performant way of doing video frame alteration like this, but it won't be very simple. Fortunately, you can start from a pre-existing template that already does this.
You can use the sample application I wrote here (and explained here) as a base. I use custom shaders in this example to track colors in an image, but you could easily alter this to convert the video frames to grayscale (I even saw someone do this once). The code for feeding camera video into a texture and processing it could be used verbatim from that sample.
In one of the display options within that application, I render the processed image first to a framebuffer object, then use glReadPixels() to pull the resulting image back into bytes that I can work with on the CPU. You could use this to get the raw image data back after the GPU has processed a frame, then feed those bytes into CVPixelBufferCreateWithBytes() to generate your CVPixelBufferRef for writing to disk.
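A rough Swift sketch of that hand-off (the function name and memory handling are illustrative; it assumes the framebuffer you want to read is currently bound and, per the note that follows, that the scene was rendered with swizzled BGRA output):

```swift
import Foundation
import CoreVideo
import OpenGLES

// Pulls the current framebuffer back to the CPU and wraps it for the pixel buffer adaptor.
func pixelBufferFromCurrentFramebuffer(width: Int, height: Int) -> CVPixelBuffer? {
    let bytesPerRow = width * 4
    guard let bytes = malloc(bytesPerRow * height) else { return nil }

    // Read the processed frame back from the GPU.
    glReadPixels(0, 0, GLsizei(width), GLsizei(height),
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), bytes)

    var buffer: CVPixelBuffer?
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height,
                                 kCVPixelFormatType_32BGRA,   // matches the adaptor's expected format
                                 bytes, bytesPerRow,
                                 { _, baseAddress in free(UnsafeMutableRawPointer(mutating: baseAddress)) },
                                 nil, nil, &buffer)
    return buffer
}
```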
(Edit: 2/29/2012) As an update to this, I just implemented this kind of video recording in my open source GPUImage framework, so I can comment on the specific performance for the encoding part of this. It turns out that you can capture video from the camera, perform live filtering on it, grab it from OpenGL ES using glReadPixels(), and write that out as live H.264 video in 640x480 frames on an iPhone 4 at 30 FPS (the maximum camera framerate).
There were a few things that I needed to do in order to get this recording speed. You need to make sure that you set your AVAssetWriterInputPixelBufferAdaptor to use kCVPixelFormatType_32BGRA as its color format for input pixel buffers. Then, you'll need to re-render your RGBA scene using a color-swizzling shader to provide BGRA output when using glReadPixels(). Without this color setting, your video recording framerates will drop to 5-8 FPS on an iPhone 4, where with it they are easily hitting 30 FPS. You can look at the GPUImageMovieWriter class source code to see more about how I did this.
Using the GPUImage framework, your above filtering and encoding task can be handled by simply creating a GPUImageVideoCamera, attaching a target of a GPUImageSaturationFilter with the saturation set to 0, and then attaching a GPUImageMovieWriter as a target of that. The framework will handle the OpenGL ES interactions for you. I've done this, and it works well on all iOS devices I've tested.
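In rough Swift terms, that chain looks like this (GPUImage is Objective-C, assumed to be importable as a framework; the output URL and frame size are assumptions):

```swift
import GPUImage
import AVFoundation
import Foundation

let movieURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("grayscale.m4v")

let camera: GPUImageVideoCamera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.vga640x480.rawValue,
                                                      cameraPosition: .back)
camera.outputImageOrientation = .portrait

let desaturate = GPUImageSaturationFilter()
desaturate.saturation = 0.0                           // 0 = grayscale

let movieWriter: GPUImageMovieWriter = GPUImageMovieWriter(movieURL: movieURL,
                                                           size: CGSize(width: 480, height: 640))
camera.audioEncodingTarget = movieWriter              // also record the microphone track

camera.addTarget(desaturate)
desaturate.addTarget(movieWriter)

camera.startCameraCapture()
movieWriter.startRecording()

// Later, when the user stops recording:
// movieWriter.finishRecording()
// camera.stopCameraCapture()
```

Saving the finished file to the camera roll is a separate step (for example via the Photos framework); the framework only writes the movie file for you.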