Can Metal implement a swap chain? - swift

I tried to make it behave like a swap chain using
renderPassDescriptor.colorAttachments[0].loadAction,
but it wasn't possible because it fetched a drawable that wasn't the previous one.
Because a drawable's texture cannot be swapped, I think a swap chain cannot be implemented.
Can Metal implement a swap chain?

In Metal, swapchains are implemented by retrieving a new MTLDrawable from the CAMetalLayer on each frame, and retrieving the MTLTexture from that drawable.
The number of available drawables is very limited (typically 3), so you need to manage these carefully in your render loop.
Apple's documentation is not great on this topic, but you can find more information here. That document puts the swap-chain logic in a custom view, but you don't have to do so; you can write it as part of a separate renderer class, etc.
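To make that concrete, here is a minimal sketch of the per-frame pattern, assuming `metalLayer` is your view's CAMetalLayer and `commandQueue` is an existing MTLCommandQueue (both names are placeholders, and error handling is omitted):

```swift
import Metal
import QuartzCore

// Minimal per-frame drawable handling for a CAMetalLayer-backed view.
func drawFrame(metalLayer: CAMetalLayer, commandQueue: MTLCommandQueue) {
    // Ask the layer for the next drawable from its internal pool (usually 2-3).
    // This can return nil or block if all drawables are still in flight.
    guard let drawable = metalLayer.nextDrawable() else { return }

    let renderPassDescriptor = MTLRenderPassDescriptor()
    renderPassDescriptor.colorAttachments[0].texture = drawable.texture
    renderPassDescriptor.colorAttachments[0].loadAction = .clear
    renderPassDescriptor.colorAttachments[0].storeAction = .store
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)

    guard let commandBuffer = commandQueue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else { return }

    // ... encode draw calls here ...
    encoder.endEncoding()

    // Presenting returns the drawable to the pool once the GPU finishes with it.
    commandBuffer.present(drawable)
    commandBuffer.commit()
}
```

One note on the original question: because the drawables are pooled, using a .load load action preserves whatever is already in the texture of the drawable you were handed, which is the content from a few frames back rather than from the drawable you presented last, matching the behavior described above.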

Related

How can I load a Gigapixel image as a material in SceneKit?

I’m trying to create an AR image to project on a wall from a Gigapixel image. Obviously Xcode crashes if I try to load the image as a material. Is there an efficient way to load only parts of the image that the user is looking at?
I'm using Swift 4.
This may not do exactly what you want, and you might need to roll your own way of parsing and passing data between Core Animation and SceneKit, but it is native, designed to handle large images and texture data sources, and can feed them out asynchronously and/or on demand:
https://developer.apple.com/documentation/quartzcore/catiledlayer
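As a rough illustration of that approach, the sketch below backs a UIView with a CATiledLayer so only the visible tiles are drawn; `loadTileImage(for:)` is a hypothetical helper you would implement against your own pre-sliced tile files:

```swift
import UIKit

// A UIView backed by CATiledLayer. The draw callback is invoked asynchronously,
// and only for the tiles that are currently visible.
class TiledImageView: UIView {
    override class var layerClass: AnyClass { return CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        if let tiledLayer = layer as? CATiledLayer {
            tiledLayer.tileSize = CGSize(width: 256, height: 256)
            tiledLayer.levelsOfDetail = 4        // zoomed-out levels
            tiledLayer.levelsOfDetailBias = 2    // zoomed-in levels
        }
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func draw(_ rect: CGRect) {
        // `rect` covers a single tile; draw a placeholder, then only that
        // portion of the source image (e.g. a pre-sliced tile file on disk).
        UIColor(white: 0.9, alpha: 1).setFill()
        UIRectFill(rect)
        // Hypothetical helper: if let tile = loadTileImage(for: rect) { tile.draw(in: rect) }
    }
}
```

Bridging this into SceneKit (for example by rendering the view's layer into a material) is the part you would still need to work out yourself, as the answer notes.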

Fast Texture Data Update on iOS

I need to update the texture data for one texture every frame. Is there any way of doing this very fast? The best option for me would be something similar to GL_APPLE_client_storage, but that is not supported on iOS. The obvious solution is to call glTexImage2D every frame, but this copies the data and also forces me to keep the same texture in memory twice.
Texture caches (added in iOS 5.0) provide the equivalent functionality to GL_APPLE_client_storage on iOS. On initial creation, they do seem to trigger glTexImage2D(), but I believe that subsequent updates behave in the manner of the Mac's GL_APPLE_client_storage.
They provide a particular performance boost when dealing with camera frames, as AV Foundation is optimized for this case. I describe how this works in detail within this answer for the camera. For raw data, you can create your own CVPixelBufferRef to be used for this, and then write to its internal contents to update the texture directly.
You have to be a little careful with this, as you can be overwriting texture data while a scene is being rendered, leading to tearing and other artifacts.
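As a rough sketch of that raw-data path (the Core Video and OpenGL ES calls are real, but the function and variable names here are just illustrative), you can back a texture with your own CVPixelBuffer via a texture cache and then write into the buffer each frame:

```swift
import OpenGLES
import CoreVideo

// Back an OpenGL ES texture with a CVPixelBuffer through a texture cache, so
// writing into the pixel buffer updates the texture without an extra copy.
// Assumes `context` is the EAGLContext you render with.
func makeCachedTexture(context: EAGLContext, width: Int, height: Int)
        -> (CVOpenGLESTextureCache, CVPixelBuffer, CVOpenGLESTexture)? {
    var cache: CVOpenGLESTextureCache?
    guard CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &cache) == kCVReturnSuccess,
          let textureCache = cache else { return nil }

    // IOSurface backing is what lets the GPU see CPU-side writes.
    let attrs: [String: Any] = [kCVPixelBufferIOSurfacePropertiesKey as String: [:] as [String: Any]]
    var buffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32BGRA, attrs as CFDictionary, &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer else { return nil }

    var texture: CVOpenGLESTexture?
    guard CVOpenGLESTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, textureCache, pixelBuffer, nil,
            GLenum(GL_TEXTURE_2D), GL_RGBA, GLsizei(width), GLsizei(height),
            GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE), 0, &texture) == kCVReturnSuccess,
          let glTexture = texture else { return nil }

    // Per frame: CVPixelBufferLockBaseAddress, write new pixels to the base
    // address, unlock, then bind CVOpenGLESTextureGetName(glTexture) and draw.
    return (textureCache, pixelBuffer, glTexture)
}
```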
Maybe you should use glTexSubImage2D to update the texture. See here
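If you go that route, a minimal per-frame update might look like the following sketch, assuming `textureID` is an existing texture whose size and format match the incoming data:

```swift
import OpenGLES

// Re-upload the texture's contents in place; no new texture object is created,
// but the driver still copies `pixels` on each call.
func updateTexture(textureID: GLuint, width: GLsizei, height: GLsizei,
                   pixels: UnsafeRawPointer) {
    glBindTexture(GLenum(GL_TEXTURE_2D), textureID)
    glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0,   // mip level
                    0, 0,                       // x/y offset
                    width, height,
                    GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE),
                    pixels)
}
```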

EAGLContext, EAGLSharegroups, RenderBuffers, FrameBuffers, oh my!

I'm trying to wrap my head around the OpenGL object model on iPhone OS. I'm currently rendering into a few different UIViews (built on CAEAGLLayers) on the screen. Each of these currently uses a separate EAGLContext, each of which has a color renderbuffer and a framebuffer.
I'm rendering similar things in them, and I'd like to share textures between these instances to save memory overhead.
My current understanding is that I could use the same setup (some number of contexts, each with an FBO/RBO), but if I spawn the later ones using the EAGLSharegroup of the first one, then I can simply use the texture names (GLuints) from the first context in the later ones. Is this accurate?
If this is the case, I guess the followup question is: what's the benefit to having it be a "sharegroup"? Could I just reuse the same context, and attach multiple FBOs/RBOs to that context? I think I'm struggling with the abstraction layer of a sharegroup, which seems to share "objects" (textures and other named things) but not "state" (matrices, enabled/disabled states) which are owned by the context.
What's the best way to think of this?
Thanks for any enlightenment!
That’s correct—when two EAGLContexts are created with the same EAGLSharegroup, they share the same view of buffer objects, textures, renderbuffers, and framebuffers. If your contexts are using OpenGL ES 2.0, they’ll share shaders and program objects as well.
One of the biggest use cases for multiple contexts using the same sharegroup would be the ability to load resources asynchronously from another thread while you’re rendering. That doesn’t seem like what you’re doing here, and it doesn’t seem like having persistent context state is an issue for you, so you might be better off sticking with a single EAGLContext and just stashing the reference to it somewhere where all the objects that might need it can see it. You’d be able to change which views you’re rendering to simply by binding the appropriate framebuffer and color renderbuffer.
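For illustration, both options look roughly like this (a sketch only, using the ES 2.0 API; framebuffer setup and error handling are omitted):

```swift
import OpenGLES

// Option 1: two contexts that share objects via the same sharegroup.
// Texture names created in one context are valid in the other.
let firstContext = EAGLContext(api: .openGLES2)!
let secondContext = EAGLContext(api: .openGLES2, sharegroup: firstContext.sharegroup)!

// Option 2: a single context reused for every view; switch render targets by
// binding the framebuffer/renderbuffer that belongs to the view being drawn.
EAGLContext.setCurrent(firstContext)
// glBindFramebuffer(GLenum(GL_FRAMEBUFFER), viewFramebuffer)          // per-view FBO
// glBindRenderbuffer(GLenum(GL_RENDERBUFFER), viewColorRenderbuffer)  // per-view RBO
// ... draw ...
// firstContext.presentRenderbuffer(Int(GL_RENDERBUFFER))
```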

Richer iPhone Interfaces With Library Components?

My iPhone development is stepping up a notch and I'm looking at the UI. We're thinking of having a few nice interface-y features - things like dragging and dropping images onto one another from a gallery list, or similar.
How far does the basic iPhone interface stretch? Do most people create their own interfaces and code, and if so what's the base there? CoreGraphics? OpenGL?
I don't want to reinvent the wheel, but neither do I want to take an overcomplicated option if someone's done the work already.
There are several tiers to Cocoa-based interfaces. Generally, I recommend working at the highest level of abstraction that meets your needs for presentation and performance.
The base UIKit elements that you can place using Interface Builder or code are designed to handle the most common cases within an application's interface. These provide some degree of customization, depending on their type, but what you see is generally all you get. On the iPhone, Apple even tries to maintain a certain look and feel for these stock elements by rejecting applications during review that use them in ways that contradict the Human Interface Guidelines.
The next level down is custom UIViews. These can be made to look like anything through the use of Quartz drawing within the -drawRect: method. You can do your own touch handling by overriding methods like -touchesBegan:withEvent: or by using the new UIGestureRecognizers. Given the level of customization you can do here, this is where most people stop when tweaking their interfaces.
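As a tiny illustration of this tier (a generic sketch, not tied to the question above), a custom UIView can combine Quartz drawing with touch handling like so:

```swift
import UIKit

// A minimal custom view: Quartz drawing in draw(_:) plus basic touch handling.
class BadgeView: UIView {
    var fillColor = UIColor.blue {
        didSet { setNeedsDisplay() }   // redraw whenever the state changes
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.setFillColor(fillColor.cgColor)
        context.fillEllipse(in: bounds.insetBy(dx: 2, dy: 2))
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        fillColor = .red               // respond to a touch by changing state
    }
}
```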
You can go a little lower than this by working with Core Animation layers and animations. You don't gain a lot, performance-wise, by using CALayers instead of UIViews on the iPhone, but they can be useful if you want to craft visual items that use the same code on Mac and iPhone. Custom animations may be required if you want to do something more than animate a view between two states linearly. You can even do some limited 3-D work using Core Animation.
Finally, there is OpenGL ES for display of full 3-D scenes and for really high-performance graphics display. This is about as close to the metal as you're going to get when dealing with the iPhone display system, and it shows in the amount and complexity of code you have to write. For complex 3-D work, this is what you will need to use, but for 2-D and even rudimentary 3-D I recommend looking first to Core Animation because of the code it can save you. Only if performance is unacceptable should you go to OpenGL ES.
Now, just because you need to use one of these technologies to work with part of your interface does not mean that it can't coexist with the others. UIViews are backed by Core Animation layers, and even OpenGL ES renders into a CALayer which can be placed in a view. Again, use the highest level of abstraction that is appropriate for that part of your interface.

Fastest iPhone Blit Routine?

I have a UIView subclass onto which I need to blit a UIImage. There are several ways to skin this cat depending on which series of APIs you prefer to use, and I'm interested in the fastest. Would it be UIImage's drawAtPoint or drawRect? Or perhaps the C-based CoreGraphics routines, or something else? I have no qualms about altering my source image data format if it'll make the blitting that much faster.
To describe my situation my app has anywhere from ~10 to ~200 small UIViews (64x64), a subset of which will need to be redrawn based on user interaction. My current implementation is a call to drawAtPoint inside my UIView subclass' drawRect routine. If you can think of a better way to handle this kind of scenario, I'm all ears (well, eyes).
Using an OpenGL view may be fastest of all. Keep an age cache of images (or if you know a better way to determine when certain images can be removed from the cache, by all means use that) and preload as many images as you can while the app is idle. It should be very quick, with almost no Objective-C calls involved (just -draw)
While not a "blit" at all, given the requirements of the problem (many small images with various state changes) I was able to keep the different states to redraw in their own separate UIImageView instances, and just showed/hid the appropriate instance given the state change.
Since CALayer is lightweight and fast, I would give it a try.
Thierry
The fastest blit implementation you are going to find is in my AVAnimator library; it contains an ARM asm implementation of a blit into a CoreGraphics buffer, so have a look at the source. The way you could make use of it would be to create a single graphics context the size of the whole screen, blit your specific image changes into that single context, then create a UIImage from it and set it as the image of a UIImageView. That involves one GPU upload per refresh, so it does not depend on how many images you render into the buffer.
But you will likely not need to go that low level. You should first try making each 64x64 image into a CALayer and then updating each layer with the contents of an image that is exactly the size of the layer (64x64). The only tricky part is that you will want to decompress each of your original images if they come from PNG or JPEG files. You do that by creating another pixel buffer and rendering the original image into it, so that all of the PNG or JPEG decompression is done before you start setting CALayer contents.
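A rough sketch of that decompression step (names and sizes are illustrative, and this is not code from AVAnimator itself):

```swift
import UIKit

// Force-decompress a UIImage by redrawing it into a bitmap context, so the
// PNG/JPEG decode cost is paid up front rather than when the layer is shown.
func decompressedImage(_ image: UIImage, size: CGSize) -> CGImage? {
    UIGraphicsBeginImageContextWithOptions(size, true, image.scale)
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: size))
    return UIGraphicsGetImageFromCurrentImageContext()?.cgImage
}

// Later, update a 64x64 tile layer without triggering any further decoding:
// tileLayer.contents = decompressedImage(newImage, size: CGSize(width: 64, height: 64))
```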