Fast Texture Data Update on iOS

I need to update the texture data for one texture every frame. Is there any way to do this very fast? The best option for me would be something similar to GL_APPLE_client_storage, but that is not supported on iOS. The obvious solution is to call glTexImage2D every frame, but this copies the data and also means I have to keep the same texture in memory twice.

Texture caches (added in iOS 5.0) provide the equivalent functionality to GL_APPLE_client_storage on iOS. On initial creation, they do seem to trigger glTexImage2D(), but I believe that subsequent updates behave in the manner of the Mac's GL_APPLE_client_storage.
They provide a particular performance boost when dealing with camera frames, as AV Foundation is optimized for this case; I describe how this works for the camera in detail in another answer. For raw data, you can create your own CVPixelBufferRef and then write to its internal contents to update the texture directly.
You have to be a little careful with this, as you can end up overwriting texture data while a scene is being rendered, leading to tearing and other artifacts.
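
A rough sketch of that raw-data path (not the original answer's code; eaglContext, width, and height are assumed to exist in your renderer) might look like this:

    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    // One-time setup.
    CVOpenGLESTextureCacheRef textureCache = NULL;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                 eaglContext, NULL, &textureCache);

    // IOSurface backing is what lets the cache share memory with the GPU.
    NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attrs, &pixelBuffer);

    CVOpenGLESTextureRef texture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
        GL_TEXTURE_2D, GL_RGBA, (GLsizei)width, (GLsizei)height,
        GL_BGRA_EXT, GL_UNSIGNED_BYTE, 0, &texture);

    // Per frame: write into the buffer instead of calling glTexImage2D.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *base = CVPixelBufferGetBaseAddress(pixelBuffer);
    // ...fill 'base' with width * height * 4 bytes of BGRA data...
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    glBindTexture(CVOpenGLESTextureGetTarget(texture),
                  CVOpenGLESTextureGetName(texture));

Because the pixel buffer is IOSurface-backed, writes to its base address show up in the texture without a separate glTexImage2D upload.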

Maybe you should use glTexSubImage2D to update the texture; it replaces the contents of an existing texture rather than allocating new storage the way glTexImage2D does.
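
For illustration, a minimal per-frame update (assuming texture was allocated once with glTexImage2D, and pixels points at a full frame of RGBA data):

    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D,
                    0,              // mip level
                    0, 0,           // x/y offset into the existing texture
                    width, height,  // region to replace (here, the whole image)
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);

The driver may still copy the data internally, so it's worth profiling this against the texture-cache approach above.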

Related

How can I load a Gigapixel image as a material in SceneKit?

I’m trying to create an AR image to project on a wall from a Gigapixel image. Obviously Xcode crashes if I try to load the image as a material. Is there an efficient way to load only parts of the image that the user is looking at?
I'm using Swift 4.
This may not do exactly what you want, and you might need to roll your own way of parsing and passing data between Core Animation and SceneKit, but CATiledLayer is native, is designed to handle large images and texture data sources, and can feed them out asynchronously and/or on demand:
https://developer.apple.com/documentation/quartzcore/catiledlayer
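
A minimal sketch of the usual CATiledLayer pattern, in Objective-C (the same API is available from Swift); -tileForRect: is a hypothetical helper that assumes the gigapixel image has been pre-sliced into 256x256 tiles:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface TiledImageView : UIView
    @end

    @implementation TiledImageView

    // Backing the view with a CATiledLayer makes UIKit call -drawRect:
    // per tile, on background threads, only for regions that are visible.
    + (Class)layerClass {
        return [CATiledLayer class];
    }

    - (instancetype)initWithFrame:(CGRect)frame {
        if ((self = [super initWithFrame:frame])) {
            CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
            tiledLayer.tileSize = CGSizeMake(256.0, 256.0);
            tiledLayer.levelsOfDetail = 4;  // assumes a pre-sliced 4-level pyramid
        }
        return self;
    }

    // Hypothetical tile loader: maps 'rect' to one pre-sliced tile image.
    - (UIImage *)tileForRect:(CGRect)rect {
        NSString *name = [NSString stringWithFormat:@"tile_%.0f_%.0f",
                          rect.origin.x / 256.0, rect.origin.y / 256.0];
        return [UIImage imageNamed:name];
    }

    - (void)drawRect:(CGRect)rect {
        [[self tileForRect:rect] drawInRect:rect];
    }

    @end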

What's a good approach to implement a smudge tool for a drawing program on the iPad?

At a high level (or low level if you'd like), what's a good way to implement a smudge effect for a drawing program on the iPad using Quartz 2D (Core Graphics)? Has anyone tried this?
Thanks so much in advance for your wisdom!
UPDATE: I found this great article for those interested, check it out!
Link now at: http://losingfight.com/blog/2007/09/05/how-to-implement-smudge-and-stamp-tools/
I would suggest implementing a similar algorithm to what is detailed in that article using OpenGL ES 2.0 to get the best performance.
1. Get the starting image as a texture.
2. Set up a render-to-texture framebuffer.
3. Render the initial image in a quad.
4. Render another quad the size of your brush with a slightly shifted view of the image, multiplied by an alpha mask stored in a texture or defined by, for example, a Gaussian function. Use alpha blending with the background quad.
5. Render this texture into a framebuffer associated with your CAEAGLLayer-backed view.
6. Go to 1 on the next -touchesMoved event, with the result from your previous rendering as the input. Keep in mind you'll want to have two texture objects to "ping-pong" between, as you can't read from and write to the same texture at once (see the sketch after this list).
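
A minimal sketch of that ping-pong setup in OpenGL ES 2.0 (canvasTex and the brush-drawing steps are placeholders, not the answerer's actual code):

    // Assumes two canvas-sized textures were created up front with glTexImage2D.
    GLuint fbo;
    GLuint canvasTex[2];
    int src = 0, dst = 1;

    glGenFramebuffers(1, &fbo);

    // One smudge step: read canvasTex[src], write canvasTex[dst].
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, canvasTex[dst], 0);

    glBindTexture(GL_TEXTURE_2D, canvasTex[src]);
    // 1. Draw the full-canvas quad with the current image (step 3 above).
    // 2. Draw the brush-sized quad, sampling the image at a slightly shifted
    //    offset and multiplying by the alpha mask, with blending on (step 4):
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // Swap so this result becomes the input on the next touch event (step 6).
    int tmp = src; src = dst; dst = tmp;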
I think it's unlikely you're going to get great performance on the CPU, but it's definitely easier to set up that way. With the OpenGL setup, though, you can have an essentially unlimited brush size, and you're not looping over image-drawing code.
Curious about what sort of performance you do get on the CPU, though. Take care :)

Fastest iPhone Blit Routine?

I have a UIView subclass onto which I need to blit a UIImage. There are several ways to skin this cat depending on which series of APIs you prefer to use, and I'm interested in the fastest. Would it be UIImage's drawAtPoint or drawRect? Or perhaps the C-based CoreGraphics routines, or something else? I have no qualms about altering my source image data format if it'll make the blitting that much faster.
To describe my situation my app has anywhere from ~10 to ~200 small UIViews (64x64), a subset of which will need to be redrawn based on user interaction. My current implementation is a call to drawAtPoint inside my UIView subclass' drawRect routine. If you can think of a better way to handle this kind of scenario, I'm all ears (well, eyes).
Using an OpenGL view may be fastest of all. Keep an age cache of images (or, if you know a better way to determine when certain images can be removed from the cache, by all means use that) and preload as many images as you can while the app is idle. It should be very quick, with almost no Objective-C calls involved (just -draw).
While not a "blit" at all, given the requirements of the problem (many small images with various state changes), I was able to keep the different states to redraw in their own separate UIImageView instances and just show/hide the appropriate instance on each state change.
Since CALayer is lightweight and fast, I would give it a try.
Thierry
The fastest blit implementation you are going to find is in my AVAnimator library; it contains an ARM asm implementation of a blit for a CoreGraphics buffer, so have a look at the source. The way you could make use of it would be to create a single graphics context the size of the whole screen, blit your specific image changes into that single context, then create a UIImage from it and set it as the image of a UIImageView. That would involve one GPU upload per refresh, so it will not depend on how many images you render into the buffer.
But you will likely not need to go that low-level. You should first try making each 64x64 image into a CALayer and then updating each layer with the contents of an image that is the exact size of the layer, 64x64. The only tricky thing is that you will want to decompress each of your original images if they come from PNG or JPEG files. You do that by creating another pixel buffer and rendering the original image into it, so that all the PNG or JPEG decompression is done before you start setting CALayer contents.
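
As a sketch of that decompression step (not AVAnimator code; "tile64" is a made-up asset name), you could render the image into a bitmap context once and hand the result to a layer:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // Force-decompress a PNG once into a bitmap context, then reuse the
    // resulting CGImage as layer contents.
    UIImage *original = [UIImage imageNamed:@"tile64"];
    size_t side = 64;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, side, side, 8, side * 4,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedFirst |
                                             kCGBitmapByteOrder32Little);
    CGContextDrawImage(ctx, CGRectMake(0, 0, side, side), original.CGImage);
    CGImageRef decompressed = CGBitmapContextCreateImage(ctx);

    CALayer *tileLayer = [CALayer layer];
    tileLayer.frame = CGRectMake(0, 0, side, side);
    tileLayer.contents = (__bridge id)decompressed;  // no drawRect: involved

    CGImageRelease(decompressed);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);

After this, updating a piece is just another contents assignment.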

Need help optimizing my 2d drawing on iPhone

I'm writing a game that displays 56 hexagon pieces filling the screen in the shape of a board. I'm currently drawing each piece using a singleton rendering class that, when called to draw a piece, creates a path from 6 points based on the coordinate passed in. This path is filled with a solid color, and then a 59x59 PNG with an alpha-to-white gradient is overlaid on the drawing to give the piece a shiny look. Note that I'm currently doing this in Core Graphics.
My first thought is that creating a path every time I draw is costly, and it seems like I could somehow do this once and then reuse it, but I'm not sure of the best approach. When I look at the bottlenecks with Shark, it looks like drawing the PNG is the most taxing part of the process. I've tried rendering just the PNG overlay and just the path without the overlay, and both give me some frame gains, although removing the PNG overlay yields the most frames.
My current thought is that at startup I should render 6 paths (1 for each color of piece I have), overlay them with the PNG, store an image of each of these pieces, and then just redraw those images each time I need them. Is there an efficient mechanism for storing something you've drawn once and redrawing it? It kind of sounds like I'd be running into the whole drawing-PNGs-too-often thing again, but maybe there's a less taxing method that does a similar thing...
Any suggestions are much appreciated.
Thanks!
You might try CGLayer or CALayer.
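
For the CGLayer route, a minimal caching sketch (DrawHexPieceInto is a hypothetical stand-in for the existing path-and-overlay drawing code):

    // Render the piece once into a CGLayer, then stamp the cached layer on
    // every subsequent draw.
    extern void DrawHexPieceInto(CGContextRef layerCtx);  // your existing code

    static CGLayerRef pieceLayer = NULL;

    static void DrawPiece(CGContextRef ctx, CGPoint origin) {
        if (pieceLayer == NULL) {
            pieceLayer = CGLayerCreateWithContext(ctx, CGSizeMake(59, 59), NULL);
            CGContextRef layerCtx = CGLayerGetContext(pieceLayer);
            DrawHexPieceInto(layerCtx);  // fill path + PNG overlay, done once
        }
        // Cheap on later frames: one cached-layer blit, no path, no PNG decode.
        CGContextDrawLayerAtPoint(ctx, origin, pieceLayer);
    }

CGLayer keeps the rendered piece in a form optimized for repeated drawing into the same kind of context, so the path creation and PNG compositing happen only once.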
General thoughts:
Game programming on iPhone usually necessitates OpenGL. Core Graphics is a bit easier to work with, but OpenGL is optimized for speed.
Prerender this "shiny look" into the textures as much as is possible (as in: do it in Photoshop before you even insert them into your project). Alpha blending is hell on performance.
Maybe try PVRTC, as it's a compressed texture format designed by the manufacturer of the iPhone's GPU. Then again, this could make things worse depending on where your bottleneck is.
If you really need speed you have to go the OpenGL route. Be careful if you want to mix OpenGL and Core Animation, they can conflict.
OpenGL is a pain if you haven't done much with it. It sounds like you could use Core Animation and make each tile a layer. CA doesn't redraw a layer unless you change something, so you should be able to just move the layers around without taking a big hit. Also note that CA stores the layer contents in texture memory, so it should be much faster.
Some others have mentioned that you should use OpenGL. Here's a nice introduction specifically for the iPhone: OpenGL ES from the Ground Up: Table of Contents
You might also want to look at cocos2d. It seems to be significantly faster than using CoreAnimation in my tests, and provides lots of useful stuff for games.

Performance and background images for OpenGL ES/iPhone

I'm developing a 2D game for the iPhone using OpenGL ES and I'd like to use a 320x480 bitmapped image as a persistent background.
My first thought was to create a 320x480 quad and then map a texture onto it that represents the background. So... I created a 512x512 texture with a 320x480 image on it. Then I mapped that to the 320x480 quad.
I draw this background every frame and then draw animated sprites on top of it. This works fine except that the drawing of all of these objects (background + sprites) is too slow.
I did some testing and discovered that my slowdown is in the pixel pipeline. Not surprisingly, the large background image is the main culprit. To prove this, I removed the background draw and everything else rendered very fast.
I am looking for advice on how to keep my background and also improve performance.
Here's some more info:
1) I am currently testing on the Simulator (still waiting on Apple for the license)
2) The background is a PVR texture squeezed down to 128k
3) I had hoped that there might be a way to cache this background into a color buffer but haven't had any luck with that. That may be due to my inexperience with OpenGL ES, or it just might be a stupid idea that won't work :)
4) I realize that the entire background does not always have to refresh, just the parts that have been drawn over by the moving sprites. I started to look into techniques for refreshing (as necessary) parts of the background, either as separate textures or with a scissor box, but this seems less than elegant.
Any tips/advice would be greatly appreciated...
Thank you.
Do not do performance testing on the simulator. Ever!
The differences to the real hardware are huge. In both directions.
If you draw the background every frame:
- Do not clear the framebuffer. The background will overdraw the whole thing anyway.
- Do you really need a background texture? What about using a color gradient via vertex colors instead?
- Try using the 2-bit PVRTC mode for the texture.
- Turn off all render steps that you do not need for the background, e.g. lighting, blending, depth test, ... (see the sketch below).
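
As a rough illustration (ES 1.1 names; which of this state your renderer actually enables is an assumption):

    // Opaque, full-screen background: none of this state helps.
    glDisable(GL_LIGHTING);
    glDisable(GL_BLEND);
    glDisable(GL_DEPTH_TEST);
    // And skip glClear(GL_COLOR_BUFFER_BIT), since the background
    // overwrites every pixel anyway.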
If you could post some of your drawing code it would be a lot easier to help you.
If you're making a 2D game, is there any reason you aren't using an existing library? Specifically, cocos2d for iPhone may be worth your time. I can't answer your question about how to fix the issue doing it all yourself, but I can say that I've done exactly what you're talking about (one full-screen background with sprites on top) with cocos2d, and it works great. (Assuming 60 fps is fast enough for you.) You may have your reasons for doing it yourself, but if you can, I would highly suggest at least doing a quick prototype with cocos2d and seeing if that doesn't help you along. (Details and source for the iPhone version are here: http://code.google.com/p/cocos2d-iphone/)
Thanks to everyone who provided info on this. All of the advice helped out in one way or another.
However, I wanted to make it clear that the main issue here turned out to be the behavior of the Simulator itself (as implied by Andreas in his response). Once I was able to get the application onto the device, it performed much, much better. I mention this because, prior to developing my game, I had seen a lot of posts indicating that the device was much slower than the Simulator. This might be true in some instances (e.g. general application logic), but in my experience animation (particularly 3D transformations) is much faster on the device.
I don't have much experience with OpenGL ES, but this problem occurs generally.
Your idea about the color buffer is good intuition; essentially you want to store your background in a framebuffer and load it directly into your rendering buffer before drawing the foreground.
In OpenGL this is fairly straightforward with Framebuffer Objects (FBOs). Unfortunately, I don't think OpenGL ES supports them, but it might give you somewhere to start looking.
You may want to try using VBOs (Vertex Buffer Objects) and see if that speeds things up.
In addition, OpenGL ES 1.1 offers the OES_draw_texture extension (glDrawTexOES), which is designed for "fast rendering of background paintings, bitmapped font glyphs, and 2D framing elements in games".
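
A sketch of how that might look (backgroundTex is an assumed, already-loaded texture; the crop rect picks the 320x480 sub-region out of the 512x512 texture):

    // OES_draw_texture (OpenGL ES 1.1): blit a texture sub-rect to the screen.
    GLint crop[4] = { 0, 0, 320, 480 };  // x, y, width, height in the texture
    glBindTexture(GL_TEXTURE_2D, backgroundTex);
    glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop);
    glDrawTexfOES(0.0f, 0.0f, 0.0f, 320.0f, 480.0f);  // x, y, z, width, height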
You could use frame buffer objects similar to the GLPaint example from Apple.
Use a texture atlas to minimize the number of draw calls you make. You can use glTexCoordPointer to set texture coordinates that map each image to its correct position. Remember to set your vertex buffer too. Ideally one draw call will render your entire 2D scene.
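
A minimal sketch with ES 1.1 client-side arrays (atlasTex is an assumed atlas texture; the sprite here is taken to occupy the atlas's top-left quarter, hence texture coordinates in [0, 0.5]):

    // Two triangles covering one 64x64 sprite, texcoords picked from the atlas.
    static const GLfloat verts[] = {
         0.0f,  0.0f,   64.0f,  0.0f,    0.0f, 64.0f,
        64.0f,  0.0f,   64.0f, 64.0f,    0.0f, 64.0f,
    };
    static const GLfloat coords[] = {
        0.0f, 0.0f,   0.5f, 0.0f,   0.0f, 0.5f,
        0.5f, 0.0f,   0.5f, 0.5f,   0.0f, 0.5f,
    };

    glBindTexture(GL_TEXTURE_2D, atlasTex);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, coords);
    glDrawArrays(GL_TRIANGLES, 0, 6);

Appending every sprite's vertices and atlas coordinates to one pair of arrays lets the whole scene go out in a single glDrawArrays call.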
Avoid enabling/disabling states where possible.