How to modify a bound texture in OpenGL ES 1.1 - iPhone

My platform is iPhone - OpenGL ES 1.1
I'm looking for the tutorial about modifying or drawing to a texture.
For example:
I have a background texture: (just a blank blue-white gradient image)
and an object texture:
I need to draw the object onto the background many times, so to optimize performance I want to draw it into the background texture like this:
Does anyone know the fastest way to do this?
Thanks a lot!

Do you want to draw it into the background texture and then keep that, or overlay it, or what? I'm not entirely sure what the question is asking.
To draw onto the background and then reuse that, you'll want to create another texture, or a pbuffer/FBO, and bind that. Draw a full-screen quad with your background image, then draw additional quads with the overlays as needed. The bound texture will then hold the results, composited as necessary, and can be used as a texture or copied out. This is typically known as render-to-texture (RTT) and is commonly used to combine images or produce other dynamic image effects.
To optimize the performance here, you'll want to reuse the texture containing the final results. This will reduce the render cost from whatever it may have been (1 background + 4 faces) to a single background draw.
Edit: This article seems to have a rather good breakdown of OpenGL ES RTT. Some good information in this one as well, though not ES-specific.
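For reference, a minimal sketch of that setup on the iPhone using the GL_OES_framebuffer_object extension (ES 1.1 has no core FBO). The helpers drawBackgroundQuad(), drawObjectQuads() and defaultFramebuffer are placeholders for your own code, not anything from the question:

#include <OpenGLES/ES1/gl.h>
#include <OpenGLES/ES1/glext.h>

extern void drawBackgroundQuad(void);   /* placeholder: full-screen quad with the background */
extern void drawObjectQuads(void);      /* placeholder: the repeated object quads */

/* Sketch: build a texture-backed FBO, composite the background and the
   object quads into it once, then reuse the resulting texture every frame. */
GLuint createCompositeTexture(GLuint defaultFramebuffer)
{
    const GLsizei texSize = 512;         /* ES 1.1 wants power-of-two sizes */
    GLuint fbo, compositeTex;

    glGenTextures(1, &compositeTex);
    glBindTexture(GL_TEXTURE_2D, compositeTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texSize, texSize, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* allocate, no initial data */

    glGenFramebuffersOES(1, &fbo);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                              GL_TEXTURE_2D, compositeTex, 0);

    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) == GL_FRAMEBUFFER_COMPLETE_OES) {
        glViewport(0, 0, texSize, texSize);
        drawBackgroundQuad();
        drawObjectQuads();
    }

    /* Back to the screen framebuffer (remember to restore the viewport too);
       delete the FBO, keep the texture for reuse. */
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
    glDeleteFramebuffersOES(1, &fbo);
    return compositeTex;
}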
To overlay the decals, you simply need to draw them over the background. This is the same drawing method as in RTT, but without binding a texture as the render target. The result will not persist (it exists only in the back buffer), but it will give the same visual effect.
To optimize this method, you'll want to batch the decal drawing as much as possible. Assuming they all have the same properties and source texture, this is pretty easy. Bind the texture and set properties as needed, fill a chunk of memory with the corner vertices, and draw a lot of quads in one call. You can also draw them individually, one draw call per quad, but this is somewhat more expensive.
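As a rough illustration of the batching, here is what drawing every decal that shares one texture with a single glDrawArrays call can look like; the Vertex layout and the function name are made up for the example:

#include <OpenGLES/ES1/gl.h>

/* Sketch: draw many decals that share one texture in one call.
   Two triangles (6 vertices) per decal, interleaved position + texcoord. */
typedef struct { GLfloat x, y, u, v; } Vertex;

void drawDecalBatch(GLuint decalTexture, const Vertex *vertices, int decalCount)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, decalTexture);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].x);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].u);

    glDrawArrays(GL_TRIANGLES, 0, decalCount * 6);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}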

Related

Zooming in the GLPaint sample code

I would like to make an app where you can paint like in the GLPaint sample code, but also zoom in to paint in more detail within your painting.
But I get the feeling that OpenGL ES 1.0, which is used in the GLPaint app, is pretty difficult to learn and could be a bit of overkill for my needs.
If I change the main view's frame with the setFrame method to zoom with a gesture recognizer, the already-painted lines get erased with every change of the frame's size.
So I tried to realize it with another idea: in the touchesMoved method I add UIImageViews with an image of the brush at "many" positions. It is slower than the GLPaint app and a bit of a memory-management mess, but I don't see another way to go.
Any suggestions? Should I learn OpenGL ES 1.0 or 2.0, or try to realize the last idea?
You can certainly achieve what you are describing; however, it will require some effort.
Usually zooming is quite straightforward, as most OpenGL scenes do not rely on the accumulation buffer the way the GLPaint sample code does.
If you try to just zoom your view in GLPaint, your new painting will be drawn at some adjusted scale over your original drawing, which is almost certainly not what you want.
A workaround is, instead of drawing directly to your presenting screen buffer, to first render to a texture, then render that texture on a quad (or equivalent). That way the quad scene can be cleared and re-rendered on every frame refresh (at any scale you choose) while your paint buffer retains its accumulated content.
This has been tested and works.
I am quite sure the image view method will be overkill after drawing for a few minutes... You can do all the zooming quite nicely with OpenGL, and I suggest you do that. The best practice would be to create a canvas as large as possible so that when you zoom in you will not lose any resolution.
About zooming: Do not try to resize the GL frame, or any frame for that matter, because even if you manage to do that successfully you will lose resolution. You should use standard matrices to translate and scale the scene, or just play around with glOrtho (set its values to the rect you are currently seeing). Once you get that part working there are sadly two more things to do that require a bit of math: first, you will have to compute the new touch positions in the OpenGL scene, since locationInView: will not know about your zooming and translating; second, you probably need to scale the brush as well (make it smaller when the scene is bigger so you can draw details).
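For the touch-position part, a sketch of the mapping, assuming the visible region is the same rectangle you pass to glOrthof (all names here are illustrative):

/* Sketch: convert a UIKit touch point into scene coordinates when the
   visible region is the rect (orthoLeft..orthoRight, orthoBottom..orthoTop)
   passed to glOrthof. viewWidth/viewHeight are the view's size in points. */
typedef struct { float x, y; } ScenePoint;

ScenePoint touchToScene(float touchX, float touchY,
                        float viewWidth, float viewHeight,
                        float orthoLeft, float orthoRight,
                        float orthoBottom, float orthoTop)
{
    ScenePoint p;
    p.x = orthoLeft + (touchX / viewWidth) * (orthoRight - orthoLeft);
    /* UIKit's y axis points down, the ortho y axis points up. */
    p.y = orthoTop - (touchY / viewHeight) * (orthoTop - orthoBottom);
    return p;
}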
About the canvas: I do suggest you draw to an FBO rather than your main render buffer and present its texture to your main render scene. Note that the FBO will have a texture attached and that the texture will need power-of-two dimensions (create 2048x2048, or 4096x4096 for newer devices), but you will probably just be using some part of it to keep the same ratio as the screen (glViewport should do the job), so you will have to compute the texture coordinates. Overall the drawing mechanism doesn't change much.
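A sketch of the texture-coordinate computation for such a partially used power-of-two canvas (the sizes are example values):

#include <OpenGLES/ES1/gl.h>

/* Sketch: a 2048x2048 canvas texture of which only a screen-sized region
   is ever drawn to (via glViewport). The quad that presents the canvas
   samples just that sub-rectangle. */
#define CANVAS_SIZE 2048.0f
#define SCREEN_W     320.0f
#define SCREEN_H     480.0f

static const GLfloat canvasTexCoords[] = {
    0.0f,                   0.0f,
    SCREEN_W / CANVAS_SIZE, 0.0f,
    0.0f,                   SCREEN_H / CANVAS_SIZE,
    SCREEN_W / CANVAS_SIZE, SCREEN_H / CANVAS_SIZE,
};
/* Use with glTexCoordPointer(2, GL_FLOAT, 0, canvasTexCoords) and a
   GL_TRIANGLE_STRIP quad covering the whole screen. */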
So to sum this up: imagine you have a canvas (FBO) to which you apply a brush of a certain size and position on touch events; you then use that canvas as a texture and draw it on your main GL view.

Redundant image drawing with CGContextDrawImage

I have one image that I wish to redraw repeatedly over the screen, however, there is a large number of redraws per second, and drawing the image each time makes the app take a huge performance hit. Is there a way to somehow cache the CGImageRef or something that would make CGContextDrawImage perform faster?
Try using UIImageViews and see if it's fast enough. You are allowed to have many UIImageViews. You should set all of their image properties to the same instance of UIImage.
If it's for a game, you should just use a game engine (Unity, Cocos2D, etc.). They have already spent a lot of time figuring out how to make this stuff fast.
A CGLayerRef should be what you need.
From the Apple docs:
Layers are suited for the following:
High-quality offscreen rendering of drawing that you plan to reuse.
For example, you might be building a scene and plan to reuse the same background. Draw the background scene to a layer and then draw the layer whenever you need it. One added benefit is that you don’t need to know color space or device-dependent information to draw to a layer.
Repeated drawing. For example, you might want to create a pattern that consists of the same item drawn over and over. Draw the item to a layer and then repeatedly draw the layer, as shown in Figure 12-1. Any Quartz object that you draw repeatedly—including CGPath, CGShading, and CGPDFPage objects—benefits from improved performance if you draw it to a CGLayer. Note that a layer is not just for onscreen drawing; you can use it for graphics contexts that aren’t screen-oriented, such as a PDF graphics context.
https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/drawingwithquartz2d/dq_layers/dq_layers.html#//apple_ref/doc/uid/TP30001066-CH219-TPXREF101
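A minimal sketch of that pattern; the image, the positions and the destination context are assumed to come from your own drawing code:

#include <stddef.h>
#include <CoreGraphics/CoreGraphics.h>

/* Sketch: render the image into a CGLayer once, then stamp the layer many
   times. ctx is the destination context (e.g. from UIGraphicsGetCurrentContext()). */
void drawRepeatedImage(CGContextRef ctx, CGImageRef image,
                       const CGPoint *positions, size_t count)
{
    CGSize size = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
    CGLayerRef layer = CGLayerCreateWithContext(ctx, size, NULL);

    /* Draw the image into the layer's own context once. */
    CGContextRef layerCtx = CGLayerGetContext(layer);
    CGContextDrawImage(layerCtx, CGRectMake(0, 0, size.width, size.height), image);

    /* Stamp the cached layer wherever it is needed. */
    for (size_t i = 0; i < count; i++)
        CGContextDrawLayerAtPoint(ctx, positions[i], layer);

    CGLayerRelease(layer);
}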

OpenGL, primitives with opacity without visible overlap

I'm trying to draw semi-transparent primitives (lines, circles) in OpenGL ES using Cocos2d but can't avoid the visible overlap regions. Does anyone know how to solve this?
This is a problem you usually come across pretty often, even in 3D.
I'm not too familiar with Cocos2D, but one way to solve this in generic OpenGL is to fill the framebuffer with your desired alpha channel, switch the blending mode to glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA), and draw the primitives. The idea behind this is that you draw a primitive with the desired transparency, which is taken from the framebuffer, but in the process you mask the area you've drawn to, so that your subsequent primitives will be masked there.
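One literal reading of "fill the framebuffer with your desired alpha channel" is to write that value into the alpha channel only, leaving the colours already on screen untouched. A sketch under that assumption (the 0.5 opacity is just an example, and the colour buffer must actually have an alpha channel, e.g. an RGBA8 EAGL layer, for GL_DST_ALPHA to be usable):

#include <OpenGLES/ES1/gl.h>

extern void drawLinesAndCircles(void);   /* placeholder for the Cocos2D/GL drawing */

void drawNonOverlappingPrimitives(void)
{
    /* Write the desired opacity into the alpha channel only. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
    glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
    glClear(GL_COLOR_BUFFER_BIT);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    /* Each primitive takes its opacity from the framebuffer alpha; drawing
       also updates that alpha, masking later overlapping draws as described above. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
    drawLinesAndCircles();
    glDisable(GL_BLEND);
}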
Another approach is to render the whole thing to a texture or assemble the shape using polygons that don't overlap.
I'm not sure whether Cocos2D supports any of these…
I do not know what capabilities Cocos2D specifically provides, but I can see two options.
One: do not overlap like that, but rather construct more complex geometry such that each pixel is only covered once.
Two: use the stencil buffer to create a mask as you draw, and reject any pixels that are already masked.
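A sketch of the stencil option in plain ES 1.1, assuming the context was created with a stencil attachment; drawSemiTransparentPrimitives() stands in for the Cocos2D/GL drawing:

#include <OpenGLES/ES1/gl.h>

extern void drawSemiTransparentPrimitives(void);   /* placeholder */

/* Sketch: let each pixel be written at most once by using the stencil
   buffer as an "already drawn" mask. */
void drawWithStencilMask(void)
{
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);

    glEnable(GL_STENCIL_TEST);
    /* Pass only where the stencil value is still 0 (not yet drawn). */
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    /* Where a fragment passes, mark the pixel as drawn (value becomes 1). */
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawSemiTransparentPrimitives();

    glDisable(GL_STENCIL_TEST);
}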

OpenGL ES for iPhone blending not working

I'm a beginner to 3D graphics in general and I'm trying to make a 3D game for the iPhone, and more specifically, to use textures that contain transparency. I am able to load a texture (an 8 bit .png file) into OpenGL and map it to a square (made from a triangle strip) but the transparent parts of the image are not transparent when I run the app in the simulator - they take on the background colour, whatever it is set to, but obscure images that are further away. I am unable to post a screenshot as I am a new user, so my apologies for that. I will try to upload and link it some other way.
Even more annoying is that when I load the image into Apple's GLSprite example code, it works exactly as I want it to. I have copied the code from GLSprite's setupView into my project and it still doesn't work properly.
I am using the blend function:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I was under the impression that this is correct for what I want to do.
Is there something very basic I am missing here? Any help would be much appreciated as I am submitting this as a coursework project in a few weeks and would very much like it to work.
Let me break this down:
First of all, your transparent object is drawn.
At this point two things happen:
The pixels are drawn correctly to the back buffer
The depth buffer pixels are set in the depth buffer. Note that the depth buffer will write values all across your object, and transparency does not affect it.
You then draw other objects behind the transparent object.
But none of those objects' pixels will be drawn, because they fail the depth test against the values already written by the transparent object.
The solution to this problem is to draw your scene back-to-front (start with the things farthest away).
Hope that helps.
Edit: I'm assuming you are using the depth buffer here. If this isn't correct I'll consider writing another answer.
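Concretely, a frame drawn back-to-front might look like the sketch below; the two draw helpers are placeholders for your own geometry:

#include <OpenGLES/ES1/gl.h>

extern void drawDistantObjects(void);     /* placeholder: everything behind the square */
extern void drawTransparentSquare(void);  /* placeholder: the textured triangle strip */

/* Sketch: render back-to-front so the transparent texture blends over
   whatever has already been drawn behind it. */
void renderFrame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  /* premultiplied alpha, as in the question */

    drawDistantObjects();      /* farthest geometry first */
    drawTransparentSquare();   /* the transparent square last */
}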

What's the best way to create a "magnifying glass" on a 2D scene?

I'm working on a game where I need to let the player look at a plane (e.g., a wall) through a lens (e.g., a magnifying glass). The game is to run on the iPhone, so my choices are Core Animation or OpenGL ES.
My first idea (that I have not yet tried) is to do this using Core Animation.
Create the wall and objects on it using CALayers.
Use CALayer's renderInContext: method to create an image of the wall as a background layer.
Crop the image to the lens shape, scale it up, then draw it over the background.
Draw the lens frame and "shiny glass" layer on top of all that.
Notes:
I am a lot more familiar with Core Animation than OpenGL, so maybe there is a much better way to do this with OpenGL. (Please tell me!)
If I am using CALayers that are not attached to a view, do I have to manage all animations myself? Or is there a straightforward way to run them manually?
3D perspective is not important; I'm just magnifying a flat wall.
I'm concerned that doing all of the above will be too slow for smooth animation.
Before I commit a lot of code to writing this, my question is: do you see any pitfalls in the plan above, or can you recommend an easier way to do this?
I have implemented a magnifying glass on the iPhone using a UIView. CA was way too slow.
You can draw a CGImage into a UIView using its drawRect: method. Here are the steps in my drawRect: implementation (a code sketch follows below):
get the current context
create a path for clipping the view (circle)
scale the current transformation matrix (CTM)
move the current transformation matrix
draw the CGImage
You can have the CGImage prerendered, then it's in the graphics memory.
If you want something dynamic, draw it from scratch instead of drawing a CGImage.
Very fast, looks great.
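Here is a rough sketch of those drawRect: steps written as plain Core Graphics calls; the lens rect, focus point and magnification factor are illustrative, and in Objective-C this body would live inside the view's drawRect: method:

#include <CoreGraphics/CoreGraphics.h>

/* Sketch: clip to the lens, scale and translate the CTM so that
   focusPoint lands in the lens centre, then draw the prerendered image.
   ctx is the current context obtained in drawRect: (step 1). */
void drawMagnifiedImage(CGContextRef ctx, CGImageRef sceneImage,
                        CGRect lensRect, CGFloat magnification,
                        CGPoint focusPoint)
{
    CGContextSaveGState(ctx);

    /* create a path for clipping the view (circle) */
    CGContextAddEllipseInRect(ctx, lensRect);
    CGContextClip(ctx);

    /* scale and move the current transformation matrix */
    CGPoint lensCenter = CGPointMake(CGRectGetMidX(lensRect), CGRectGetMidY(lensRect));
    CGContextTranslateCTM(ctx, lensCenter.x, lensCenter.y);
    CGContextScaleCTM(ctx, magnification, magnification);
    CGContextTranslateCTM(ctx, -focusPoint.x, -focusPoint.y);

    /* draw the CGImage; note that in a UIKit view's flipped context the
       image may come out upside down unless you also flip the CTM */
    CGContextDrawImage(ctx, CGRectMake(0, 0,
                       CGImageGetWidth(sceneImage),
                       CGImageGetHeight(sceneImage)), sceneImage);

    CGContextRestoreGState(ctx);
}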
That is how I'd do it; it sounds like a good plan.
Whether you choose OGL or CA the basic principle is the same so I would stick with what you're more comfortable with.
1. Identify the region you wish to magnify.
2. Render this region to a separate surface.
3. Render any border/overlay on top of the surface.
4. Render your surface enlarged onto the main scene, clipping appropriately (a sketch of this step follows below).
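If you go the OpenGL route, step 4 can be sketched with the scissor test clipping the enlarged draw; lensTexture and drawEnlargedQuad() are placeholders for the surface captured in steps 2 and 3:

#include <OpenGLES/ES1/gl.h>

extern void drawEnlargedQuad(GLint x, GLint y, GLsizei w, GLsizei h);  /* placeholder quad draw */

/* Sketch of step 4: draw the previously captured lens texture enlarged
   over the main scene, clipped to the lens rectangle with the scissor test. */
void drawMagnifierOverlay(GLuint lensTexture,
                          GLint lensX, GLint lensY,
                          GLsizei lensW, GLsizei lensH)
{
    glEnable(GL_SCISSOR_TEST);
    glScissor(lensX, lensY, lensW, lensH);   /* window coordinates */

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lensTexture);
    drawEnlargedQuad(lensX, lensY, lensW, lensH);

    glDisable(GL_SCISSOR_TEST);
}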
In terms of performance you will have to try it and see (just make sure you test on actual hardware, because the simulator is far faster than the hardware). If it IS too slow then you can look at doing steps 2/3 less frequently, e.g. every 2-3 frames. This will give some magnification lag but it may be perfectly acceptable.
I suspect that performance between OGL and CA will be roughly equivalent. CA is built on top of the OGL libraries, but your cost is going to be doing the actual rendering, not the time spent in the layers.