I'm a beginner to 3D graphics in general and I'm trying to make a 3D game for the iPhone, more specifically one that uses textures containing transparency. I am able to load a texture (an 8-bit .png file) into OpenGL and map it to a square (made from a triangle strip), but the transparent parts of the image are not transparent when I run the app in the simulator - they take on the background colour, whatever it is set to, but they do obscure images that are further away. I am unable to post a screenshot as I am a new user, so my apologies for that. I will try to upload and link it some other way.
Even more annoying is that when I load the image into Apple's GLSprite example code, it works exactly as I want it to. I have copied the code from GLSprite's setupView into my project and it still doesn't work properly.
I am using the blend function:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I was under the impression that this is correct for what I want to do.
Is there something very basic I am missing here? Any help would be much appreciated as I am submitting this as a coursework project in a few weeks and would very much like it to work.
Let me break this down:
First of all your transparent object is drawn.
At this point two things happen:
The pixels are drawn correctly to the back buffer
The depth buffer pixels are set in the depth buffer. Note that the depth buffer will write values all across your object, and transparency does not affect it.
You then draw other objects behind the transparent object.
But none of these objects' pixels will be drawn, because they fail the depth test against the values the transparent object has already written - they are further away.
The solution to this problem is to draw your scene back-to-front (start with the things that are furthest away).
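A minimal sketch of that ordering in ES 1.1 (the draw* functions are hypothetical placeholders for your own drawing code; disabling depth writes for the transparent pass is a common extra safeguard rather than something the back-to-front rule strictly requires):

glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawOpaqueObjects();                          /* opaque geometry, any order */

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  /* premultiplied alpha */
glDepthMask(GL_FALSE);                        /* still test depth, but don't write it */
drawTransparentObjectsBackToFront();          /* sorted: furthest first */
glDepthMask(GL_TRUE);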
Hope that helps.
Edit: I'm assuming you are using the depth buffer here. If this isn't correct I'll consider writing another answer.
I would like to make an app where you can paint like in the GLPaint sample code, but also zoom in to paint finer detail within your painting.
But I get the feeling that OpenGL ES 1.0, which is what the GLPaint app uses, is pretty difficult to learn and could be a bit of overkill for my needs.
If I change the main view's frame with setFrame to zoom via a gesture recognizer, the already painted lines get erased with every change of the frame's size.
So I tried to realise it another way: in touchesMoved I add UIImageViews with an image of the brush at "many" positions. It is slower than the GLPaint app and a bit of a memory-management mess, but I don't see another way to go there.
Any suggestions? Should I learn OpenGL ES 1.0 or 2.0, or try to realise the last idea?
You can certainly achieve what you are describing, however it will require some effort.
Usually zooming is quite straightforward, as most OpenGL scenes do not accumulate their drawing over time the way the GLPaint sample code does.
If you just try to zoom the view in GLPaint, your new strokes will be drawn at some adjusted scale over your original drawing - which is almost certainly not what you want.
A workaround is, instead of drawing directly to the buffer you present on screen, to first render into a texture, then render that texture on a quad (or equivalent). That way the quad scene can be cleared and re-rendered on every frame refresh (at any scale you choose) while the paint texture keeps accumulating your strokes.
This has been tested and works.
I am quite sure the image view method will become unusable after drawing for a few minutes... You can do all the zooming quite nicely with OpenGL, and I suggest you do that. The best practice would be to create a canvas as large as possible, so when you zoom in you will not lose any resolution.
About zooming: do not try to resize the GL frame (or any frame, for that matter), because even if you manage to do that successfully you will lose resolution. You should use the standard matrices to translate and scale the scene, or just play around with glOrtho (set its values to the rect you are currently viewing). Once you have that part, there are sadly two more things to do that require a bit of math: first, you will have to compute the new touch positions in the OpenGL scene, as locationInView will not know about your zooming and translating; second, you will probably need to scale the brush as well (make it smaller when the scene is magnified so you can draw details). A rough sketch of both follows.
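A rough sketch, assuming a 2D ES 1.1 setup; zoom, centerX/centerY, viewWidth/viewHeight, touch and baseBrushSize are illustrative names for your own state:

/* zoom by shrinking the ortho rect around a centre point */
float halfW = (viewWidth  / zoom) * 0.5f;
float halfH = (viewHeight / zoom) * 0.5f;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(centerX - halfW, centerX + halfW,
         centerY - halfH, centerY + halfH, -1.0f, 1.0f);

/* map a UIKit touch point (view coordinates, y down) into canvas coordinates */
float canvasX = (centerX - halfW) + (touch.x / viewWidth) * (2.0f * halfW);
float canvasY = (centerY - halfH) + (1.0f - touch.y / viewHeight) * (2.0f * halfH);

/* keep the brush the same size on screen: smaller in canvas units when zoomed in */
float brushSize = baseBrushSize / zoom;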
About the canvas: I do suggest you draw to an FBO rather than your main render buffer and present its texture in your main render scene. Note that the FBO will have a texture attached, and that texture must have power-of-two dimensions (create 2048x2048, or 4096x4096 for newer devices), but you will probably use only part of it to keep the same ratio as the screen (glViewport should do the job), so you will have to compute the texture coordinates accordingly. Overall the drawing mechanism doesn't change much.
So to sum this up: imagine you have a canvas (FBO) to which you apply a brush of a certain size and position on touch events, then you use that canvas as a texture and draw it in your main GL view.
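A sketch of creating such a canvas with the OES framebuffer extension in ES 1.1 (error checking omitted; viewFramebuffer stands in for whatever framebuffer your EAGLView created for the screen):

GLuint canvasTex, canvasFBO;

glGenTextures(1, &canvasTex);
glBindTexture(GL_TEXTURE_2D, canvasTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2048, 2048, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);        /* power-of-two canvas */

glGenFramebuffersOES(1, &canvasFBO);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, canvasFBO);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                          GL_TEXTURE_2D, canvasTex, 0);

/* draw brush quads here while canvasFBO is bound ... */

glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
/* ... then draw one quad textured with canvasTex to the screen */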
My platform is iPhone - OpenGL ES 1.1
I'm looking for the tutorial about modifying or drawing to a texture.
For example:
I have a background texture (just a blank blue-white gradient image) and an object texture.
I need to draw the object onto the background many times, so to optimize performance I want to bake it into the background texture.
Does anyone know the fastest way to do this?
Thanks a lot!
Do you want to draw it into the background texture and then keep that, or overlay it, or what? I'm not entirely sure what the question is.
To draw onto the background and then reuse that, you'll want to create another texture, or a pbuffer/fbo, and bind that. Draw a full-screen quad with your background image, then draw additional quads with the overlays as needed. The bound texture should then have the results, composited as necessary, and can be used as a texture or copied into a file. This is typically known as render-to-texture, and is commonly used to combine images or other dynamic image effects.
To optimize the performance here, you'll want to reuse the texture containing the final results. This will reduce the render cost from whatever it may have been (1 background + 4 faces) to a single background draw.
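If you'd rather avoid the FBO extension, one simple ES 1.1 route (a sketch, not the only way) is to draw the composited result once and capture it with glCopyTexSubImage2D; drawBackgroundAndDecals() is a hypothetical stand-in for your normal drawing:

GLuint compositeTex;
glGenTextures(1, &compositeTex);
glBindTexture(GL_TEXTURE_2D, compositeTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* power-of-two target */

drawBackgroundAndDecals();                       /* render the composite once */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 320, 480);
/* from now on, draw a single quad textured with compositeTex */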
Edit: This article seems to have a rather good breakdown of OpenGL ES RTT. Some good information in this one as well, though not ES-specific.
To overlay the decals, you simply need to draw them over the background. This is the same drawing method as in RTT, but without binding a texture as the render target. This will not persist, it exists only in the backbuffer, but will give the same effect.
To optimize this method, you'll want to batch drawing the decals as much as possible. Assuming they all have the same properties and source texture, this is pretty easy. Bind the texture and set properties as needed, fill a chunk of memory with the corners, and draw all the quads in one go. You can also draw them individually, one draw call each, but this is somewhat more expensive.
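A sketch of that batching, assuming every decal shares one texture; buildDecalVertices() and decalTexture are illustrative names:

typedef struct { GLfloat x, y, u, v; } DecalVertex;
enum { MAX_DECALS = 64 };
DecalVertex verts[MAX_DECALS * 6];               /* two triangles per decal */
int vertexCount = buildDecalVertices(verts);     /* fill corners + UVs on the CPU */

glBindTexture(GL_TEXTURE_2D, decalTexture);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(DecalVertex), &verts[0].x);
glTexCoordPointer(2, GL_FLOAT, sizeof(DecalVertex), &verts[0].u);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);      /* one call for all decals */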
I'm writing a game that displays 56 hexagon pieces filling the screen in the shape of a board. I'm currently drawing each piece using a singleton rendering class that when called to draw a piece, creates a path from 6 points based of the coordinate passed in. This path is filled with a solid color and then a 59x59 png with an alpha to white gradient is overlayed over the drawing to give the piece a shiny look. Note I'm currently doing this in Core Graphics.
My first thought is that creating a path every time I draw is costly, and it seems like I could somehow do this once and then reuse it, but I'm not sure of the best approach. When I look at the bottlenecks with Shark, the drawing of the png is the most taxing part of the process. I've tried rendering just the png overlay, and just the path without the overlay, and both give me some frame gains, although removing the png overlay yields the most frames.
My current thought is that at startup I should render 6 paths (1 for each colour of piece I have), overlay them with the png, store an image of each piece, and then just redraw those images each time I need them. Is there an efficient mechanism for storing something you've drawn once and redrawing it? It kinda sounds like I'd run into the whole drawing-pngs-too-often thing again, but maybe there's a less taxing method that does a similar thing...
Any suggestions are much appreciated.
Thanks!
You might try CGLayer or CALayer.
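For the CGLayer route, a sketch of the idea (assuming you stay with Core Graphics): render one piece into a layer once, then stamp the cached layer wherever a piece is needed; drawHexPiece() stands in for your existing path-fill plus shine overlay, ctx for your drawing context:

CGLayerRef pieceLayer = CGLayerCreateWithContext(ctx, CGSizeMake(59, 59), NULL);
drawHexPiece(CGLayerGetContext(pieceLayer));     /* path fill + png overlay, done once */

/* per frame, just blit the cached layer at each piece's position */
CGContextDrawLayerAtPoint(ctx, CGPointMake(pieceX, pieceY), pieceLayer);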
General thoughts:
Game programming on iPhone usually necessitates OpenGL. Core Graphics is a bit easier to work with, but OpenGL is optimized for speed.
Prerender this "shiny look" into the textures as much as is possible (as in: do it in Photoshop before you even insert them into your project). Alpha blending is hell on performance.
Maybe try PVRTC (also this tutorial), as it's a texture format designed by the manufacturer of the iPhone's GPU; a loading sketch follows after these notes. Then again, this could make things worse depending on where your bottleneck is.
If you really need speed you have to go the OpenGL route. Be careful if you want to mix OpenGL and Core Animation, they can conflict.
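If you try PVRTC, uploading is a single call once you have the compressed payload; a sketch, assuming a 4-bpp image whose raw data you've already read into pvrtcData (width and height power-of-two and at least 8, so the size works out to half a byte per pixel):

GLsizei dataSize = width * height / 2;           /* 4 bits per pixel */
glBindTexture(GL_TEXTURE_2D, tex);
glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                       GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                       width, height, 0, dataSize, pvrtcData);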
OpenGL is a pain if you haven't done much with it. It sounds like you could use Core Animation and make each tile a layer. CA doesn't redraw a layer unless you change something, so you should be able to just move the layers around without taking a big hit. Also note that CA keeps layer contents in texture memory, so it should be much faster.
Some others have mentioned that you should use OpenGL. Here's a nice introduction specifically for the iPhone: OpenGL ES from the Ground Up: Table of Contents
You might also want to look at cocos2d. It seems to be significantly faster than using CoreAnimation in my tests, and provides lots of useful stuff for games.
I've got a pretty simple situation that calls for something I don't know how to do without a stencil buffer (which is not supported on the iPhone).
Basically, I've got a 3D model that gets drawn behind an image. I want an outline of that model to be drawn on top of it at all times, so when it's behind the image you can see its outline, and when it's not behind the image you see the model with an outline.
An option to simply get an outline working would be to draw a wireframe of the model with thick lines and a z offset, then draw the regular model on top of it. The problem with this is obviously that I need the outline to be drawn after the model.
This method needs to be fast, as I'm already pushing a lot of polygons around - full-on drawing of the model again in one way or another is not really desired.
Also, is there any way to find out whether my model can be seen at the moment? That is, whether or not the image over top has an opaque section at the position of the model, or if it has a transparent section. If I can figure this out (again, very quickly), then I can just draw a wireframe instead of a textured model, depending on if it's visible.
Any ideas here? Thanks.
Most of the time you can re-create stencil effects using the alpha channel and render-to-texture, if you think about it...
http://research.microsoft.com/en-us/um/people/hoppe/proj/silmap/ is a technical paper on the matter. Hopefully there's an easier way for you to accomplish this ;)
Here is a general option that might produce the effect you want (I have experience with OGL, but not iPhone):
Method 1
Render object to texture as pure white, separate from scene. This will produce a white mask where the object would be rendered.
Either draw this directly to the screen with an alpha fade for a "full object" effect, or, if you're intent on outlines, render THIS texture into another, slightly enlarged texture, then render the original "full object" mask over the enlarged one in pure black. This creates an outline texture that you can render over the top of the scene.
Method 2
Edited out - just read the "no stencil buffer" stipulation.
Does that help?
I'm developing a 2D game for the iPhone using OpenGL ES and I'd like to use a 320x480 bitmapped image as a persistent background.
My first thought was to create a 320x480 quad and then map a texture onto it that represents the background. So... I created a 512x512 texture with a 320x480 image on it. Then I mapped that to the 320x480 quad.
I draw this background every frame and then draw animated sprites on top of it. This works fine except that the drawing of all of these objects (background + sprites) is too slow.
I did some testing and discovered that my slowdown is in the pixel pipeline. Not surprisingly, the large background image is the main culprit. To prove this, I removed the background draw and everything else rendered very fast.
I am looking for advice on how to keep my background and also improve performance.
Here's some more info:
1) I am currently testing on the Simulator (still waiting on Apple for the license)
2) The background is a PVR texture squeezed down to 128k
3) I had hoped that there might be a way to cache this background into a color buffer but haven't had any luck with that. That may be due to my inexperience with OpenGL ES, or it just might be a stupid idea that won't work :)
4) I realize that the entire background does not always have to refresh, just the parts that have been drawn over by the moving sprites. I started to look into techniques for refreshing (as necessary) parts of the background, either as separate textures or with a scissor box, however this seems less than elegant.
Any tips/advice would be greatly appreciated...
Thank you.
Do not do performance testing on the simulator. Ever!
The differences to the real hardware are huge. In both directions.
If you draw the background every frame:
Do not clear the framebuffer. The background will overdraw the whole thing anyway.
Do you really need a background texture?
What about using a color gradient via vertex colors (see the sketch after these points)?
Try using the 2bit mode for the texture.
Turn off all render steps that you do not need for the background.
E.g.: Lighting, Blending, Depth-Test, ...
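A sketch of the vertex-colour gradient idea: one untextured full-screen strip with different colours at the bottom and top vertices, no texture fetch at all (coordinates assume a glOrthof(0, 320, 0, 480, -1, 1) projection; the colours are placeholders):

static const GLfloat verts[]  = { 0,0,  320,0,  0,480,  320,480 };
static const GLubyte colors[] = {
     40,  80, 200, 255,    40,  80, 200, 255,   /* bottom: blue */
    230, 240, 255, 255,   230, 240, 255, 255,   /* top: near-white */
};
glDisable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);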
If you could post some of your drawing code it would be a lot easier to help you.
If you're making a 2D game, is there any reason you aren't using an existing library? Specifically, the cocos2d for iPhone may be worth your time. I can't answer your question about how to fix the issue doing it all yourself, but I can say that I've done exactly what you're talking about (having one full screen background with sprites on top) with cocos2d and it works great. (Assuming 60 fps is fast enough for you.) You may have your reasons for doing it yourself, but if you can, I would highly suggest at least doing a quick prototype with cocos2d and seeing if that doesn't help you along. (Details and source for the iPhone version are here: http://code.google.com/p/cocos2d-iphone/)
Thanks to everyone who provided info on this. All of the advice helped out in one way or another.
However, I wanted to make it clear that the main issue here turned out to be the behavior of the simulator itself (as implied by Andreas in his response). Once I was able to get the application onto the device, it performed much, much better. I mention this because, prior to developing my game, I had seen a lot of posts indicating that the device was much slower than the simulator. This might be true in some instances (e.g. general application logic), but in my experience animation (particularly 3D transformations) is much faster on the device.
I don't have much experience with OpenGL ES, but this problem occurs generally.
Your idea about the 'color buffer' is good intuition: essentially you want to store your background in its own buffer and load it directly onto your rendering buffer before drawing the foreground.
In OpenGL this is fairly straightforward with Frame Buffer Objects (FBOs). On the iPhone they are available in OpenGL ES 1.1 through the GL_OES_framebuffer_object extension (Apple's GLPaint sample uses them), so that should give you somewhere to start looking.
You may want to try using VBOs (Vertex Buffer Objects) and see if that speeds things up. Tutorial is here
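A minimal VBO sketch for ES 1.1 (quadVerts is an illustrative name for your background quad's vertex data): upload once, then draw each frame without resubmitting the data:

/* once, at load time */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVerts), quadVerts, GL_STATIC_DRAW);

/* per frame */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, (const GLvoid *)0);   /* offset 0 into the VBO */
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);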
In addition, I just saw that since OpenGL ES 1.1 there is a function called glDrawTex (Draw Texture) that is designed for
fast rendering of background paintings, bitmapped font glyphs, and 2D framing elements in games
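A sketch of how the draw-texture extension is typically used (GL_OES_draw_texture; backgroundTex is a placeholder): the crop rectangle picks the source region in texels, then the call blits it to screen coordinates with no vertex setup at all:

GLint crop[4] = { 0, 0, 320, 480 };              /* x, y, w, h inside the texture */
glBindTexture(GL_TEXTURE_2D, backgroundTex);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop);
glDrawTexfOES(0.0f, 0.0f, 0.0f, 320.0f, 480.0f); /* x, y, z, width, height on screen */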
You could use frame buffer objects similar to the GLPaint example from Apple.
Use a texture atlas to minimize the number of draw calls you make. You can use glTexCoordPointer to set texture coordinates that map each image to its correct position. Remember to set your vertex buffer too. Ideally one draw call will render your entire 2D scene.
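For example, picking one sprite out of a hypothetical 512x512 atlas is just a matter of dividing the sprite's pixel rectangle by the atlas size (the 32x32 sprite at (64, 128) here is purely illustrative); append more quads' positions and UVs to the same arrays to keep everything in one draw call:

static const GLfloat spriteUVs[] = {
    64.0f/512.0f, 128.0f/512.0f,    96.0f/512.0f, 128.0f/512.0f,
    64.0f/512.0f, 160.0f/512.0f,    96.0f/512.0f, 160.0f/512.0f,
};
glTexCoordPointer(2, GL_FLOAT, 0, spriteUVs);    /* positions set via glVertexPointer as usual */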
Avoid enabling/disabling states where possible.