We're in the process of creating an iPhone game using cocos2d. We're trying to layer several sprites on top of each other and have them cast shadows.
Right now the shadows are rendered as sprites, which works fine for the most part, but we only want each shadow to hit the closest layer behind it.
I've made an image that hopefully explains what we're trying to accomplish:
And here's what we have at the moment:
Basically we want the sprite to only render the part of the shadow that is at the same depth as the z-buffer.
We've played around with glDepthFunc and GL_DEPTH_TEST but nothing seems to work.
Here's how we're rendering the shadow sprite (subclassed CCSprite):
- (void)draw {
    glDisable( GL_BLEND );
    glEnable( GL_DEPTH_TEST );
    glDepthFunc( GL_LESS );
    glDepthMask( GL_FALSE );   // shadows read the depth buffer but never write it
    [super draw];
    glDepthMask( GL_TRUE );
    glDisable( GL_DEPTH_TEST );
    glEnable( GL_BLEND );
}
The GL_BLEND calls are only there so we can see the sprite at all times.
All sprites that aren't shadows are drawn with glDepthMask( GL_TRUE ), and we're clearing the depth buffer on each frame.
Any help would be much appreciated!
glDepthFunc(GL_LESS) is actually the default value; it means "draw the pixel only if the thing currently in the depth buffer is further away". If you wanted exactly equal you'd use glDepthFunc(GL_EQUAL), but in practice you'll get all sorts of rounding oddities if you do that.
Assuming you're able to use depth values for this purpose, if you have ten objects then I'd suggest you:
1. Set glClearDepth to 0 before you glClear; this'll fill the depth buffer with the nearest storable value, so that with normal depth buffering nothing else would be drawn.
2. Disable the depth test and draw the shadows as they're supposed to fall on the back plane; at this point your depth buffer will still be full of the nearest possible value.
3. Enable the depth test but set glDepthFunc to GL_ALWAYS. Then draw all your solid rectangles in back-to-front order with their depth values set appropriately.
4. Set glDepthFunc to GL_LESS and draw the shadows that are meant to fall on other sprites, each positioned further back than the sprite it's associated with but in front of the sprite behind.
By the time you get to step 4, you'll have correct depth information everywhere a sprite was drawn and you'll have the closest possible value set wherever the background plane was. So normal depth testing will work on the intermediate shadows — they'll draw on top of anything drawn in step 3 but not on top of anything drawn in step 2.
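Here's a minimal sketch of those four steps in ES 1.1-style C; drawBackPlaneShadows, drawSpritesBackToFront and drawIntermediateShadows are hypothetical placeholders for your own drawing code:

void renderFrame( void ) {
    // 1. Fill the depth buffer with the nearest storable value.
    glClearDepthf( 0.0f );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

    // 2. No depth test: shadows that fall on the back plane.
    glDisable( GL_DEPTH_TEST );
    drawBackPlaneShadows();

    // 3. Test on but always passing, so the sprites replace the
    //    "nearest" values with their real depths.
    glEnable( GL_DEPTH_TEST );
    glDepthFunc( GL_ALWAYS );
    glDepthMask( GL_TRUE );
    drawSpritesBackToFront();

    // 4. Normal testing: each intermediate shadow lands only where a
    //    sprite further back has written a greater (farther) depth.
    glDepthFunc( GL_LESS );
    glDepthMask( GL_FALSE );
    drawIntermediateShadows();
}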
You're sort of using the depth buffer as a surrogate stencil buffer, since the older iPhones don't support stenciling.
If you can't afford to use the depth buffer for this task, then all I can think of is projecting the shadows as textures in the second texture unit, using the first unit for a mask texture (or skipping the mask if you're actually drawing rectangles, though I guess you're probably not), and doing one rendering pass per sprite per shadow that falls upon it. Is that a passable solution?
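For what it's worth, here's a rough ES 1.1 sketch of the multitexture setup for one such pass; spriteTex, shadowTex and shadowTexCoords are hypothetical names for your own data:

// Unit 0: the sprite (or mask) texture, as normal.
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, spriteTex );

// Unit 1: the shadow texture, modulating the result of unit 0.
glActiveTexture( GL_TEXTURE1 );
glEnable( GL_TEXTURE_2D );
glBindTexture( GL_TEXTURE_2D, shadowTex );
glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );

// A second set of texture coordinates positions the shadow on the sprite.
glClientActiveTexture( GL_TEXTURE1 );
glEnableClientState( GL_TEXTURE_COORD_ARRAY );
glTexCoordPointer( 2, GL_FLOAT, 0, shadowTexCoords );

// ...then draw the sprite's quad once per shadow that falls on it.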
Related
I have a brown sprite, which contains a hole with triangular shape.
I've added a trail renderer (and set its order in layer to appear behind the sprite), so the user can paint the sprite's hole without painting the sprite itself.
My question is: how can I detect when the hole is completely painted?
I thought about using a shader to check if there are any black pixels left on the screen, but I don't know if that's possible, because the shader won't know what percentage of the image has been painted.
One way would be to take a screenshot with the ScreenCapture.CaptureScreenshotAsTexture method and then loop through the array of pixel colors from Texture2D.GetPixels32. You could then check whether the array still contains 'black' pixels.
I would do it in a coroutine for better performance, as doing it every frame may slow down your application. Also, here's what the Unity docs say is important about CaptureScreenshotAsTexture:
To get a reliable output from this method you must make sure it is called once the frame rendering has ended, and not during the rendering process. A simple way of ensuring this is to call it from a coroutine that yields on WaitForEndOfFrame. If you call this method during the rendering process you will get unpredictable and undefined results.
I have a projector component, and I need to find the angle that the projected texture falls at, so I can exclude projecting onto vertical faces.
My projector is under the mouse pointer and works OK when it is over a horizontal face:
I would like the projector to switch off on vertical faces to avoid this bad effect:
If possible, I would like to do it in the shader code, to avoid the vertical projected image even when the cursor is on the corner of a horizontal face and part of the projection "goes out" onto a vertical face.
I found this solution in C#:
if (Physics.Raycast(MouseRay, out hitInfo)) {
    if (hitInfo.normal.y > 0) {
        // draw
    } else {
        // don't draw
    }
}
But it only works on curved surfaces and not, for example, on the faces of cubes.
How can I do this properly?
Normally you would use an image on a quad with TGA transparency, which rotates itself to the face that the middle of the object is aligned to, using a ray to find the vertex and taking its absolute normal.
Other ways of doing it would be quite tricky, perhaps using decals. If you did it using a shader, it would take a long time; it's a case of problem solving not being ordered by importance for fast development. Technically you can project a volumetric texture onto whatever object you are using; that way you can add your barred circle, projected from a point in space towards the object, as a mathematical formula. It takes a while to do. Check out volumetric textures; I have written some, and in your case the texture needs the mouse position sent to it, plus the maths to add the transparent zone and the red zone. It takes all day.
It's fine to have a flat circle that flips around when you move the pointer onto a different face; it will just look like a physical card, and it's much easier to code: 10 minutes instead of many hours.
I'm rendering particles in a 2D game. Each particle is a quad (2 triangles). How can I make the drawing as fast as possible? All the particles have the same texture; I'm only changing their positions.
Right now I'm using a call to glVertexPointer and glDrawArrays for each particle, so I'm sending 4 vertices to the GPU each time.
Is there any other approach that could be faster?
I'm using OpenGL ES 1.1 (iPhone)
Thanks!
Every draw call you make (glDrawArrays) is expensive. Doing this once per particle is DEFINITELY way too often. All your particles can be drawn with a single draw call; just set up a big array of all the triangle verts and another big array with the texture coords, and call glVertexPointer/glDrawArrays once-- that's the power of glVertexPointer: arbitrary geometry of the same type in one call. :)
For what you're doing, you should also look into point sprites (GL_POINTS), which also function as tiny textured quads. They're 2D only, so you can't map your texture into the Z axis, but if your particles are just 2D quads of the same texture over and over, point sprites will likely do exactly what you want.
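If point sprites fit your case, the setup is small. A sketch, assuming particlePositions holds one x,y pair per particle:

// All particles drawn as point sprites in a single call (ES 1.1).
glEnable( GL_POINT_SPRITE_OES );
glTexEnvf( GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE );
glPointSize( particleSize );                         // same size for every particle
glEnableClientState( GL_VERTEX_ARRAY );
glVertexPointer( 2, GL_FLOAT, 0, particlePositions ); // one x,y per particle
glDrawArrays( GL_POINTS, 0, particleCount );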
There's a way to do that all in one draw routine. I THINK it's by adding an extra vertex after each quad, which is the same as the previous vertex, but I could be wrong.
EDIT: After looking into it a bit, it looks like you need two in between: essentially one after, and one before. It does add up to quite a few extra vertices, but I know from experience that it makes a HUGE positive difference on the iPhone to do it all in one draw operation (we were drawing text from a texture, so essentially the same thing).
EDIT2: Also note, I'm referring to using GL_TRIANGLE_STRIP; if you were using GL_TRIANGLES instead, you wouldn't need the degenerate vertices... except then you'd be adding roughly the same number anyway, since each quad repeats 2 vertices for its second triangle.
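To make the GL_TRIANGLES variant concrete, here's a minimal sketch that batches every particle into a single glDrawArrays call; drawParticles and its parameters are hypothetical:

#include <string.h>   // memcpy

#define MAX_PARTICLES 256

static GLfloat verts[ MAX_PARTICLES * 12 ];       // 6 vertices * x,y per quad
static GLfloat texCoords[ MAX_PARTICLES * 12 ];   // 6 vertices * s,t per quad

void drawParticles( const GLfloat *px, const GLfloat *py,
                    GLfloat halfSize, int count ) {
    static const GLfloat st[ 12 ] = {
        0, 0,  1, 0,  0, 1,    // triangle 1
        1, 0,  1, 1,  0, 1,    // triangle 2
    };
    for ( int i = 0; i < count; i++ ) {
        GLfloat x = px[ i ], y = py[ i ];
        GLfloat quad[ 12 ] = {
            x - halfSize, y - halfSize,
            x + halfSize, y - halfSize,
            x - halfSize, y + halfSize,
            x + halfSize, y - halfSize,
            x + halfSize, y + halfSize,
            x - halfSize, y + halfSize,
        };
        memcpy( &verts[ i * 12 ], quad, sizeof( quad ) );
        memcpy( &texCoords[ i * 12 ], st, sizeof( st ) );
    }
    glEnableClientState( GL_VERTEX_ARRAY );
    glEnableClientState( GL_TEXTURE_COORD_ARRAY );
    glVertexPointer( 2, GL_FLOAT, 0, verts );
    glTexCoordPointer( 2, GL_FLOAT, 0, texCoords );
    glDrawArrays( GL_TRIANGLES, 0, count * 6 );   // one call for everything
}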
Does anyone know why this may happen:
I draw to a 2D screen using glDrawArrays with GL_POINTS (interleaved array). If I flip the buffers (presentRenderbuffer) after every call to glDrawArrays -- so after each "tile" is drawn -- everything works fine.
This is, of course, inefficient, so if I move the presentRenderbuffer outside of the draw loop, I get errors. Basically parts of the screen just don't draw, and it's always in the same place (the middle of the screen, horizontally).
I'm using retained backing (as I update only tiles that changed), so I need to rely on the framebuffer staying the same between draws so I can draw over it.
Any ideas why presentRenderbuffer after each tile works fine, while one final presentRenderbuffer after all of the draws doesn't?
EDIT: Also, adding glFlush() in the tile draw loop, and moving presentRenderBuffer outside the loop produces the correct image as well.
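For reference, a sketch of the glFlush variant from the edit; drawTile and tileCount are hypothetical stand-ins for the actual tile-drawing code:

for ( int i = 0; i < tileCount; i++ ) {
    drawTile( i );   // glDrawArrays( GL_POINTS, ... ) for one tile
    glFlush();       // force this tile's commands to complete
}
// ...then call presentRenderbuffer: once, after the loop.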
I'm working on a little game in OpenGL ES.
In the background, there is a world/map. The map is just a large texture.
Zoom/pinch/pan is used to move around. And I'm using glOrthof (left, right, bottom, top, zNear, zFar) to implement the zoom/pinch.
When I zoom in, the sprites on top of the map are also zoomed in. But I would like some sprites to stay at a fixed size.
I could probably calculate a scale factor from the parameters to glOrthof, but there must be a more natural and straightforward way than scaling the sprites down whenever I zoom in.
If I add some text or some GUI elements on top of the map, they should definitely have a fixed size.
Is there a solution to do this, or do I have to leave fixed values in glOrthof and implement zoom/pinch in another way?
EDIT: To be more clear: I want sprites that move with the map as it zooms and pans, but stay at the same size on screen.
I have some elements that are like the pins on the iPhone's map application. When you zoom, the pins stay the same size, but move around on the screen to stay on the same spot on the map. That is mainly what I want a solution for.
Solutions for this already came below, thanks!
First call glOrthof with the settings you have, then draw the things that scale. Then make another call to glOrthof with different settings (after glLoadIdentity probably), and then draw the things that should not be scaled.
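A minimal sketch of that two-pass approach; camLeft/camRight/camBottom/camTop, screenWidth/screenHeight and the two draw helpers are hypothetical names for your own values and code:

void renderFrame( void ) {
    // Pass 1: the zoomable world. The camLeft..camTop volume shrinks as you zoom in.
    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    glOrthof( camLeft, camRight, camBottom, camTop, -1.0f, 1.0f );
    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();
    drawMapAndScalingSprites();

    // Pass 2: fixed-size overlay, one unit per screen pixel.
    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    glOrthof( 0.0f, screenWidth, 0.0f, screenHeight, -1.0f, 1.0f );
    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();
    // Pins go at their map position converted to screen pixels.
    drawPinsTextAndGui();
}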
You can use something like this to draw fixed-size elements at a given 3D position while keeping the current projection settings:
// Go to the correct screen position for the 3D point.
double v[3] = { x, y, z };
glRasterPos3dv( v );
// Offset the raster position by the sprite's reference point, in pixels.
glBitmap( 0, 0, 0, 0, -center_pix_x, -center_pix_y, NULL );

// And draw the pixels (UNPACK, not PACK: glDrawPixels reads client memory).
glPixelStorei( GL_UNPACK_LSB_FIRST, GL_TRUE );
glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );
glDrawPixels( img_width, img_height, GL_RGBA, GL_UNSIGNED_BYTE, img_data_ptr );
center_pix_x and center_pix_y are the coordinates of the reference point in the sprite that will match the 3D point. (Note that glRasterPos, glBitmap and glDrawPixels are desktop OpenGL calls; they aren't available in OpenGL ES.)
Found one solution in this thread:
Drawing "point-like" shapes in OpenGL, indifferent to zoom
Point sprites... Apple's GLPaint example also uses this.
Quite simple to use. Uses the current texture.
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(40.0f);                    // size in pixels, independent of zoom
glEnableClientState(GL_VERTEX_ARRAY); // needed for glVertexPointer
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, 4);         // one point per sprite
These will move when the map moves, but do not change size.
Edit: A small tip: remember that the point coordinate is the middle of the texture, not a corner. I struggled a bit with my sprites apparently "moving", because I used only the 35x35 upper-left pixels of a 64x64 texture. Move your graphics to the middle of the texture and you'll be fine.