OpenGL ES 2D - z-ordering, depth buffer vs drawing in order - iPhone

I'm completely new to OpenGL, so sorry if this is a silly question. I have no idea whether it makes a difference, but just in case: I'm using OpenGL ES 1.1.
Currently I'm drawing sprites in order of texture, as I've read it's better for performance (makes sense). But now I'm wondering whether that was the right approach because I need certain sprites to be in front of others regardless of texture.
As far as I'm aware, my options for z-ordering would be either to enable the depth buffer and use that, or to switch the drawing order so the sprites are drawn in the order of a z value.
I've read that the depth buffer can be a performance hit, but so would changing the order. Which should I do?

The short answer is, sort the sprites.
It sounds like you're creating something that's really 2D-based. While a z-buffer can be a very useful tool, it can be an impressive performance hit if the hardware doesn't support it, and if you're not actually using 3D objects that may intersect one another, it doesn't make a lot of sense to me.
In addition, if you have any sprites that are partially transparent, i.e. have pixels with an alpha value that isn't 0 or 255 (or 0.0 or 1.0 if using floating point), then you have to sort anyway, because blended pixels must be drawn back to front regardless of the depth buffer.
As a side note, I believe that the performance lost when changing sprites only occurs when switching out surfaces (binding a different texture), and only rarely. One way to mitigate this is to pack as many different sprites as you can into one image, laid out on a grid (a texture atlas), and use small pieces of that one texture as your sprites.
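For what it's worth, the sort itself is only a few lines. A rough sketch in C (the Sprite struct and its fields are made up for illustration; the idea is back-to-front, then by texture within equal depth so consecutive sprites can share a texture bind):
    #include <stdlib.h>

    typedef struct {
        float z;          /* larger z = further from the camera (assumption) */
        unsigned texture; /* GL texture name this sprite uses */
        /* ... vertex data, etc. */
    } Sprite;

    /* Sort back-to-front; within the same depth, group by texture so
       consecutive sprites can share a glBindTexture call. */
    static int compareSprites(const void *a, const void *b)
    {
        const Sprite *sa = a, *sb = b;
        if (sa->z > sb->z) return -1;   /* further away drawn first */
        if (sa->z < sb->z) return  1;
        if (sa->texture < sb->texture) return -1;
        if (sa->texture > sb->texture) return  1;
        return 0;
    }

    void sortSprites(Sprite *sprites, size_t count)
    {
        qsort(sprites, count, sizeof(Sprite), compareSprites);
    }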

Related

Unity - how to provide diffuse lighting

I have a simple scene of the interior of a house (minus the roof). It does not in any way need to look realistic, just to be geometrically correct, so the walls, furnishings and fittings are simply constructed from primitive objects - cubes, cylinders, etc.
The layout is fine, the problem is the lighting - very black shadows. The scene has the standard single directional light source.
What I need to do is provide overall diffuse lighting - equivalent to an overcast day.
I should point out that I am pretty much a novice on all this - lighting, shaders etc, though I have been reading a lot.
From what I read it appears that this is controlled by shaders, shaders being attached to materials, materials being applied to the objects. However, it doesn't seem to make much sense to me. Surely, a shader, if part of the object by virtue of being attached to the material, can only deal with how light might be reflected off the surface - but the light has to get there first.
Therefore, there must be a way of providing an overall diffuse light in the first place?
Or have I got this completely wrong? How does one get rid of the blackness on the non-illuminated side of an object? So far the only way I have found is to make the surface emit light, i.e. glow a bit, which surely can't be right.
Your general understanding of how this all works is correct. One way to look at it: an object requests rendering, looks up its material, and the material binds a shader to a set of parameters. The shader then gets executed once per light in the scene that affects it (this is simplifying things, but we'll get to that in a bit). This is why lights are expensive (in forward rendering, that is): until optimizations start to kick in, it means rendering the scene n times.
So yes, you could just add a constant factor in the shader to achieve the effect of 'ambient' or 'diffuse' lighting. But that shader, in order to support all the features like reflectivity etc., would have to be crazy complicated.
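Just to make the 'constant factor' idea concrete, here is the underlying math written out in plain C rather than as actual Unity shader code (a sketch only; all names are illustrative and the Standard Shader does far more than this):
    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static float vdot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  vscale(Vec3 v, float s) { Vec3 r = { v.x*s, v.y*s, v.z*s }; return r; }
    static Vec3  vadd(Vec3 a, Vec3 b) { Vec3 r = { a.x+b.x, a.y+b.y, a.z+b.z }; return r; }

    /* Simple Lambert shading with a constant ambient term.
       n and l must be normalized; l points from the surface towards the light. */
    Vec3 shade(Vec3 albedo, Vec3 n, Vec3 l, Vec3 lightColor, Vec3 ambient)
    {
        float ndotl = vdot(n, l);
        if (ndotl < 0.0f) ndotl = 0.0f;          /* the back side gets no direct light... */
        Vec3 direct = vscale(lightColor, ndotl);
        Vec3 total  = vadd(direct, ambient);     /* ...but the ambient term keeps it from going black */
        Vec3 out = { albedo.x * total.x, albedo.y * total.y, albedo.z * total.z };
        return out;
    }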
Fortunately, with Unity we also get a middle layer called the Standard Shader, which does pretty much all of the math underneath and frees you from having to write shader code yourself.
For a gentle, diffused look, you definitely want to look at Unity's baked indirect illumination (global illumination) features; maybe even light everything with area lights only.
It's probably also a good idea to look into light probe groups. They work with spherical harmonics, encoding only the low-frequency components of the lighting data, effectively using only slowly-changing factors like the general direction of the light.
Finally, look into reflection probes (and skyboxes while you're at it); there are a few good free HDR probes available that will emit light into your scene (when baking lightmaps and light probes), enabling surprising realism compared to the default Unity skybox.
If you don't want harsh directional light, just disable it (although it's often useful to know what the strongest light source in your scene is; even if it's a skybox with some clouds, I would probably keep a scene light just so you notice faster when anything goes wrong).

Unity: Help needed with rendering voxels and performance

So here is the deal: I have an enemy that is rigged and consists of quite a few voxels.
When I run the game I get very bad performance overall while it's rendering the model, because of the number of objects it needs to render. How can I improve that?
Here are the things I have thought about:
Only rendering the faces we can actually see,
Or maybe using some sort of GPU instancing?
Anyway, I would like to know how I could resolve such a problem. Any help is much appreciated!
Without knowing how you've actually implemented your voxels in game (i.e. what components each voxel consists of and how you're currently managing them) it's hard to offer much specific advice.
In general though there are a few things to consider:
Faces that aren't oriented towards the camera won't be rendered by default. What you might want to investigate is Occlusion Culling, but since your character is not static within the scene you might not get much performance improvement from that. The code to constantly check whether objects are occluded might end up costing more CPU than not implementing it.
Rendering might not be your actual bottleneck for performance. If each of your voxels is doing some work every update (such as collision detection), then that's going to hit your performance much harder than rendering the objects, particularly for cubic voxels. You'll need to use the Profiler to figure that out.
You need to consider whether you actually REALLY need all those voxels to be in the scene all the time or whether you can replace them with a single GameObject until you need to interact with them, then bring them into the scene as needed. For example, you might replace the upper arm with a mesh that mirrors the shape of the total voxels for that body part. Then when that part is shot, for example, you can detect the point of collision and instantiate the necessary voxels around that point to react as desired, then rebuild the arm mesh to reflect the changed shape.
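On the "only render the faces we can actually see" idea from the question: if you do merge the voxels into one mesh, you only need to emit the faces whose neighbouring cell is empty. A language-agnostic sketch (written in C for brevity rather than C#; the grid layout and the solidAt helper are assumptions):
    #include <stdbool.h>
    #include <stddef.h>

    #define NX 16
    #define NY 16
    #define NZ 16

    /* 1 = solid voxel, 0 = empty (assumed layout: x + NX*(y + NY*z)) */
    static unsigned char grid[NX * NY * NZ];

    static bool solidAt(int x, int y, int z)
    {
        if (x < 0 || y < 0 || z < 0 || x >= NX || y >= NY || z >= NZ) return false;
        return grid[x + NX * (y + NY * z)] != 0;
    }

    /* Count (or emit) only the faces that border empty space; interior
       faces between two solid voxels never need to be in the mesh. */
    size_t countVisibleFaces(void)
    {
        static const int dir[6][3] = {
            { 1, 0, 0 }, { -1, 0, 0 },
            { 0, 1, 0 }, { 0, -1, 0 },
            { 0, 0, 1 }, { 0, 0, -1 },
        };
        size_t faces = 0;
        for (int z = 0; z < NZ; z++)
            for (int y = 0; y < NY; y++)
                for (int x = 0; x < NX; x++) {
                    if (!solidAt(x, y, z)) continue;
                    for (int d = 0; d < 6; d++)
                        if (!solidAt(x + dir[d][0], y + dir[d][1], z + dir[d][2]))
                            faces++;   /* this face borders empty space: emit it */
                }
        return faces;
    }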
It might also be worth looking into Unity's Data-oriented Technology Stack (DOTS) features, although that could be overkill for this situation.

Skybox OpenGL ES iPhone and iPad

I need to create a virtual tour tool for iOS. It's an archaeological application: the user could open it when they're inside a historic building or visiting an archaeological dig. No need for a Doom-like first-person point of view: just a skybox. The application will have a list of points of interest (POIs), and every POI will have its own skybox.
I thought that I could use OpenGL ES to create a sort of textured skybox that could be driven/rotated by touches. The textures are high-resolution PNG photos.
It's a funded project and I have 4 months.
Where do I have to go to learn how to develop it? Do I have to purchase a book? Which one?
I have just moderate Objective-C and Cocoa Touch skills, since I've built just one application for the iPad. I have zero knowledge of OpenGL ES.
Since I know OpenGL ES quite well, I had a go at a demo project, doing much of what you describe. The specific intention was to do everything in the simplest way available under OpenGL ES as long as the performance was good enough.
Starting from the OpenGL template that Apple supply, I have written one new class with a heavily commented implementation file 122 lines long that loads PNG images as textures. I've modified the sample view controller to draw a skybox as required and to respond to touches with a version of the normal iPhone inertial scrolling, which has meant writing less than 200 lines of (also commented) code.
To achieve this I needed to know:
the CoreGraphics means for getting pixel data from a PNG
how to set up the PROJECTION stack to get a perspective projection with the correct aspect ratio
how to manipulate the MODELVIEW stack to ensure two-axis rotation (first-person shooter or Google Street View style) of the scene according to member variables and to ensure that the cube geometry I defined doesn't visibly intersect the near clip plane (both matrix stacks are sketched just after this list)
how to specify vertex locations and texture coordinates to OpenGL
how to specify the triangles OpenGL should construct between vertices
how to set the OpenGL texture parameters accordingly to supply only one level of detail for the texture
how to track a touch to manipulate the member variables dictating rotation, including a tiny bit of mechanics to give an inertial rotation
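In ES 1.1 fixed-function terms, the matrix setup looks roughly like this (a sketch, not my actual code; the field of view, clip planes and the pitch/yaw variables are illustrative):
    #include <OpenGLES/ES1/gl.h>
    #include <math.h>

    /* Perspective projection: ES 1.1 has no gluPerspective, so derive the
       frustum from a vertical field of view and the view's aspect ratio. */
    void setUpProjection(float fovyDegrees, float aspect, float zNear, float zFar)
    {
        float top = zNear * tanf(fovyDegrees * (float)M_PI / 360.0f);
        float right = top * aspect;

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustumf(-right, right, -top, top, zNear, zFar);
    }

    /* Two-axis "look around" rotation for the skybox: pitch about X,
       then yaw about Y, with the camera fixed at the cube's centre. */
    void setUpModelView(float pitchDegrees, float yawDegrees)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(pitchDegrees, 1.0f, 0.0f, 0.0f);
        glRotatef(yawDegrees,   0.0f, 1.0f, 0.0f);
    }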
Of course, the normal view controller lifecycle instructions are obeyed. Textures are loaded on viewDidLoad and released on viewDidUnload, for example, to ensure that this view controller plays nicely with potential memory warnings.
The main observations are that, beyond knowing the Objective-C messaging mechanisms, most of this is C stuff. You're primarily using C arrays and references to make C function calls, both for OpenGL and CoreGraphics. So a prerequisite for coding this yourself is being happy in C, not just Objective-C.
The CoreGraphics stuff is a bit tedious but it's all just reading the docs to figure out how each type of thing relates to the next — none of it is really confusing. Just get into your head that you need a data provider for the PNG data, you can create an image from that data provider and then create a bitmap context with memory that you've allocated yourself, draw the image into the context and then release everything except the memory you allocated yourself to be left with the result. That result can be directly uploaded to OpenGL. It's relatively short boilerplate stuff, but OpenGL has no concept of PNGs and CoreGraphics has no convenient methods of pushing things into OpenGL.
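To give you an idea of the shape of that boilerplate, here is a condensed sketch (not my actual class; error handling is minimal and the function name is made up):
    #include <CoreGraphics/CoreGraphics.h>
    #include <OpenGLES/ES1/gl.h>
    #include <stdlib.h>

    /* Load a PNG from disk and upload it as a GL_RGBA texture.
       Returns the OpenGL texture name, or 0 on failure. */
    GLuint textureFromPNG(const char *path)
    {
        CGDataProviderRef provider = CGDataProviderCreateWithFilename(path);
        if (!provider) return 0;
        CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, false,
                                                            kCGRenderingIntentDefault);
        CGDataProviderRelease(provider);
        if (!image) return 0;

        size_t width  = CGImageGetWidth(image);
        size_t height = CGImageGetHeight(image);

        /* Draw the image into a bitmap context backed by memory we own,
           so we end up with raw RGBA bytes that can be handed to OpenGL. */
        void *pixels = calloc(width * height, 4);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8,
                                                     width * 4, colorSpace,
                                                     kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
        CGContextRelease(context);
        CGImageRelease(image);

        GLuint name = 0;
        glGenTextures(1, &name);
        glBindTexture(GL_TEXTURE_2D, name);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* one level of detail only */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        free(pixels);
        return name;
    }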
I've assumed that textures are a suitable size on disk. For practical purposes, that means assuming they're a power-of-two in size along each edge. Mine are 512x512.
The OpenGL texture management stuff is easy enough; it's just reading the manual to learn about texture names, name allocation, texture parameters and uploading image data. More routine stuff that is more about knowing the right functions than making an intuitive leap.
For supplying the geometry to OpenGL I've just written out the arrays in full. I guess you need a bit of a spatial mind to do it, but sketching out a 3d cube on paper and numbering the corners would be a big help. There are three relevant arrays:
the vertex positions
the texture coordinates that go with each vertex location
a list of indices referring to vertex positions that defines the geometry
In my code I've used 24 vertices, treating each face of the cube as a logically discrete thing (so, six faces, each with four vertices). I've defined the geometry using triangles only, for simplicity. Supplying this stuff to OpenGL is actually quite annoying when you're starting; making an error generally means your program crashes deep inside the OpenGL driver without giving you a hint as to what you did wrong. It's probably best to build up a bit at a time.
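For a single face of the cube, that works out to something like this with ES 1.1 client-side arrays (a sketch only; the full skybox has six such faces, 24 vertices and 36 indices):
    #include <OpenGLES/ES1/gl.h>

    /* One face of a 2x2x2 cube centred on the origin: four corners... */
    static const GLfloat faceVertices[] = {
        -1.0f, -1.0f, -1.0f,
         1.0f, -1.0f, -1.0f,
         1.0f,  1.0f, -1.0f,
        -1.0f,  1.0f, -1.0f,
    };

    /* ...the texture coordinate that goes with each corner... */
    static const GLfloat faceTexCoords[] = {
        0.0f, 1.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 0.0f,
    };

    /* ...and two triangles built from indices into the arrays above. */
    static const GLushort faceIndices[] = { 0, 1, 2, 0, 2, 3 };

    void drawFace(void)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

        glVertexPointer(3, GL_FLOAT, 0, faceVertices);
        glTexCoordPointer(2, GL_FLOAT, 0, faceTexCoords);
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, faceIndices);
    }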
In terms of a UIView capable of hosting OpenGL content, I've more or less used the vanilla stuff Apple directly supply in the OpenGL template. The one change I made was explicitly to disable any attempted use of OpenGL ES 2.x. 1.x is more than sufficient for this task, so we gain simplicity firstly by not providing two alternative rendering paths and secondly because the ES 2.x path would be a lot more complicated. ES 2.x is the fully programmable pipeline with pixel and vertex shaders, but in ES land the fixed pipeline is completely removed. So if you want one then you have to supply your own substitutes for the normal matrix stacks, you have to write vertex and fragment shaders to do 'a triangle with a texture', etc.
The touch tracking isn't particularly complicated, more or less just requiring me to understand how the view frustum works and how touches are delivered in Cocoa Touch. Once you've done everything else, this bit should be quite easy.
Notably, the maths I had to implement was extremely simple. Just the touch tracking, really. Assuming you wanted a Google Maps-type view meant that I could rely entirely on OpenGL's built-in ability to rotate things, for example. At no point do I explicitly handle a matrix.
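The inertial part really is just a couple of member variables decaying each frame; something along these lines (a sketch with made-up constants, not my exact code):
    /* Called once per frame. While a finger is down, the yaw/pitch velocity is
       set directly from the touch delta; after it lifts, the velocity decays. */
    typedef struct {
        float yaw, pitch;                 /* current rotation, degrees */
        float yawVelocity, pitchVelocity; /* degrees per frame */
        int   touching;
    } RotationState;

    void updateRotation(RotationState *s)
    {
        s->yaw   += s->yawVelocity;
        s->pitch += s->pitchVelocity;

        /* Clamp pitch so the user can't flip the view upside down. */
        if (s->pitch >  90.0f) s->pitch =  90.0f;
        if (s->pitch < -90.0f) s->pitch = -90.0f;

        if (!s->touching) {
            /* Friction: an exponential decay gives the familiar inertial feel. */
            s->yawVelocity   *= 0.95f;
            s->pitchVelocity *= 0.95f;
        }
    }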
So, how long it would take you to write depends on your own confidence with C and with CoreGraphics, and how happy you are sometimes coding in the dark. Because I know what I'm doing, the whole thing took two or three hours.
I'll try to find somewhere to upload the project so that you can have a look at it. I think it'd be helpful to leaf through it and see how alien it looks. That'll probably give you a good idea about whether you could implement something that meets all of your needs within the time frame of your project.
I've left the view controller as having exactly one view, which is the OpenGL view. However, the normal iPhone compositing rules apply and in your project you can easily put normal controls on top. You can grab my little implementation at mediafire. StackOverflow post length limits prevent me from putting big snippets of code here, but please feel free to ask if you have any specific questions.
It's going to be pretty tough if you're learning OpenGL ES from scratch. I'd use a graphics engine to do most of the heavy lifting. I'm currently playing with Ogre3D, and from what I've seen so far I can recommend it: http://www.ogre3d.org/. It has skyboxes (and much more) out of the box, and it should be pretty straightforward to do.
I think you can do this. Here are some links to help get you started:
http://sidvind.com/wiki/Skybox_tutorial
Common problems:
(I would post direct links but Stack Overflow won't let me)
Look at Stack Overflow questions no. 2859722 and 2297564.
Some programs and tips to help make the textures:
Spacescape
There are some great OpenGL tutorials here:
nehe.gamedev.net
They are not iPhone-specific, but they explain OpenGL pretty well. I think some folks have ported these to the phone as well; I just can't find them now.

Real time soft shadows without stencil buffers

I'm really curious how the following is done
(image omitted; source: kortham.net)
They seem to achieve real-time soft-ish shadows on the iPhone, which does not have a stencil buffer available. It seems to run pretty fluidly here: http://www.youtube.com/watch?v=u5OM6tPoxLU
Does anyone have an idea?
The stencil buffer allows hardware acceleration of shadow rendering, but it isn't strictly needed for displaying shadow volumes. With a low count of bodies and light sources, software may emulate the behaviour of the stencil buffer (but that will be very slow compared to the hardware-accelerated implementation).
Also, there are other ways to display shadows. The most frequently used is shadow mapping (a more in-depth write-up can be found on GameDev.net), which doesn't require a stencil buffer. It is used for PS2 games as well as Wii games, because that hardware also doesn't have a stencil buffer.
And finally, given the circumstances of this particular game, the shadow algorithm could also be implemented as a simple ray-tracing system, because there is no need for floor detection and the shadows are basically calculated from simple 2D shapes (circles and squares). That might be the best approach for this particular case.
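For completeness, the core of that 2D test is tiny: for each shadowed point you check whether the segment from the point to the light is blocked by an occluder. A sketch (one point light, circle occluders only; all names are made up):
    #include <math.h>
    #include <stdbool.h>

    typedef struct { float x, y; } Vec2;
    typedef struct { Vec2 centre; float radius; } Circle;

    /* Returns true if the segment from p to the light is blocked by the circle,
       i.e. the point p is in that circle's shadow. */
    bool segmentHitsCircle(Vec2 p, Vec2 light, Circle c)
    {
        Vec2 d = { light.x - p.x, light.y - p.y };       /* segment direction */
        Vec2 f = { p.x - c.centre.x, p.y - c.centre.y }; /* from circle centre to p */

        float a = d.x * d.x + d.y * d.y;
        float b = 2.0f * (f.x * d.x + f.y * d.y);
        float k = f.x * f.x + f.y * f.y - c.radius * c.radius;

        float discriminant = b * b - 4.0f * a * k;
        if (discriminant < 0.0f) return false;           /* the ray misses the circle */

        discriminant = sqrtf(discriminant);
        float t1 = (-b - discriminant) / (2.0f * a);
        float t2 = (-b + discriminant) / (2.0f * a);
        /* Blocked only if an intersection lies between the point and the light. */
        return (t1 >= 0.0f && t1 <= 1.0f) || (t2 >= 0.0f && t2 <= 1.0f);
    }
Softening the edges is then a matter of testing several sample points spread over an area light and averaging the results.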
Most likely a "Shadow Mapping" variant. http://en.wikipedia.org/wiki/Shadow_mapping

OpenGL ES on iPhone: simple 2D animation (interpolation/tween)

I'm working on an app that basically revolves around 2D shapes (mostly simple polygons) being dynamically drawn and animated.
I'm looking for a way to easily time my animations. It's basically just moving a vertex to a specified point in a specified time, so just interpolating floats, with all the usual easing parameters. I come from a Flash/ActionScript 3 environment, so if you're familiar with that, think Tween Classes.
I probably could easily be doing this with Core Animation (CABasicAnimation etc.), but I will have up to a hundred gradient-filled shapes with varying opacity being animated dynamically, and I need good performance (60fps would be great). So I went for OpenGL ES. Plus I'm all for investing time into learning something that I'll be able to reuse cross-platform.
So I know OpenGL is only for graphics rendering, and I'm not going to find any 2D animation methods built in. And I've heard that using CA together with OpenGL (if feasible) is not a good idea performance-wise.
But before I look deeper into interpolation algorithms to increment my vertex coordinates every frame, I just wanted to make sure I wasn't totally missing out on something much easier!?
Thanks!
I would look into the popular cocos2d library. It looks really nice, supports animation, and uses OpenGL ES behind the scenes.
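If you do end up rolling your own instead, the core of a tween is very small. A minimal sketch in C (fixed duration, smoothstep ease-in-out; all names are illustrative):
    /* Normalised ease-in-out (smoothstep): 0 at t = 0, 1 at t = 1. */
    static float easeInOut(float t)
    {
        return t * t * (3.0f - 2.0f * t);
    }

    /* Interpolate a single float from `from` to `to` over `duration` seconds.
       `elapsed` is the time since the tween started. */
    float tween(float from, float to, float elapsed, float duration)
    {
        float t = elapsed / duration;
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        return from + (to - from) * easeInOut(t);
    }
Run that per animated float each frame and you have the essence of a Tween class; swapping easeInOut for other curves gives you the usual easing family.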