I'm really curious how the following is done (source: kortham.net).
They seem to achieve real-time soft-ish shadows on the iPhone, which does not have a stencil buffer available. It seems to run pretty fluidly here: http://www.youtube.com/watch?v=u5OM6tPoxLU
Does anyone have an idea?
The stencil buffer allows hardware acceleration of shadow rendering, but it isn't strictly needed for displaying shadow volumes. With a low count of bodies and light sources, software can emulate the behavior of the stencil buffer (but that will be very slow compared to a hardware-accelerated implementation).
Also, there are other ways to display shadows. The most frequently used is shadow mapping (a more in-depth approach can be found on GameDev.net), which doesn't require a stencil buffer. It is used for PS2 games, as well as Wii games, because that hardware also doesn't have a stencil buffer.
And finally, under the circumstances of this particular game, the shadow algorithm can also be implemented as a simple ray-tracing system, because there is no need for floor detection and the shadows are basically calculated from simple 2D shapes (circles and squares). That might be the best approach for this particular case.
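To make that last idea concrete, here is a minimal sketch in C (the names are illustrative, not taken from the game): a ground point is in shadow if the ray from the light to that point passes through a circular occluder. Softening could then be approximated by testing several slightly jittered light positions and averaging the results.

```c
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y; } Vec2;

/* Returns true if the segment from the light to the shaded point
   intersects a circular occluder (standard ray/circle test). */
bool inShadowOfCircle(Vec2 light, Vec2 point, Vec2 centre, float radius)
{
    Vec2 d = { point.x - light.x, point.y - light.y };   /* light -> point  */
    Vec2 f = { light.x - centre.x, light.y - centre.y }; /* centre -> light */

    float a = d.x * d.x + d.y * d.y;
    float b = 2.0f * (f.x * d.x + f.y * d.y);
    float c = f.x * f.x + f.y * f.y - radius * radius;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f)
        return false;                      /* the ray misses the circle */

    disc = sqrtf(disc);
    float t1 = (-b - disc) / (2.0f * a);
    float t2 = (-b + disc) / (2.0f * a);
    /* Only intersections strictly between the light (t = 0) and the
       shaded point (t = 1) actually block the light. */
    return (t1 > 0.0f && t1 < 1.0f) || (t2 > 0.0f && t2 < 1.0f);
}
```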
Most likely a "Shadow Mapping" variant. http://en.wikipedia.org/wiki/Shadow_mapping
I have a simple scene of the interior of a house (minus the roof). It does not in any way need to look realistic, just to be geometrically correct, so the walls, furnishings and fittings are simply constructed from primitive objects - cubes, cylinders, etc.
The layout is fine, the problem is the lighting - very black shadows. The scene has the standard single directional light source.
What I need to do is provide overall diffuse lighting - equivalent to an overcast day.
I should point out that I am pretty much a novice on all this - lighting, shaders etc, though I have been reading a lot.
From what I read it appears that this is controlled by shaders, shaders being attached to materials, materials being applied to the objects. However, it doesn't seem to make much sense to me. Surely, a shader, if part of the object by virtue of being attached to the material, can only deal with how light might be reflected off the surface - but the light has to get there first.
Therefore, there must be a way of providing an overall diffuse light in the first place?
Or have I got this completely wrong? How does one get rid of the blackness on the non-illuminated side of an object? So far the only way I have found is to make the surface emit light, i.e. glow a bit, which surely can't be right.
Your general understanding of how this all works is correct. One way to look at it: an object requests rendering, looks up its material, and the material binds a shader to a set of parameters. The shader then gets executed once per light in the scene that affects the object (this is simplifying things, but we'll get to that in a bit). This is why lights are expensive (in forward rendering, that is): until optimizations start to kick in, it means rendering the scene n times.
So yes, you could just add a constant factor in the shader to achieve the effect of 'ambient' or 'diffuse' lighting. But that shader, in order to support all the features like reflectivity etc., would have to be crazy complicated.
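To illustrate the "constant factor" idea, here is a rough sketch in plain C (not Unity shader code; all names are illustrative) of the per-light math a forward renderer ends up doing for each shaded point: accumulate one Lambert term per light, then add a constant ambient term so the unlit side never goes fully black.

```c
typedef struct { float r, g, b; } Color;
typedef struct { float x, y, z; } Vec3;

/* Clamped dot product: a surface facing away from a light receives nothing. */
static float saturateDot(Vec3 n, Vec3 l)
{
    float d = n.x * l.x + n.y * l.y + n.z * l.z;
    return d > 0.0f ? d : 0.0f;
}

Color shade(Vec3 normal, Color albedo, Color ambient,
            const Vec3 *lightDirs, const Color *lightColors, int lightCount)
{
    /* The constant ambient term: lifts the shadowed side out of pure black. */
    Color result = { ambient.r * albedo.r, ambient.g * albedo.g, ambient.b * albedo.b };

    /* One diffuse (Lambert) contribution per light - hence "n lights, n passes". */
    for (int i = 0; i < lightCount; ++i) {
        float nDotL = saturateDot(normal, lightDirs[i]);
        result.r += albedo.r * lightColors[i].r * nDotL;
        result.g += albedo.g * lightColors[i].g * nDotL;
        result.b += albedo.b * lightColors[i].b * nDotL;
    }
    return result;
}
```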
Fortunately, with Unity we also get a middle layer called the Standard Shader, which does pretty much all of the math underneath and frees you from having to write shader code.
For a gentle, diffused look, you definitely want to look at the baked indirect illumination features of Unity; maybe even light everything with area lights only.
It's probably also a good idea to look into light probe groups. They work with spherical harmonics, encoding only the low-frequency components of the lighting data, effectively capturing only slowly changing factors like the general direction of the light.
Finally, look into reflection probes (and skyboxes while you're at it); there are a few good free HDR probes available that will emit light into your scene (when baking lightmaps and light probes), enabling surprising realism compared to the default Unity skybox.
If you don't want a harsh directional light, just disable it (although it's often useful to know what your strongest light source in the scene is - even if it's a skybox with some clouds, I would probably keep a scene light just so you notice faster if anything goes wrong).
I have been building a game for VR using Unity3D. It has only low-poly models and the file size is less than 40 MB, yet the game still lags when played on mobile. Please suggest how to improve the performance.
Thank you in advance.
In order to improve performance in VR on mobile you have to optimize everything as best you can. You should keep some of these variables in mind:
Graphics Side
- Number of polygons in the scene
- How many light sources you have
Programming Side
- How much work your code is doing, and whether it is doing it efficiently
The programming part can include problems within the physics system, as well as logic problems that decrease overall performance because of extra computation.
My advice is to learn about the Profiler that Unity offers; with it you can observe how much work your code is doing and where exactly your bottleneck is. This video can also be useful.
Of course, a solution could be to implement your code following design standards, like design patterns and a sound software architecture (depending on the size of the project).
I hope it can be useful for you!
What I found from developing and launching a VR game covers some of the issues below.
The number of polygons is usually the first thing to check, even though your models are low poly. For example, I looked at Synty models in the Unity Asset Store and some of them were over 1k polygons for a bag and 7k for a character model. This seriously reduces the number of objects you can show if you want to target a maximum of 50,000 polygons per eye.
With some models, you can use Blender and the Decimate tool to reduce the polygon count pretty easily. From there I would use LODs to reduce their count further based on distance.
Use occlusion culling (pro version only)
Set your camera's far draw distance to maybe 100 instead of the default
Use mobile shaders and be careful using some of the standard shaders, as they are expensive. Transparent shaders also become expensive because they cause overdraw.
Batch your textures and mark objects as static where possible
Don't use dynamic shadows on your lights; instead, bake your lighting
Try to avoid using physics, as it becomes expensive; instead, raycast to trigger events or shoot weapons.
Run profiler often and check for any bottlenecks (pro version only)
Reduce the number of particle effects and their particle counts
Character bones can also cause issues, so remove as many as possible
There is also your code to look at as mentioned by Manujamming
Set the quality setting to Low in the inspector to gain the best performance.
Could you provide a screenshot of your game scene?
I hope this makes sense.
Best of luck!
I'm completely new to OpenGL, so sorry if this is a silly question. Also, I have no idea whether it makes a difference, but just in case: I'm using OpenGL ES 1.1.
Currently I'm drawing sprites in order of texture, as I've read it's better for performance (makes sense). But now I'm wondering whether that was the right approach because I need certain sprites to be in front of others regardless of texture.
As far as I'm aware, my options for z-ordering would be either to enable the depth buffer and use that, or to switch the drawing order so the sprites are drawn in the order of a z value.
I've read that the depth buffer can be a performance hit, but so would changing the order. Which should I do?
The short answer is, sort the sprites.
It sounds like you're creating something that's really 2d based, and while a z-buffer can be a very useful tool, it can be an impressive performance hit if the hardware doesn't support it, and if you're not actually using 3d objects that may be intersecting one another, it doesn't make a lot of sense to me.
In addition, if you have any sprites that are partially transparent, i.e. have pixels with an alpha value that isn't 0 or 255 (or 0.0 or 1.0 if using floating point) then you have to sort anyway.
As a side note, I believe that the performance loss when changing "sprites" only occurs when switching out surfaces (textures), and only rarely. One way to help mitigate this problem is to put as many different sprites into one image as you can, on a grid, and use little pieces of your surfaces as sprites.
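A minimal sketch of the sorting approach (plain C; the struct and names are illustrative, not from any particular engine): sort back-to-front by z first, and by texture within the same z, so correct ordering is preserved while texture switches are still minimised.

```c
#include <stdlib.h>

typedef struct {
    float z;               /* larger z = further away, drawn first */
    unsigned int texture;  /* GL texture name the sprite uses      */
    /* position, size, UVs, ... */
} Sprite;

static int compareSprites(const void *a, const void *b)
{
    const Sprite *sa = (const Sprite *)a;
    const Sprite *sb = (const Sprite *)b;
    if (sa->z != sb->z)
        return (sa->z > sb->z) ? -1 : 1;              /* back to front    */
    if (sa->texture != sb->texture)
        return (sa->texture < sb->texture) ? -1 : 1;  /* group by texture */
    return 0;
}

/* Call once per frame before issuing draw calls. */
void sortSprites(Sprite *sprites, size_t count)
{
    qsort(sprites, count, sizeof(Sprite), compareSprites);
}
```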
I need to create a virtual tour tool for iOS. It's an archaeological application: the user could open it when he's inside a historic building or when he's visiting an archaeological dig. No need for a Doom-like first-person point of view: just a skybox. The application will have a list of points of interest (POIs). Every POI will have its own skybox.
I thought that I could use OpenGL ES to create a sort of textured skybox that could be driven/rotated by touches. The textures are high-resolution PNG photos.
It's a funded project and I have 4 months.
Where do I have to go to learn how to develop it? Do I have to purchase a book? Which one?
I have just moderate Objective-C and Cocoa Touch skills, since I've built just one application for the iPad. I have zero knowledge of OpenGL ES.
Since I know OpenGL ES quite well, I had a go at a demo project, doing much of what you describe. The specific intention was to do everything in the simplest way available under OpenGL ES as long as the performance was good enough.
Starting from the OpenGL template that Apple supply, I have written one new class with a heavily commented implementation file 122 lines long that loads PNG images as textures. I've modified the sample view controller to draw a skybox as required and to respond to touches with a version of the normal iPhone inertial scrolling, which has meant writing less than 200 lines of (also commented) code.
To achieve this I needed to know:
the CoreGraphics means for getting pixel data from a PNG
how to set up the PROJECTION stack to get a perspective projection with the correct aspect ratio
how to manipulate the MODELVIEW stack to ensure two-axis rotation (first-person shooter or Google Street View style) of the scene according to member variables, and to ensure that the cube geometry I defined doesn't visibly intersect the near clip plane (a minimal sketch of this matrix setup follows the list)
how to specify vertex locations and texture coordinates to OpenGL
how to specify the triangles OpenGL should construct between vertices
how to set the OpenGL texture parameters appropriately to supply only one level of detail for the texture
how to track a touch to manipulate the member variables dictating rotation, including a tiny bit of mechanics to give an inertial rotation
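For reference, here is a minimal sketch of the GL ES 1.1 fixed-pipeline matrix setup mentioned above (viewWidth, viewHeight, pitchDegrees and yawDegrees are illustrative names for the view size and the touch-driven member variables; the actual project may do it differently):

```c
#include <OpenGLES/ES1/gl.h>

void setUpMatrices(GLfloat viewWidth, GLfloat viewHeight,
                   GLfloat pitchDegrees, GLfloat yawDegrees)
{
    /* Perspective projection with the view's aspect ratio. With a near plane
       of 0.1 and a unit cube around the origin, the geometry never clips. */
    GLfloat aspect = viewWidth / viewHeight;
    GLfloat zNear = 0.1f, zFar = 10.0f;
    GLfloat top = zNear * 0.577f;     /* tan(30 degrees): ~60 degree vertical FOV */

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-top * aspect, top * aspect, -top, top, zNear, zFar);

    /* Two-axis rotation driven by member variables updated from touches. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(pitchDegrees, 1.0f, 0.0f, 0.0f);
    glRotatef(yawDegrees,   0.0f, 1.0f, 0.0f);
}
```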
Of course, the normal view controller lifecycle instructions are obeyed. Textures are loaded on viewDidLoad and released on viewDidUnload, for example, to ensure that this view controller plays nicely with potential memory warnings.
The main observations are that, beyond knowing the Objective-C signalling mechanisms, most of this is C stuff. You're primarily using C arrays and references to make C function calls, both for OpenGL and CoreGraphics. So a prerequisite for coding this yourself is being happy in C, not just Objective-C.
The CoreGraphics stuff is a bit tedious but it's all just reading the docs to figure out how each type of thing relates to the next — none of it is really confusing. Just get into your head that you need a data provider for the PNG data, you can create an image from that data provider and then create a bitmap context with memory that you've allocated yourself, draw the image into the context and then release everything except the memory you allocated yourself to be left with the result. That result can be directly uploaded to OpenGL. It's relatively short boilerplate stuff, but OpenGL has no concept of PNGs and CoreGraphics has no convenient methods of pushing things into OpenGL.
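A condensed sketch of that CoreGraphics boilerplate (my own summary, not the project's exact code): get a data provider for the PNG, draw the resulting image into a bitmap context backed by memory you allocated yourself, and keep that buffer as the RGBA data you hand to OpenGL. Note that CoreGraphics puts the origin at the top-left, so the texture may come out vertically flipped; adjust the texture coordinates or the context's transform if that matters.

```c
#include <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>

/* Decode a PNG file into a tightly packed RGBA8 buffer; the caller frees it. */
unsigned char *loadPNG(const char *path, size_t *outWidth, size_t *outHeight)
{
    CGDataProviderRef provider = CGDataProviderCreateWithFilename(path);
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, false,
                                                        kCGRenderingIntentDefault);
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    unsigned char *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8,
                                                 width * 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);

    /* Draw the image into our own memory, then release everything but that memory. */
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(image);
    CGDataProviderRelease(provider);

    *outWidth = width;
    *outHeight = height;
    return pixels;
}
```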
I've assumed that textures are a suitable size on disk. For practical purposes, that means assuming they're a power-of-two in size along each edge. Mine are 512x512.
The OpenGL texture management stuff is easy enough; it's just reading the manual to learn about texture names, name allocation, texture parameters and uploading image data. More routine stuff that is more about knowing the right functions than managing an intuitive leap.
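The corresponding upload is roughly this (a sketch reusing the hypothetical loadPNG helper above; "face_positive_z.png" is just a placeholder filename, and the single level of detail simply means no mipmaps):

```c
#include <OpenGLES/ES1/gl.h>
#include <stdlib.h>

GLuint createSkyboxFaceTexture(const char *path)
{
    size_t width, height;
    unsigned char *pixels = loadPNG(path, &width, &height);

    GLuint textureName;
    glGenTextures(1, &textureName);
    glBindTexture(GL_TEXTURE_2D, textureName);

    /* One level of detail only: linear filtering, no mipmaps, clamp at the
       edges so seams between skybox faces don't show. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    free(pixels);   /* GL now has its own copy */
    return textureName;
}
```

Example usage: `GLuint face = createSkyboxFaceTexture("face_positive_z.png");` once per cube face in viewDidLoad, with a matching glDeleteTextures in viewDidUnload.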
For supplying the geometry to OpenGL I've just written out the arrays in full. I guess you need a bit of a spatial mind to do it, but sketching out a 3d cube on paper and numbering the corners would be a big help. There are three relevant arrays:
the vertex positions
the texture coordinates that go with each vertex location
a list of indices referring to vertex positions that defines the geometry
In my code I've used 24 vertices, treating each face of the cube as a logically discrete thing (so, six faces, each with four vertices). I've defined the geometry using triangles only, for simplicity. Supplying this stuff to OpenGL is actually quite annoying when you're starting; making an error generally means your program crashes deep inside the OpenGL driver without giving you a hint as to what you did wrong. It's probably best to build up a bit at a time.
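To give a feel for what those arrays look like, here is one face of the cube (the +Z face) written out and drawn; the other five faces follow the same pattern with their own four vertices and their own texture. This is a sketch, not the project's exact data.

```c
#include <OpenGLES/ES1/gl.h>

/* Four of the 24 vertices: the +Z face of a unit cube centred on the origin. */
static const GLfloat faceVertices[] = {
    -1.0f, -1.0f,  1.0f,
     1.0f, -1.0f,  1.0f,
     1.0f,  1.0f,  1.0f,
    -1.0f,  1.0f,  1.0f,
};

/* One texture coordinate per vertex, mapping the whole image onto the face. */
static const GLfloat faceTexCoords[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 0.0f,
};

/* The index list defines the geometry: two triangles built from the vertices. */
static const GLubyte faceIndices[] = { 0, 1, 2,   0, 2, 3 };

void drawFace(GLuint faceTexture)
{
    glEnable(GL_TEXTURE_2D);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, faceVertices);
    glTexCoordPointer(2, GL_FLOAT, 0, faceTexCoords);
    glBindTexture(GL_TEXTURE_2D, faceTexture);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, faceIndices);
}
```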
In terms of a UIView capable of hosting OpenGL content, I've more or less used the vanilla stuff Apple directly supply in the OpenGL template. The one change I made was explicitly to disable any attempted use of OpenGL ES 2.x. 1.x is more than sufficient for this task, so we gain simplicity firstly by not providing two alternative rendering paths, and secondly because the ES 2.x path would be a lot more complicated. ES 2.x is the fully programmable pipeline with pixel and vertex shaders, but in ES 2.x the fixed pipeline is completely removed. So if you want one then you have to supply your own substitutes for the normal matrix stacks, you have to write vertex and fragment shaders to do 'a triangle with a texture', etc.
The touch tracking isn't particularly complicated, more or less just requiring me to understand how the view frustum works and how touches are delivered in Cocoa Touch. Once you've done everything else, this bit should be quite easy.
Notably, the maths I had to implement was extremely simple. Just the touch tracking, really. Assuming you wanted a Google Maps-type view meant that I could rely entirely on OpenGL's built-in ability to rotate things, for example. At no point do I explicitly handle a matrix.
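The inertial part of the touch handling boils down to very little; something along these lines (my own simplification, with illustrative names), called once per frame:

```c
/* yawDegrees/pitchDegrees feed the glRotatef calls; the velocities are set
   from the last touch delta when the finger lifts. */
void updateInertia(float *yawDegrees, float *pitchDegrees,
                   float *yawVelocity, float *pitchVelocity, float dt)
{
    *yawDegrees   += *yawVelocity   * dt;
    *pitchDegrees += *pitchVelocity * dt;

    /* Clamp pitch so the user can't roll the view over the poles. */
    if (*pitchDegrees >  90.0f) { *pitchDegrees =  90.0f; *pitchVelocity = 0.0f; }
    if (*pitchDegrees < -90.0f) { *pitchDegrees = -90.0f; *pitchVelocity = 0.0f; }

    /* Exponential damping gives the familiar "coast to a stop" feel. */
    const float damping = 0.95f;
    *yawVelocity   *= damping;
    *pitchVelocity *= damping;
}
```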
So, how long it would take you to write depends on your own confidence with C and with CoreGraphics, and how happy you are sometimes coding in the dark. Because I know what I'm doing, the whole thing took two or three hours.
I'll try to find somewhere to upload the project so that you can have a look at it. I think it'd be helpful to leaf through it and see how alien it looks. That'll probably give you a good idea about whether you could implement something that meets all of your needs within the time frame of your project.
I've left the view controller as having exactly one view, which is the OpenGL view. However, the normal iPhone compositing rules apply and in your project you can easily put normal controls on top. You can grab my little implementation at mediafire. StackOverflow post length limits prevent me from putting big snippets of code here, but please feel free to ask if you have any specific questions.
It's going to be pretty tough if you're learning OpenGL ES from scratch. I'd use a graphics engine to do most of the heavy lifting. I'm currently playing with Ogre3D, and from what I've seen so far I can recommend it: http://www.ogre3d.org/. It has skyboxes (and much more) out of the box, and it should be pretty straightforward to do.
I think you can do this, here are some links to help get you started:
http://sidvind.com/wiki/Skybox_tutorial
Common problems:
(I would post direct links but Stack Overflow won't let me)
Look at Stack Overflow questions no. 2859722 and 2297564.
Some programs and tips to help make the textures:
Spacescape
There are some great OpenGL tutorials here:
nehe.gamedev.net
They are not iPhone-specific, but they explain OpenGL pretty well. I think some folks have ported these to the phone as well; I just can't find them now.
I am trying to Google for what I've mentioned in the title, but somehow I couldn't find it. This should not be that hard, should it?
What I am looking for is a way to gain access to an OpenGL ES texture on iPhone, and a way to get/set pixel with it. What are the OpenGL ES functions I am looking for?
Before OpenGL ES is able to see your texture, you should have loaded it in memory already, generated a texture name (glGenTextures), and bound it (glBindTexture). Your texture data is just a big array in memory.
Therefore, should you wish to change a single texel, you can manipulate it in memory and then upload it again. This approach is usually used for procedural texture generation. There are many resources available on the net about it, for instance: http://www.blumtnwerx.com/blog/2009/06/opengl-es-texture-mapping-for-iphone-oolong-powervr/
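A minimal sketch of that (assuming an RGBA texture whose pixel data you still hold in a CPU-side array; pixels, textureName, x, y and textureWidth are illustrative names): change the texel in your copy, then push just that texel to the already-created texture with glTexSubImage2D rather than re-uploading the whole image.

```c
#include <OpenGLES/ES1/gl.h>

void setTexel(GLuint textureName, GLubyte *pixels, GLint textureWidth,
              GLint x, GLint y, GLubyte r, GLubyte g, GLubyte b, GLubyte a)
{
    GLubyte texel[4] = { r, g, b, a };

    /* Keep the CPU-side copy in sync so later full uploads stay correct. */
    pixels[(y * textureWidth + x) * 4 + 0] = texel[0];
    pixels[(y * textureWidth + x) * 4 + 1] = texel[1];
    pixels[(y * textureWidth + x) * 4 + 2] = texel[2];
    pixels[(y * textureWidth + x) * 4 + 3] = texel[3];

    glBindTexture(GL_TEXTURE_2D, textureName);
    glTexSubImage2D(GL_TEXTURE_2D, 0,      /* mipmap level 0          */
                    x, y, 1, 1,            /* a 1x1 region at (x, y)  */
                    GL_RGBA, GL_UNSIGNED_BYTE, texel);
}
```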
While glReadPixels is available, there are very few situations where you'd need to use it in interactive applications (screen capture comes to mind). It absolutely destroys performance. And it still won't give you back the original textures, but will instead return a block of the framebuffer.
I have no idea what kind of effect you are looking for. However, if you are targeting a device that supports pixel shaders, perhaps a custom pixel shader can do what you want.
Of course, I am working under the assumption you didn't mean pixel as in screen coordinates.
I don't know about setting an individual pixel, but glReadPixels can read a block of pixels from the frame buffer (http://www.khronos.org/opengles/sdk/docs/man/glReadPixels.xml). Your problem googling may be because texture pixels are often shortened to 'texels'.
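For completeness, a minimal glReadPixels sketch (x, y, w, h describe whichever block of the framebuffer you want; note that this reads rendered framebuffer pixels, not the texture's own texels):

```c
#include <OpenGLES/ES1/gl.h>
#include <stdlib.h>

/* Reads a w x h RGBA block starting at (x, y); the caller frees the result. */
GLubyte *readBlock(GLint x, GLint y, GLsizei w, GLsizei h)
{
    GLubyte *block = malloc((size_t)w * (size_t)h * 4);  /* 4 bytes per pixel */
    glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, block);
    return block;
}
```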