I'm currently experimenting with OpenGL ES 1.1 on the iPhone and trying to get my head around some of the basics. So far I've managed to draw a grid of objects which are lit with one GL_LIGHT. Here is a screenshot of the current output (question to follow)...
So you can see that my test consists of a grid of about 140 cubes, some slightly elevated so I can see how the shaded areas work. Each cube consists of this model (from Blender) and has normals / texture coordinates...
What's puzzling me is why I don't get 'uniform' lighting across the entire surface. Each cube seems to be lit individually, and I can kind of understand why that would be... but is it not possible to have the light transition 'normally', like it would if you arranged this model out of blocks and shone a light across it? I'd expect not to see a dark edge on each individual cube, but rather a smooth transition across the whole area.
(I'm still inwardly chuffed that I managed to get this far!)
Any help or explanations would be awesome.
Thanks,
Simon
The reason why you don't get 'uniform' lighting is, I presume, because you are using per-vertex lighting. That is, the lighting is calculated per vertex and interpolated over each triangle making up the model. Since your cube has a pretty low polygon count, the transition of light across the model won't look smooth.
Using OpenGL ES 1.1, there are two solutions to this: you can use higher-polygon-count models, or implement per-pixel (DOT3) lighting. I've not implemented the latter myself but have come across this problem before (my solution was to switch to OpenGL ES 2.0 and use shaders to perform per-pixel lighting).
Here is a link, which may be of use: What is DOT3 lighting?
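For reference, the fixed-function DOT3 path works by storing per-pixel normals in a normal map and letting the texture combiner compute the dot product with the light direction per fragment. A minimal sketch of the combiner setup, assuming a bound ES 1.1 context and an already-loaded tangent-space normal map (the function name and parameters are illustrative):

```c
#include <OpenGLES/ES1/gl.h>

/* Sketch: enable DOT3 combining on texture unit 0, assuming a
 * tangent-space normal map is already bound as normalMapTexture. */
void setupDot3Lighting(GLuint normalMapTexture, GLfloat lx, GLfloat ly, GLfloat lz)
{
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, normalMapTexture);

    /* Encode the light direction into the primary color, remapped
     * from [-1, 1] to [0, 1] as the DOT3 combiner expects. */
    glColor4f(lx * 0.5f + 0.5f, ly * 0.5f + 0.5f, lz * 0.5f + 0.5f, 1.0f);

    /* Combiner: result = 4 * dot(texture - 0.5, primaryColor - 0.5) */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
}
```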
All the best!
My game uses very low-poly models for most of its geometry, and my current outline shader, which inverts normals and "scales up" the material, doesn't really cut it for this. Admittedly I have very little experience with shader graph but I'm trying my best here. These outlines are part of the Render Objects renderer feature in URP.
Here is an example of the issue in question, as well as the shader graph itself.
The shader you made requires smoothed normals for all vertices. If one of your outlined objects needs a hard edge, you will get the following result, with gaps, because the cube doesn't have smooth normals.
And your models being very low-poly makes it a lot more visible.
I'd suggest just using a Fresnel node and attaching it to a Step node to get harsh outlines, which you can then connect to emission or something similar; it would probably only take 4-5 nodes.
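For what it's worth, the math those two nodes compute is simple. Here is a rough plain-C sketch of what a Fresnel term fed into a Step gives you per pixel (names are illustrative, not Shader Graph API):

```c
#include <math.h>

/* Illustrative sketch of the Fresnel + Step combination suggested above.
 * normal and viewDir are assumed to be unit-length vectors. */
typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Fresnel node: strongest where the surface faces away from the viewer. */
float fresnel(Vec3 normal, Vec3 viewDir, float power)
{
    float facing = dot3(normal, viewDir);
    if (facing < 0.0f) facing = 0.0f;          /* saturate */
    return powf(1.0f - facing, power);
}

/* Step node: a hard cutoff turns the soft rim into a crisp outline mask. */
float outlineMask(Vec3 normal, Vec3 viewDir, float power, float threshold)
{
    return fresnel(normal, viewDir, power) >= threshold ? 1.0f : 0.0f;
}
```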
This question relates to using shaders (probably in the Unity3D milieu, but Metal or OpenGL is fine), to achieve rounded edges on a mesh-minimal cube.
I wish to use only minimalist 12-triangle mesh cubes, and then, via the shader, achieve slightly bevelled edges (and corners) on each block.
In fact, can this be done with a shader?
I recently finished creating such a shader. The only way it can work is by providing 4 normal vectors instead of one for each vertex (smooth, sharp, and one for each edge of the triangle at the given vertex). You will also need one float3 to detect edges.
To add such data to a mesh, I made a custom mesh editor; it comes with the Playtime Painter asset from the Unity Asset Store. I will post the shader with the next update, and will also post it to a public GitHub repository.
You can see some dark lines; that's because the shader starts to interpolate towards a normal vector that faces away from the light source, but since there are no additional triangles, the result is visible on a triangle that faces the camera.
Update (2/12/2018)
I realised that by clipping pixels that end up with a normal facing away from the camera, it is possible to smooth the outline shape. It hasn't been tested for all possible scenarios, but it works great for simple shapes:
As requested, I've added a comparison cube:
Currently, Playtime Painter has a simplified version of that shader, which interpolates between 2 normal vectors and gives ok results on some edges.
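Roughly, that two-normal interpolation amounts to blending a smoothed normal and the flat face normal by an edge weight before lighting. A plain-C sketch of the math only (not the actual Playtime Painter shader), under the assumption that the edge mask picks the smoothed normal near edges:

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Blend between the sharp face normal and the smoothed vertex normal by an
 * edge weight in [0, 1], then renormalize. edge = 1 near a bevelled edge
 * (use the smoothed normal there to fake roundness), 0 in the middle of a
 * face (keep the flat face normal). A sketch of the idea only. */
Vec3 blendNormal(Vec3 sharp, Vec3 smooth, float edge)
{
    Vec3 n = {
        sharp.x + (smooth.x - sharp.x) * edge,
        sharp.y + (smooth.y - sharp.y) * edge,
        sharp.z + (smooth.z - sharp.z) * edge,
    };
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```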
Wrote an article.
In general, relief mapping is able to modify the object silhouette, as in this picture. You'd need to prepare a heightmap that lowers towards the borders, and that's it. However, I think that using such a shader might be overkill for such a simple effect, so maybe it's better to just build it into your geometry.
We are trying to achieve the following in an iPhone game:
Using 2D PNG files, set up a scene that seems 3D. As the user moves the device, the individual PNG files would warp/distort accordingly to give the effect of depth.
Example of a scene: an empty room, 5 walls, and a chair in the middle = 6 PNG files layered.
We have successfully accomplished this using native functions like skew and scale. By applying transformations to the various walls and the chair as the device is tilted or moved, the walls skew/scale/translate. However, the problem is that since we are using 6 separate PNG files, the edges don't meet as we move the device. We need a new solution using a real engine.
Question:
We are thinking that, instead of applying skew/scale transformations, if we were given the freedom to move the vertices of the rectangular images, we could precisely distort the images and keep all the edges 100% aligned.
What is the best framework to do this in the LEAST amount of time? Are we going about this the correct way?
You should be able to achieve this effect (at least in regards to the perspective being applied to the walls) using Core Animation layers and appropriate 3-D transforms.
A good example of constructing a scene like this can be found in the example John Blackburn provides here. He shows how to set up layers to represent the walls in a maze by applying the appropriate rotation and translation to them, then gives the scene perspective by using the trick of altering the m34 component of the CATransform3D for the scene.
I'm not sure how well your flat chair would look using something like this, but certainly you can get your walls to have a nice perspective to them. Using layers and Core Animation would let you pull off what you want using far less code than implementing this using OpenGL ES.
Altering the camera angle is as simple as rotating the scene in response to shifts in the orientation of the device.
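The CATransform3D part of that trick is plain C (QuartzCore declares the struct and its helper functions in C), so a minimal sketch of adding perspective and rotating the scene in response to device tilt might look like this; the function name and the perspective distance are illustrative, and assigning the result to a layer happens in your Objective-C view code:

```c
#include <QuartzCore/CATransform3D.h>

/* Build a transform that adds perspective (the m34 trick) and then rotates
 * the whole scene around the Y axis by tiltRadians, e.g. driven by the
 * device orientation. -1.0 / 500.0 is a typical starting value; tune to taste. */
CATransform3D makeSceneTransform(CGFloat tiltRadians)
{
    CATransform3D t = CATransform3DIdentity;
    t.m34 = -1.0 / 500.0;                        /* perspective */
    t = CATransform3DRotate(t, tiltRadians, 0.0, 1.0, 0.0);
    return t;
}

/* In Objective-C you would then apply it to the container layer:
 *   containerLayer.sublayerTransform = makeSceneTransform(tilt);
 */
```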
If you're going to the effort of warping textures as they would be warped in a 3D scene, then why not let the graphics hardware do the hard work for you by mapping the textures to 3D polygons, then changing your projection or moving polygons around?
I doubt you could do it faster by restricting yourself to 2D transformations; the hardware is geared up to do 3x3 (well, 4x4 homogeneous) matrix multiplication.
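In other words, instead of distorting the PNGs in 2D, you put each one on a quad in 3D and let the perspective projection do the warping. A rough OpenGL ES 1.1 sketch, assuming a bound context and an already-created texture for one wall (sizes and frustum values are illustrative):

```c
#include <OpenGLES/ES1/gl.h>

/* Draw one textured wall quad in 3D; the perspective projection does all
 * the "warping" that was previously faked with skew/scale.
 * Assumes a valid ES 1.1 context and wallTexture already loaded. */
void drawWall(GLuint wallTexture)
{
    /* Perspective projection. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-1.0f, 1.0f, -1.5f, 1.5f, 2.0f, 100.0f);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -10.0f);            /* camera back from the wall */

    static const GLfloat vertices[] = {
        -5.0f, -3.0f, 0.0f,
         5.0f, -3.0f, 0.0f,
        -5.0f,  3.0f, 0.0f,
         5.0f,  3.0f, 0.0f,
    };
    static const GLfloat texCoords[] = {
        0.0f, 1.0f,  1.0f, 1.0f,  0.0f, 0.0f,  1.0f, 0.0f,
    };

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, wallTexture);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```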
I was trying to figure out how I can create simple 3D walls like this in OpenGL. I don't want to create anything fancy, just a basic 3D wall that I can move forwards and backwards along; imagine it as the Wolfenstein 3D game with only the map, no killing, etc.
Is there any framework I can use to do this? I want to do it in OpenGL so that I can create/render this on my iPhone.
Thanks
Pranay
If anybody can point me to some sample source code, it would be helpful.
As a non-OpenGL alternative, you can construct such a maze and move through it using only Core Animation. The textured wall segments would be CALayers containing images that had been transformed in 3-D to face the appropriate directions. The maze could be translated relative to the camera to cause the user to move through the area. The code for this would be significantly simpler than an equivalent OpenGL ES implementation written from scratch.
An example of this is presented by John Blackburn in his article here.
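As a rough illustration of the per-wall setup: the CATransform3D math is plain C, while creating the CALayer and assigning the image is Objective-C (noted in the comment). The function name and sizes are illustrative:

```c
#include <QuartzCore/CATransform3D.h>

/* Place one wall segment of a maze cell: rotate the flat image to face the
 * right direction, then push it out to the cell boundary.
 * Sketch only; wallSize is the side length of a cell in points. */
CATransform3D wallTransform(CGFloat angleRadians, CGFloat wallSize)
{
    CATransform3D t = CATransform3DMakeRotation(angleRadians, 0.0, 1.0, 0.0);
    return CATransform3DTranslate(t, 0.0, 0.0, wallSize / 2.0);
}

/* Objective-C side (sketch):
 *   CALayer *wall = [CALayer layer];
 *   wall.contents = (id)wallImage.CGImage;
 *   wall.transform = wallTransform(M_PI_2, 200.0);
 */
```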
If you want to use OpenGL, then you have to create everything yourself. But there are several nice 3D engines.
Free:
oolongengine
Ogre iPhone
Paid (but very powerful):
Shiva3D
Unity3D
Creating a walk-through of a 3D space from scratch isn't basic stuff. It's actually a lot of math.
You will start with the 3D model of the world, and in order to put yourself in the perspective of the viewer, you have to transform this 3D model with a series of transformations:
The World transformation - Moves the world map
The View transformation - Transforms vertices into camera space
Perspective transformation - Maps 3D space into 2D
Each of those transformations will be defined as a 4x4 matrix. Hope this helps you for a start.
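A minimal sketch of composing those matrices for a model, in plain C with column-major storage like OpenGL (names are illustrative):

```c
#include <math.h>

/* Column-major 4x4 matrix, as OpenGL expects. Sketch only. */
typedef struct { float m[16]; } Mat4;

/* r = a * b */
static Mat4 mat4Multiply(Mat4 a, Mat4 b)
{
    Mat4 r;
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++) {
            float sum = 0.0f;
            for (int k = 0; k < 4; k++)
                sum += a.m[k * 4 + row] * b.m[col * 4 + k];
            r.m[col * 4 + row] = sum;
        }
    return r;
}

/* Perspective transformation: maps the view frustum into clip space. */
static Mat4 mat4Perspective(float fovyRadians, float aspect, float zNear, float zFar)
{
    float f = 1.0f / tanf(fovyRadians / 2.0f);
    Mat4 p = {{ f / aspect, 0, 0, 0,
                0, f, 0, 0,
                0, 0, (zFar + zNear) / (zNear - zFar), -1,
                0, 0, (2.0f * zFar * zNear) / (zNear - zFar), 0 }};
    return p;
}

/* The full pipeline: clip space = projection * view * world. */
Mat4 modelViewProjection(Mat4 world, Mat4 view, Mat4 projection)
{
    return mat4Multiply(projection, mat4Multiply(view, world));
}
```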
How big a difference is there between the description language of Quartz 2D and that of OpenGL ES?
It seems they are similar in descriptive power... except that Quartz is mostly 2D and OpenGL is 3D out of the box (but can be made 2D-focused).
Are the mappings from 2D Quartz to 2D OpenGL ES that different? I'm sure there must be specific features that are handled differently on one versus the other... but enough to rule out a translator?
Does anyone with experience in both OpenGL and Quartz 2D have some insights?
Quartz and OpenGL ES are two completely different animals. While they both have a C-based API that deals with a state machine and that draws into a context, their purposes are dissimilar. In Quartz you specify lines, Bezier and quadratic curves, arcs, or rectangles, as well as fills, gradients, and shadows / glows. In OpenGL ES, you provide vertices, raster textures, and lighting information, from which a scene is generated.
They are both useful in particular cases. You might draw a 2-D static element using Quartz, into a view, layer, or texture, and then place and move that view or layer in 3-D space using Core Animation or do the same for a texture using OpenGL ES.
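Since Core Graphics is itself a C API, the "draw the 2-D element with Quartz" half might look something like this sketch; the function name is illustrative, and you would then hand the resulting CGImage to a CALayer's contents or upload it as a GL texture:

```c
#include <CoreGraphics/CoreGraphics.h>

/* Draw a simple 2-D element with Quartz into an offscreen bitmap and return
 * it as a CGImage. The caller owns the image and might assign it to
 * layer.contents or upload it as a texture. */
CGImageRef createBadgeImage(size_t width, size_t height)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    /* Quartz-style drawing: paths, fills, strokes. */
    CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.9, 1.0);
    CGContextAddEllipseInRect(ctx, CGRectMake(10, 10, width - 20, height - 20));
    CGContextFillPath(ctx);

    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return image;   /* caller releases with CGImageRelease */
}
```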
Rather than try to overlay one API on the other, use whichever is more appropriate for what you are doing, or look to a framework like cocos2d which lets you build and animate 2-D scenes or Core Animation where you can do Quartz drawing into a layer but still use a nicely abstracted API for moving these layers around.