Unity resolution problem, mesh has holes when zooming out - unity3d

I have a server which renders a mesh using OpenGL with a custom framebuffer. In my fragment shader I write gl_PrimitiveID into an output variable. Afterwards I call glReadPixels to read the IDs out. Then I know which triangles were rendered (and are therefore visible), and I send all those triangles to a client which runs in Unity. On this client I add the vertex and index data to a GameObject and it renders without a problem. I get the exact same rendering result in Unity as I got with OpenGL, until I start to zoom out.
Here are pictures of the mesh rendered with Unity:
My first thought was that I had different resolutions, but this is not the case: I have 1920*1080 on both server and client. I use the same view and projection matrices from the client on my server, so this also shouldn't be the problem. What could be the cause of this error?
In case you need to see it, here is some of the code I wrote.
Here is my vertex shader code:
#version 330 core
layout (location = 0) in vec3 position;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * vec4(position, 1.0f);
}
Here is my fragment shader code:
#version 330 core
layout(location = 0) out int primitiveID;
void main(void)
{
primitiveID = gl_PrimitiveID + 1; // +1 because the first primitive is 0
}
and here is my getVisibleTriangles method:
std::set<int> getVisibleTriangles() {
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, SCR_WIDTH, SCR_HEIGHT, GL_RED_INTEGER, GL_INT, &pixels[0]);
    std::set<int> visibleTriangles;
    // pixels is a raw byte buffer, so each pixel's 32-bit ID occupies 4 bytes
    for (int i = 0; i < pixelsLength; i += 4) {
        int id = *(int*) &pixels[i];
        if (id != 0) {
            visibleTriangles.insert(id - 1); // id 0 is NO_PRIMITIVE
        }
    }
    return visibleTriangles;
}

Oh my god, I can't believe I made such a stupid mistake.
After all it WAS a resolution problem.
I didn't call glViewport (only when resizing the window). When creating a window with glfwCreateWindow, the size parameters are only hints, not hard constraints (as stated in the GLFW reference), so they may not be fulfilled exactly. I passed a desired size of 1920*1080 (which is also my resolution), but the drawing area did not actually get that size, because some space is also needed for the menu etc. So the server rendered at a lower resolution (1920*1061, to be exact), which results in missing triangles on the client.
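A minimal sketch of the fix, assuming the default framebuffer is the render target and window is the GLFWwindow created with glfwCreateWindow:

#include <GLFW/glfw3.h>   // include your OpenGL loader (e.g. glad/GLEW) before this header if you use one

// Ask GLFW for the size it actually gave the framebuffer instead of assuming the
// requested 1920x1080, and size the viewport to match (the glReadPixels buffer
// should use the same dimensions).
void updateViewportToFramebuffer(GLFWwindow* window)
{
    int fbWidth = 0, fbHeight = 0;
    glfwGetFramebufferSize(window, &fbWidth, &fbHeight);   // e.g. 1920x1061 here
    glViewport(0, 0, fbWidth, fbHeight);
}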

Before getting into the shader details for this problem, are you sure that the problem doesn't lie with the way the zoom-out functionality has been implemented in Unity? It's just a hunch, since I have seen it in older projects, but if the zoom in/out functionality works by actually moving the camera, then the movement of the clipping planes will create those "holes" when the mesh surfaces go outside their range. The placement of some of those holes in the shared image makes me doubt that this is the case, but you never know.
If this happens to be the way the zoom function works then you can confirm this by looking at the editor mode while zooming out. It will display the position of the clipping planes of the camera in relation to the mesh.

Related

Cross section shader for box bounding using amplify Shader

I am trying to create a shader with Amplify Shader Editor so that a cube can cut a cross section through a plane or any other mesh. I know that I should be using size, rotation and position for that, but I don't know exactly what to do with them. By that I mean I am new to Amplify Shader and to shader programming in general, so please don't provide shader code; I need to keep it customizable for the future, so please help me with Amplify Shader nodes.
Currently I have this effect, but I want to make it box-bounding specific rather than based on plane normals.
I don't want this effect, but the box effect shown below. That one was achieved with the ray-marching concept, but I want to achieve it with Amplify Shader. Kindly guide me through this.
This is what I have done so far with the amplify nodes
Result:
Here is the result of doing the shader using "Amplify Shader":
Solution:
First we'll call the green cube the "intersector" and the red cube the "intersectee".
So, as you've done with the plane, the cutout works because the back face of the intersector is shown when it is inside the intersectee, and the intersectee's front face is shown when it is inside the intersector.
Create a shader (which is used by both cubes) and put it into two separate materials; apply an individual material to each cube. After this we can get into the actual shader node stuff.
First we need to make sure "Cull Mode" is off (Output Node > Cull Mode > Off). This will ensure the back face is actually rendered (this could be optimized by choosing the cull mode depending on where the cube is relative to the intersector).
Next we need to get the surface point in object space:
Most of the variables will be defined in script. The rotation matrix is used to rotate a point; it is inverted because the rotation matrix rotates the cube into world space, so inverting it rotates a world-space point into object space. We also get "_Cubepos", which is the position of the cube to intersect with (e.g. it would be the intersector if the shader is on the intersectee). It is subtracted from the world position because the rotation matrix rotates around the origin; afterwards it is added back so the point ends up in the correct position.
This leads to the next section, where the extents ("_CubeExtent") are added to and subtracted from "_Cubepos" to find the minimum and maximum corners of the cube.
Unfortunately, Amplify Shader has no good way to check whether a vector lies within two vectors, so we have to break it into components. (I encourage you to learn how to write shaders.) Each compare-with-range node returns 1 if the point in object space is within the extents for that axis; if one returns 0, the final multiply node makes sure the overall output is 0.
Finally, we get to the last part of the shader. "_IsIntersector" is set in script to 1 or 0 depending on whether the cube we are referring to is used to intersect or is an intersectee. Depending on the scenario, we set the opacity mask here to 1 or 0.
After this we have to define the script to attach to each object. Add a new script and type the following in:
using UnityEngine;

[ExecuteInEditMode]
public class SetVar : MonoBehaviour
{
    // Transform of the opposite cube
    public Transform intersectingCube;
    // Is this an intersector or an intersectee?
    public bool isIntersector;
    // Material of the object
    public Material mat;

    // Start is called before the first frame update
    void Start()
    {
        // Get the material
        mat = GetComponent<Renderer>().material;
    }

    // OnRenderObject is called after a camera has rendered the scene
    void OnRenderObject()
    {
        // Calculate the rotation matrix
        Matrix4x4 m = Matrix4x4.TRS(-intersectingCube.position, intersectingCube.rotation, Vector3.one);
        // Set shader variables
        mat.SetMatrix("RotationMatrix", m);
        mat.SetVector("_Cubepos", intersectingCube.position);
        mat.SetVector("_CubeExtent", intersectingCube.localScale / 2.0f);
        mat.SetFloat("_IsIntersector", (isIntersector) ? 0 : 1);
    }
}
Then we can set the correct inspector values depending on whether the cube is an intersector or an intersectee. Here is an example for the intersector cube:
Make sure to have IsIntersector ticked or unticked depending on whether the cube is an intersector or not.
Here is a link to the shader: http://paste.amplify.pt/view/raw/4b248bc3. Doing this for an arbitrary mesh is a very complicated operation, too complicated for nodes; learn shader code and use a raycasting algorithm to determine whether the point is inside the cube.
Alternatively, for any convex shape you could calculate each bounding plane and then, using the method you already have, check whether the world-space point is on the correct side of every plane. For a cube there would be 6 planes; however, this is a bit slower than the method above (which is optimized for a cube).

Unity3d - Need to hide a group of objects in the area

I've already tried depth mask shaders and examined some other ideas, but it seems they don't suit me at all.
I'm making an AR game and I have a scene with a house and trees. All these objects are animated and do something like falling from the sky, not all at once but in sequence: for example, the house first, then the trees, then the fence, etc.
(Plz, look at my picture for details) http://f2.s.qip.ru/bVqSAgcy.png
If the user moves the camera too far, he will see all these objects hanging in the air, waiting for their turn to start falling, and that is not good. I want to hide this area from all sides (because in AR the camera can move around freely) and make each part visible only when it starts moving (falling down).
(One more screen) http://f3.s.qip.ru/bVqSAgcz.png
I thought about animation events, but there are too many objects (bricks, for example) and I can't handle all of them manually.
I look forward to your great advice ;)
P.S. Sorry for my bad english.
You can disable the mesh renderers of the objects that are going to fall and re-enable them when they are ready to fall.
See here for more details about the MeshRenderer component.
Deactivate your object. You can use the camera's viewport coordinates to get a y position outside the viewport: they start at the bottom left of the screen (0,0) and go to the top right of the screen (1,1). Convert them to world-space coordinates with Camera.ViewportToWorldPoint:
Vector3 outsideCamera = Camera.main.ViewportToWorldPoint(new Vector3(0.5f, 1.2f, 10.0f));
Now you can use the intended x and z positions of your object. Activate it when you want to drop it.
myObject.transform.position = new Vector3(myObject.transform.position.x, outsideCamera.y, myObject.transform.position.z);
Another thing you could additionally do is scale the object from very small to its intended size while it is falling. This would prevent the object from being visible before falling when the user points the camera upwards.
1. Maybe you can use the camera's far clipping plane property.
Or you can even use two cameras, if you need to display, say, the landscape on one (which will not render the house + trees + ...) with a "big" far clipping plane, and a second one with Depth only clear flags rendering only the items (this one can have a smaller far clipping plane, from what I understand).
2. Another suggestion I'd give you is adding the scale to your animation:
set the scale to 0 at the beginning of the animation
wait until the item needs to fall down
set the scale to 1 (with a transition if needed)
make the item fall down
EDIT: the workaround you found is just fine too! But tracking only the world position should be enough, I think (saving a tiny amount of memory).
Hope this helps,
Finally, here is the solution I chose. I added this script to each object in the composition. It stores the object's position (in my case both world and local) in Start() and checks in Update() whether it has changed. If so, it stops monitoring and enables the MeshRenderer.
using UnityEngine;

[RequireComponent(typeof(MeshRenderer))]
public class RenderScript : MonoBehaviour
{
    private MeshRenderer mr;
    private bool monitoring = true;
    private Vector3 posLocal;
    private Vector3 posWorld;

    // Use this for initialization
    void Start()
    {
        mr = GetComponent<MeshRenderer>();
        mr.enabled = false;
        posLocal = transform.localPosition;
        posWorld = transform.position;
    }

    // Update is called once per frame
    void Update()
    {
        if (monitoring)
        {
            if (transform.localPosition != posLocal || transform.position != posWorld)
            {
                monitoring = false;
                mr.enabled = true;
            }
        }
    }
}
Even my funny cheap Chinese smartphone is alive after this, so I guess it's OK.

Weird Lines 3D Unity

I'm working on a project using Unity 5.4.
In this project, blocks are stacked next to each other.
However, some annoying weird lines appear, and on Android these lines occur more often than on PC.
For illustration purposes I added an image and video.
Please zoom in on the picture to clearly see the lines I'm speaking of.
Could anyone please provide a solution to get rid of this nuisance?
Thanks in advance.
Block alignment code snippet:
for (int x = 0; x < xSize; x++)
    for (int z = 0; z < zSize; z++)
    {
        Vector3 pos = new Vector3(x, -layerDepth, z);
        InstantiateBlock(pos);
    }
Video link: https://youtu.be/5wN1Wn51d_Y
You have object seams!
This occurs when there is a physical or perceived gap between objects.
There are multiple causes for this.
1. Floating Point Imprecision
This could be because you are setting the positions of the cubes using ints while they have floating-point dimensions. The symptom is usually that there are no white seams when the camera is close to the objects, and they gradually appear as you get further away, due to floating-point imprecision.
Most of these blocks appear to line up exactly, from most camera positions. But from the occasional unfortunate position, the exact value of A's position plus its vertex at (0.5, 0.5, -0.5) might be slightly different from object B's position plus its vertex at (-0.5, 0.5, -0.5). The result is that Unity shows a tiny gap, within which you can see the shadowed side of cube A.
On paper, 3 == 1/3 * 3 is mathematically correct; however, using floats, 1/3 == 0.333333... and subsequently 3 * 0.333333... == 0.999999... Bingo: a random gap between objects!
So how to solve it? Use floats to calculate the positions of your objects: new Vector3(1,1,1); should be new Vector3(1f,1f,1f);, for example. For further reading on this, try this SO post.
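As a quick standalone illustration of that kind of error (C++ shown here; float in C# behaves the same way, since both are IEEE 754 single precision):

#include <cstdio>

int main()
{
    // 0.1 is not exactly representable in binary floating point, so summing it
    // ten times does not give exactly 1.0f; positions accumulated this way drift.
    float sum = 0.0f;
    for (int i = 0; i < 10; ++i)
        sum += 0.1f;

    std::printf("sum = %.9f, equal to 1.0f? %s\n", sum, sum == 1.0f ? "yes" : "no");
    return 0;
}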
2. Texture Wrap-mode
If you are using textures on your objects, try changing the Wrap Mode of your texture from Repeat to Clamp, or try increasing the texture padding.
3. Shadow Acne - (Lighting and Shadow artifacts)
These are arbitrary patterns of pixels rendered in shadow when they should really be lit, or lit when they should be in shadow.
To prevent shadow acne, a Bias value can be added to the distance in the shadow map to ensure that pixels on the borderline definitely pass the comparison as they should (source).
In Unity, go to your light source and increase Shadow Type > Bias. I would suggest doubling the default value of 0.05 and continuing until it is fixed. You don't want to crank this value to the max, because...
Do not set the Bias value too high, because areas around a shadow near the GameObject casting it are sometimes falsely illuminated. This results in a disconnected shadow, making the GameObject look as if it is flying above the ground.
Are you using different blocks that you place against each other? It sounds like the blocks are not completely flush against each other, which lets you see the side of the next block (this would explain the camera Y position mattering: you might see the side better from higher up). That side will have different lighting and appear as a different/lighter colour. To check whether this is the problem, try overlapping them slightly by hand in the editor and see if the problem still occurs.
Making the blocks kinematic solves that. The issue is the rigid bodies bumping up against one another.

How can I improve the performance of my custom OpenGL ES 2.0 depth texture generation?

I have an open source iOS application that uses custom OpenGL ES 2.0 shaders to display 3-D representations of molecular structures. It does this by using procedurally generated sphere and cylinder impostors drawn over rectangles, instead of these same shapes built using lots of vertices. The downside to this approach is that the depth values for each fragment of these impostor objects needs to be calculated in a fragment shader, to be used when objects overlap.
Unfortunately, OpenGL ES 2.0 does not let you write to gl_FragDepth, so I've needed to output these values to a custom depth texture. I do a pass over my scene using a framebuffer object (FBO), only rendering out a color that corresponds to a depth value, with the results being stored into a texture. This texture is then loaded into the second half of my rendering process, where the actual screen image is generated. If a fragment at that stage is at the depth level stored in the depth texture for that point on the screen, it is displayed. If not, it is tossed. More about the process, including diagrams, can be found in my post here.
The generation of this depth texture is a bottleneck in my rendering process and I'm looking for a way to make it faster. It seems slower than it should be, but I can't figure out why. In order to achieve the proper generation of this depth texture, GL_DEPTH_TEST is disabled, GL_BLEND is enabled with glBlendFunc(GL_ONE, GL_ONE), and glBlendEquation() is set to GL_MIN_EXT. I know that a scene output in this manner isn't the fastest on a tile-based deferred renderer like the PowerVR series in iOS devices, but I can't think of a better way to do this.
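For reference, that state amounts to something like the following sketch (not the project's actual setup code):

#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>   // GL_MIN_EXT comes from the EXT_blend_minmax extension

// State for the depth-encoding pass: depth testing off, blending on with a MIN
// equation so the minimum encoded depth value is kept per pixel.
static void configureDepthEncodingPass(void)
{
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    glBlendEquation(GL_MIN_EXT);
}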
My depth fragment shader for spheres (the most common display element) looks to be at the heart of this bottleneck (Renderer Utilization in Instruments is pegged at 99%, indicating that I'm limited by fragment processing). It currently looks like the following:
precision mediump float;
varying mediump vec2 impostorSpaceCoordinate;
varying mediump float normalizedDepth;
varying mediump float adjustedSphereRadius;
const vec3 stepValues = vec3(2.0, 1.0, 0.0);
const float scaleDownFactor = 1.0 / 255.0;
void main()
{
    float distanceFromCenter = length(impostorSpaceCoordinate);
    if (distanceFromCenter > 1.0)
    {
        gl_FragColor = vec4(1.0);
    }
    else
    {
        float calculatedDepth = sqrt(1.0 - distanceFromCenter * distanceFromCenter);
        mediump float currentDepthValue = normalizedDepth - adjustedSphereRadius * calculatedDepth;
        // Inlined color encoding for the depth values
        float ceiledValue = ceil(currentDepthValue * 765.0);
        vec3 intDepthValue = (vec3(ceiledValue) * scaleDownFactor) - stepValues;
        gl_FragColor = vec4(intDepthValue, 1.0);
    }
}
On an iPad 1, this takes 35 - 68 ms to render a frame of a DNA spacefilling model using a passthrough shader for display (18 to 35 ms on iPhone 4). According to the PowerVR PVRUniSCo compiler (part of their SDK), this shader uses 11 GPU cycles at best, 16 cycles at worst. I'm aware that you're advised not to use branching in a shader, but in this case that led to better performance than otherwise.
When I simplify it to
precision mediump float;
varying mediump vec2 impostorSpaceCoordinate;
varying mediump float normalizedDepth;
varying mediump float adjustedSphereRadius;
void main()
{
gl_FragColor = vec4(adjustedSphereRadius * normalizedDepth * (impostorSpaceCoordinate + 1.0) / 2.0, normalizedDepth, 1.0);
}
it takes 18 - 35 ms on iPad 1, but only 1.7 - 2.4 ms on iPhone 4. The estimated GPU cycle count for this shader is 8 cycles. The change in render time based on cycle count doesn't seem linear.
Finally, if I just output a constant color:
precision mediump float;
void main()
{
gl_FragColor = vec4(0.5, 0.5, 0.5, 1.0);
}
the rendering time drops to 1.1 - 2.3 ms on iPad 1 (1.3 ms on iPhone 4).
The nonlinear scaling in rendering time and sudden change between iPad and iPhone 4 for the second shader makes me think that there's something I'm missing here. A full source project containing these three shader variants (look in the SphereDepth.fsh file and comment out the appropriate sections) and a test model can be downloaded from here, if you wish to try this out yourself.
If you've read this far, my question is: based on this profiling information, how can I improve the rendering performance of my custom depth shader on iOS devices?
Based on the recommendations by Tommy, Pivot, and rotoglup, I've implemented some optimizations which have led to a doubling of the rendering speed for both the depth texture generation and the overall rendering pipeline in the application.
First, I re-enabled the precalculated sphere depth and lighting texture that I'd used before with little effect, only now I use proper lowp precision values when handling the colors and other values from that texture. This combination, along with proper mipmapping for the texture, seems to yield a ~10% performance boost.
More importantly, I now do a pass before rendering both my depth texture and the final raytraced impostors where I lay down some opaque geometry to block pixels that would never be rendered. To do this, I enable depth testing and then draw out the squares that make up the objects in my scene, shrunken by sqrt(2) / 2, with a simple opaque shader. This will create inset squares covering area known to be opaque in a represented sphere.
I then disable depth writes using glDepthMask(GL_FALSE) and render the square sphere impostor at a location closer to the user by one radius. This allows the tile-based deferred rendering hardware in the iOS devices to efficiently strip out fragments that would never appear onscreen under any conditions, yet still give smooth intersections between the visible sphere impostors based on per-pixel depth values. This is depicted in my crude illustration below:
In this example, the opaque blocking squares for the top two impostors do not prevent any of the fragments from those visible objects from being rendered, yet they block a chunk of the fragments from the lowest impostor. The frontmost impostors can then use per-pixel tests to generate a smooth intersection, while many of the pixels from the rear impostor don't waste GPU cycles by being rendered.
I hadn't thought to disable depth writes, yet leave on depth testing when doing the last rendering stage. This is the key to preventing the impostors from simply stacking on one another, yet still using some of the hardware optimizations within the PowerVR GPUs.
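As a rough sketch, the pass ordering described above looks something like this (the two callbacks stand in for the application's own draw calls and are not from the project):

#include <OpenGLES/ES2/gl.h>

// Rough sketch: opaque blockers first (writing depth), then the impostors
// tested against that depth with depth writes disabled.
void renderWithOpaquePrepass(void (*drawInsetOpaqueSquares)(void),
                             void (*drawSphereImpostorQuads)(void))
{
    // Pass 1: cheap opaque blockers (squares inset by sqrt(2)/2) write depth.
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    drawInsetOpaqueSquares();

    // Pass 2: full impostor quads, pulled toward the viewer by one sphere radius,
    // tested against that depth but no longer writing it.
    glDepthMask(GL_FALSE);
    drawSphereImpostorQuads();
}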
In my benchmarks, rendering the test model I used above yields times of 18 - 35 ms per frame, as compared to the 35 - 68 ms I was getting previously, a near doubling in rendering speed. Applying this same opaque geometry pre-rendering to the raytracing pass yields a doubling in overall rendering performance.
Oddly, when I tried to refine this further by using inset and circumscribed octagons, which should cover ~17% fewer pixels when drawn, and be more efficient with blocking fragments, performance was actually worse than when using simple squares for this. Tiler utilization was still less than 60% in the worst case, so maybe the larger geometry was resulting in more cache misses.
EDIT (5/31/2011):
Based on Pivot's suggestion, I created inscribed and circumscribed octagons to use instead of my rectangles, only I followed the recommendations here for optimizing triangles for rasterization. In previous testing, octagons yielded worse performance than squares, despite removing many unnecessary fragments and letting you block covered fragments more efficiently. By adjusting the triangle drawing as follows:
I was able to reduce overall rendering time by an average of 14% on top of the above-described optimizations by switching to octagons from squares. The depth texture is now generated in 19 ms, with occasional dips to 2 ms and spikes to 35 ms.
EDIT 2 (5/31/2011):
I've revisited Tommy's idea of using the step function, now that I have fewer fragments to discard due to the octagons. This, combined with a depth lookup texture for the sphere, now leads to a 2 ms average rendering time on the iPad 1 for the depth texture generation for my test model. I consider that to be about as good as I could hope for in this rendering case, and a giant improvement from where I started. For posterity, here is the depth shader I'm now using:
precision mediump float;
varying mediump vec2 impostorSpaceCoordinate;
varying mediump float normalizedDepth;
varying mediump float adjustedSphereRadius;
varying mediump vec2 depthLookupCoordinate;
uniform lowp sampler2D sphereDepthMap;
const lowp vec3 stepValues = vec3(2.0, 1.0, 0.0);
void main()
{
    lowp vec2 precalculatedDepthAndAlpha = texture2D(sphereDepthMap, depthLookupCoordinate).ra;
    float inCircleMultiplier = step(0.5, precalculatedDepthAndAlpha.g);
    float currentDepthValue = normalizedDepth + adjustedSphereRadius - adjustedSphereRadius * precalculatedDepthAndAlpha.r;
    // Inlined color encoding for the depth values
    currentDepthValue = currentDepthValue * 3.0;
    lowp vec3 intDepthValue = vec3(currentDepthValue) - stepValues;
    gl_FragColor = vec4(1.0 - inCircleMultiplier) + vec4(intDepthValue, inCircleMultiplier);
}
I've updated the testing sample here, if you wish to see this new approach in action as compared to what I was doing initially.
I'm still open to other suggestions, but this is a huge step forward for this application.
On the desktop, it was the case on many early programmable devices that while they could process 8 or 16 or whatever fragments simultaneously, they effectively had only one program counter for the lot of them (since that also implies only one fetch/decode unit and one of everything else, as long as they work in units of 8 or 16 pixels). Hence the initial prohibition on conditionals and, for a while after that, the situation where if the conditional evaluations for pixels that would be processed together returned different values, those pixels would be processed in smaller groups in some arrangement.
Although PowerVR aren't explicit, their application development recommendations have a section on flow control and make a lot of recommendations about dynamic branches usually being a good idea only where the result is reasonably predictable, which makes me think they're getting at the same sort of thing. I'd therefore suggest that the speed disparity may be because you've included a conditional.
As a first test, what happens if you try the following?
void main()
{
    float distanceFromCenter = length(impostorSpaceCoordinate);
    // the step function doesn't count as a conditional
    float inCircleMultiplier = step(distanceFromCenter, 1.0);
    float calculatedDepth = sqrt(1.0 - distanceFromCenter * distanceFromCenter * inCircleMultiplier);
    mediump float currentDepthValue = normalizedDepth - adjustedSphereRadius * calculatedDepth;
    // Inlined color encoding for the depth values
    float ceiledValue = ceil(currentDepthValue * 765.0) * inCircleMultiplier;
    vec3 intDepthValue = (vec3(ceiledValue) * scaleDownFactor) - (stepValues * inCircleMultiplier);
    // use the result of the step to combine results
    gl_FragColor = vec4(1.0 - inCircleMultiplier) + vec4(intDepthValue, inCircleMultiplier);
}
Many of these points have been covered by others who have posted answers, but the overarching theme here is that your rendering does a lot of work that will be thrown away:
1. The shader itself does some potentially redundant work. The length of a vector is likely to be calculated as sqrt(dot(vector, vector)). You don’t need the sqrt to reject fragments outside of the circle, and you’re squaring the length to calculate the depth, anyway. Additionally, have you looked at whether or not explicit quantization of the depth values is actually necessary, or can you get away with just using the hardware’s conversion from floating-point to integer for the framebuffer (potentially with an additional bias to make sure your quasi-depth tests come out right later)?
2. Many fragments are trivially outside the circle. Only π/4 of the area of the quads you’re drawing produce useful depth values. At this point, I imagine your app is heavily skewed towards fragment processing, so you may want to consider increasing the number of vertices you draw in exchange for a reduction in the area that you have to shade. Since you’re drawing spheres through an orthographic projection, any circumscribing regular polygon will do, although you may need a little extra size depending on zoom level to make sure you rasterize enough pixels (see the sketch after this list).
3. Many fragments are trivially occluded by other fragments. As others have pointed out, you’re not using hardware depth test, and therefore not taking full advantage of a TBDR’s ability to kill shading work early. If you’ve already implemented something for 2), all you need to do is draw an inscribed regular polygon at the maximum depth that you can generate (a plane through the middle of the sphere), and draw your real polygon at the minimum depth (the front of the sphere). Both Tommy’s and rotoglup’s posts already contain the state vector specifics.
Note that 2) and 3) apply to your raytracing shaders as well.
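Purely as an illustration of point 2) (none of these names come from the project), vertices for an inscribed or circumscribed regular polygon around an impostor circle can be generated like this; a circumscribed octagon covers roughly 17% less area than the circumscribing square:

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Vertices of a regular n-gon for a circle of radius r. With circumscribed = true
// the polygon fully covers the circle (its circumradius is r / cos(pi/n)); with
// false its vertices lie on the circle, so it is inscribed.
std::vector<Vec2> regularPolygon(int n, float r, bool circumscribed)
{
    const float pi = 3.14159265358979f;
    const float R = circumscribed ? r / std::cos(pi / n) : r;
    std::vector<Vec2> verts(n);
    for (int i = 0; i < n; ++i) {
        const float a = 2.0f * pi * i / n;
        verts[i] = { R * std::cos(a), R * std::sin(a) };
    }
    return verts;
}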
I'm no mobile platform expert at all, but I think that what bites you is that:
your depth shader is quite expensive
you experience massive overdraw in your depth pass, since you disable the GL_DEPTH test
Wouldn't an additional pass, drawn before the depth pass, be helpful?
This pass could do a GL_DEPTH prefill, for example by drawing each sphere as a camera-facing quad (or a cube, which may be easier to set up) contained in the associated sphere. This pass could be drawn without a color mask or fragment shader, just with GL_DEPTH_TEST and glDepthMask enabled. On desktop platforms, these kinds of passes get drawn faster than color + depth passes.
Then in your depth computation pass, you could enable GL_DEPTH_TEST and disable glDepthMask; this way your shader would not be executed on pixels that are hidden by nearer geometry.
This solution would involve issuing another set of draw calls, so this may not be beneficial.
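In GL terms, the suggestion amounts to something like the following sketch (the callbacks are assumed placeholders, not code from the project):

#include <OpenGLES/ES2/gl.h>

// Sketch of the suggested prefill, with callbacks standing in for the
// application's own draw calls.
void depthPrefillThenEncode(void (*drawSphereBoundingGeometry)(void),
                            void (*drawDepthEncodingImpostors)(void))
{
    // Prefill pass: write depth only, no color output, trivial fragment work.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    drawSphereBoundingGeometry();

    // Depth computation pass: test against the prefilled depth, don't write it.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    drawDepthEncodingImpostors();
}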

How do I map a texture to the sides of an icosahedron?

I have been trying to develop a 3D game for a long time now. I went through this tutorial and found that I didn't know enough to actually make the game.
I am currently trying to add a texture to the icosahedron he used in the tutorial (in the "Look at Basic Drawing" section), but I cannot get the texture onto more than one side. The other sides are completely invisible for no logical reason (they showed up perfectly until I added the texture).
Here are my main questions:
How do I make the texture show up properly without using a million vertices and colors to mimic the results?
How can I move the object based on a variable that I can set in other functions?
Try to think of your icosahedron as a low-poly sphere. I suppose LaMarche's icosahedron has its center at (0,0,0). Look at this tutorial; it is written for DirectX, but it explains the general principle of sphere texture mapping: http://www.mvps.org/directx/articles/spheremap.htm. I used it in my project and it works great. You move the 3D object by applying various transformation matrices. You should have something like this:
glPushMatrix();
glTranslatef(x, y, z);   // x, y, z = the offset to move the icosahedron by
// draw the icosahedron here
glPopMatrix();
Here is a code snippet of how I did the texture coordinates for a semisphere shape, based on the tutorial mentioned above:
GLfloat *ellipsoidTexCrds;
Vector3D *ellipsoidNorms;

int numVerts = *numEllipsoidVerticesHandle;
ellipsoidTexCrds = calloc(numVerts * 2, sizeof(GLfloat));
ellipsoidNorms = *ellipsoidNormalsHandle;

for (int i = 0, j = 0; i < numVerts * 2; i += 2, j++)
{
    ellipsoidTexCrds[i] = asin(ellipsoidNorms[j].x) / M_PI + 0.5;
    ellipsoidTexCrds[i+1] = asin(ellipsoidNorms[j].y) / M_PI + 0.5;
}
I wrote this about a year and a half ago, but I can remember that I calculated my vertex normals as being equal to normalized vertices. That is possible because when you have a spherical shape centered at (0,0,0), then vertices basically describe rays from the center of the sphere. Normalize them, and you got yourself vertex normals.
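In other words, for a sphere-like mesh centered at the origin (a small illustrative helper, not code from the tutorial):

#include <cmath>

struct Vec3 { float x, y, z; };

// For a sphere-like mesh centered at the origin, the vertex normal is simply
// the vertex position normalized to unit length.
Vec3 normalFromVertex(Vec3 v)
{
    const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}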
And by the way, if you're planning to use a 3D engine on the iPhone, use Ogre3D; it's really fast.
hope this helps :)