I am writing a tile-based game engine for the iPhone and it works in general, apart from the following glitch. The camera always keeps the player in the centre of the screen; it follows the player correctly, and everything draws correctly while the player is stationary. However, while the player is moving, the tiles of the surface the player is walking on glitch as shown:
http://img41.imageshack.us/img41/9422/movingy.png
Compared to the stationary (correct):
http://img689.imageshack.us/img689/7026/still.png
Does anyone have any idea why this could be?
Thanks for the responses so far. Floating point error was my first thought too, and I tried slightly increasing the size of the tiles, but this did not help. Changing glClearColor to red still leaves black gaps, so maybe it isn't floating point error. Since the tiles will generally use different textures, I don't know whether vertex arrays can be used (I always thought the same texture had to be applied to everything in the array; correct me if I'm wrong), and I don't think VBOs are available in OpenGL ES. Setting the filtering to nearest neighbour improved things, but the glitch still happens every ten frames or so, and the pixelated result means that solution isn't viable anyway.
The main difference between what I'm doing now and what I've done in the past is that this time I am moving the camera rather than the stationary objects in the world (i.e. the tiles; the player is still being moved). The code I'm using to move the camera is:
void Camera::CentreAtPoint( GLfloat x, GLfloat y )
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // left, right, bottom, top, near, far - bottom and top are swapped here
    // so that screen-space y increases downwards
    glOrthof(x - size.x / 2.0f, x + size.x / 2.0f, y + size.y / 2.0f, y - size.y / 2.0f, 0.01f, 5.0f);
    glMatrixMode(GL_MODELVIEW);
}
Is there a problem with doing things this way and if so is there a solution?
My first guess would be floating point rounding error. This could cause the coordinates for your quads to be just a little bit out, resulting in the gaps you see. To verify this, you might want to try changing glClearColor() and seeing if the gaps change colour with it.
One solution to this would be to make the tiles slightly larger. Only a very small increment is needed (like 0.0001f) to cover over this kind of error.
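If you go the slightly-larger-tiles route, the tweak can live wherever each quad is emitted. A minimal sketch under assumed names (TILE_SIZE, drawTileQuad and the grid layout are mine, not from the question; texture setup omitted):

// Illustrative only: expand each tile quad by a tiny epsilon so neighbours
// overlap slightly instead of leaving a sub-pixel gap.
const GLfloat TILE_SIZE = 1.0f;
const GLfloat EPSILON   = 0.0001f;

void drawTileQuad(GLfloat tileX, GLfloat tileY)
{
    const GLfloat verts[] = {
        tileX * TILE_SIZE - EPSILON,          tileY * TILE_SIZE - EPSILON,          // top left
        (tileX + 1.0f) * TILE_SIZE + EPSILON, tileY * TILE_SIZE - EPSILON,          // top right
        tileX * TILE_SIZE - EPSILON,          (tileY + 1.0f) * TILE_SIZE + EPSILON, // bottom left
        (tileX + 1.0f) * TILE_SIZE + EPSILON, (tileY + 1.0f) * TILE_SIZE + EPSILON, // bottom right
    };
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}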
Alternatively, you could try using a Vertex Array or a VBO to store your ground mesh (ensuring that adjoining squares share vertices). I'd expect this to fix the issue, but I'm not 100% sure - and it should also render faster.
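For the vertex array suggestion, the important part is that adjoining quads reuse exactly the same vertex values, so there is nothing for rounding to disagree about. A rough sketch for a single row of ground tiles (names and layout are my own; it assumes the row can be drawn with one texture, e.g. from an atlas, and leaves texture coordinates out):

#define ROW_TILES 10

GLfloat rowVerts[(ROW_TILES + 1) * 4];   /* one x,y pair for the top and bottom of each column */

void buildGroundRow(GLfloat originX, GLfloat originY, GLfloat tileSize)
{
    for (int i = 0; i <= ROW_TILES; ++i) {
        GLfloat x = originX + i * tileSize;   /* computed from the index, never accumulated */
        rowVerts[i * 4 + 0] = x;              /* top of column i */
        rowVerts[i * 4 + 1] = originY;
        rowVerts[i * 4 + 2] = x;              /* bottom of column i */
        rowVerts[i * 4 + 3] = originY + tileSize;
    }
}

void drawGroundRow(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, rowVerts);
    /* one strip covers the whole row; neighbouring quads literally share vertices */
    glDrawArrays(GL_TRIANGLE_STRIP, 0, (ROW_TILES + 1) * 2);
}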
Sometimes this is caused by filtering issues on border texels. You could try using GL_CLAMP_TO_EDGE in your texture parameters.
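In OpenGL ES 1.x that is set once when the tile texture is bound, for example:

/* Stops GL_LINEAR filtering from wrapping around and blending in texels
   from the opposite edge of the image at the tile borders. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);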
It's due to filtering: use clamp-to-edge AND leave a 1 or 2 pixel padding border around each tile in the texture. (Note that in OpenGL ES the border parameter of glTexImage2D must be 0, so the padding has to be baked into the image itself rather than requested in the call.)
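If the tiles instead all live in one shared atlas texture, the padding advice amounts to insetting each tile's texture coordinates so the bilinear filter can never sample the neighbouring tile. A hypothetical helper (the names and the square-atlas assumption are mine):

/* Compute UVs for one tile in an atlas, inset by half a texel so GL_LINEAR
   filtering never bleeds in the neighbouring tile. atlasSize is the atlas
   width/height in pixels (assumed square here). */
void tileUVs(int tilePxX, int tilePxY, int tilePxSize, int atlasSize, GLfloat uv[8])
{
    GLfloat half = 0.5f / (GLfloat)atlasSize;
    GLfloat u0 = (GLfloat)tilePxX / atlasSize + half;
    GLfloat v0 = (GLfloat)tilePxY / atlasSize + half;
    GLfloat u1 = (GLfloat)(tilePxX + tilePxSize) / atlasSize - half;
    GLfloat v1 = (GLfloat)(tilePxY + tilePxSize) / atlasSize - half;

    uv[0] = u0; uv[1] = v0;   /* top left */
    uv[2] = u1; uv[3] = v0;   /* top right */
    uv[4] = u0; uv[5] = v1;   /* bottom left */
    uv[6] = u1; uv[7] = v1;   /* bottom right */
}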
I'm making a water simulation, and I'm trying to visualize the velocity field. In this demo I continuously add water in the centre of the image, where the blue 'raindrop' things are.
I have a texture where rg is the X and Y direction of the velocity, and ba is the total movement of water through it (i.e. every step ba = ba + rg * delta_time).
I'm working in Unity Shader Graph.
I want to rotate a 'ripple' texture in the direction of the velocity, and then translate it in that direction as well. To prevent the shader from jumping around when the velocity changes, I thought of using the ba channels (which were previously unused) to keep a running total of the velocity, as described above.
Both the rotation (based on the velocity alone) and the translation (based on the 'total velocity') work fine on their own. But when I sum them together, it looks like the translation is also rotated. I'm not sure why this happens.
Here's what I do:
First part: rotating my water texture in the direction of the velocity, and that looks fine:
The shader itself looks like this:
So basically I discretize the uv (custom function on the right), get the angle of the velocity (using arctan2), and then rotate each discrete block using the Rotate block. This works as expected.
Second part: translating the texture based on the total velocity (in the ba channels), also works as expected:
The shader itself looks like this:
Again I use the discretized uv; now I translate each block based on the ba channels, which contain the running total of the velocity (ba = ba + rg * delta_time each time step). As you can see, this shows the textures flowing away from the centre (where water is added constantly). This is what I would expect to happen.
Now, when I combine them, it goes wrong:
The one I circled in red shows the problem best (though all blocks seem to have it to some degree, depending on how much they were rotated). The arrow points to the bottom-right, which seems to be correct; however, it flows to the top now.
The shader:
So here I add the rotated discrete block to the translation. But it looks like the translation part is now also rotated, even though I add them together after the Rotate block. So while the translation shouldn't be rotated, it looks like it is.
Why is this happening, and how can I fix it?
I hope I explained it adequately, since it's not easy to show in just pictures and gifs.
Thanks!
So I fixed my problem: rather than storing the x and y of the offset in the b and a channels, I now just store the total distance moved in the b channel (i.e. b += length(rg)).
Then I use float2(0, b) as the offset.
This also ends up rotated for some reason, and visually it works as I wanted.
However, I still don't really see why; sometimes I think I get it, and then I think some more and I don't any more.
So if anyone knows why this happens and can explain, I'm happy to accept that answer.
However, for now it is solved.
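For completeness, here is one way to see why the offset comes out rotated; this is my reading of the graphs described above, not something confirmed in the thread. Because the offset is added to UVs that have already been through the Rotate block, the final sample is effectively tex(R(uv) + t), where R is the per-block rotation and t the offset. A feature sitting at texture coordinate c therefore appears at the screen positions where R(uv) + t = c, i.e. at uv = R⁻¹(c) - R⁻¹(t). As t grows, the pattern slides across the screen by -R⁻¹(t), so the translation direction gets run through the (inverse) rotation even though the Add node comes after the Rotate node. It also explains why the workaround behaves as observed: storing only the travelled distance b and offsetting by float2(0, b) expresses the translation along the block's local axis, and the rotation then maps that axis onto the flow direction, which is exactly the motion that was wanted.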
I'm working on a project, using unity 5.4.
In this project, blocks are stacked next to each other.
However, some annoying weird lines appear. On Android these lines occur more often than on PC.
For illustration purposes I added an image and video.
Please zoom in on the picture to see the lines I'm speaking of clearly.
Could anyone please provide a solution to get rid of this nuisance?
Thanks in advance.
Block alignment code snippet:
for (int x = 0; x < xSize; x++)
    for (int z = 0; z < zSize; z++)
    {
        Vector3 pos = new Vector3(x, -layerDepth, z);
        InstantiateBlock(pos);
    }
Video link: https://youtu.be/5wN1Wn51d_Y
You have object seams!
This occurs when there is a physical or perceived gap between objects.
There are multiple causes for this.
1. Floating Point Imprecision
This could be because you are setting the positions of the cubes to ints while they have floating point dimensions. The symptom of this is usually no white seams when the camera is close to the objects, with the seams gradually appearing as you move further away, due to floating point imprecision.
Most of these blocks appear to line up exactly from most camera positions. But from the occasional unfortunate position, the exact value for A's position plus its vertex at (0.5, 0.5, -0.5) might be slightly different from object B's position plus its vertex at (-0.5, 0.5, -0.5). The result is that Unity shows a tiny gap, within which you can see the shadowed side of cube A.
If you consider the following on paper, 1 == 1/3 * 3 is mathematically correct. However, using floats, 1/3 == 0.333333..., and subsequently 3 * 0.333333... == 0.999999... BINGO! A random gap between objects!
So how to solve it? Use floats to calculate the positions of your objects: new Vector3(1, 1, 1) should be new Vector3(1f, 1f, 1f), for example. For further reading on this, try this SO post.
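The effect itself is language-independent; here is a small C demonstration (my own illustration, not from the answer) of how positions built by repeatedly adding a float step drift away from positions computed directly from the integer index:

#include <stdio.h>

int main(void)
{
    float step = 0.1f;            /* a value with no exact binary representation */
    float accumulated = 0.0f;

    for (int i = 0; i <= 30; ++i) {
        float direct = i * step;  /* position computed from the index each time */
        if (direct != accumulated)
            printf("i=%2d  direct=%.9f  accumulated=%.9f\n", i, direct, accumulated);
        accumulated += step;      /* position built by repeated addition */
    }
    return 0;                     /* on typical IEEE 754 hardware the two disagree after a handful of steps */
}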
2. Texture Wrap-mode
If you are using textures on your objects, try changing the Wrap Mode of your texture from Repeat to Clamp, or try upping the texture padding.
3. Shadow Acne - (Lighting and Shadow artifacts)
These are the arbitrary patterns of pixels that are in shadow when they should really be lit, or lit when they should be in shadow.
To prevent shadow acne, a Bias value can be added to the distance stored in the shadow map, to ensure that pixels on the borderline definitely pass the comparison as they should.
In Unity, go to your light source and increase Shadow Type > Bias. I would suggest doubling the default value of 0.05 and continuing to do so until it is fixed. You don't want to crank this value to max, because...
Do not set the Bias value too high, because areas around a shadow near the GameObject casting it are sometimes falsely illuminated. This results in a disconnected shadow, making the GameObject look as if it is flying above the ground.
Are you using different blocks that you put against each other? Your problem sounds like the blocks are not completely flush against each other, which causes you to see the side of the next block (this would explain the camera Y changing things: you might see the side better from higher up). That side will have different lighting and appear as a different/lighter colour. To check if this is the problem, try overlapping them slightly manually in the editor and see if the problem still occurs.
Making the blocks kinematic solves that. The issue is the rigid bodies bumping up against one another.
I am using Unity 5 to develop a game. I'm still learning, so this may be a dumb question. I have read about Depth Buffer and Depth Texture, but I cannot seem to understand if that applies here or not.
My setup is simple: I create a grid using several quads (40x40) which I use to snap buildings. Those buildings also have a base, made with quads. Every time I put one on the map, the quads overlap and they look like the picture.
As you can see, the red quad is "merging" with the floor (white quads).
How can I make sure Unity renders the red one first, and the white ones are background? Of course, I can change the red quad Y position, but that seems like the wrong way of solving this.
This is a common issue, called Z-Fighting.
Usually you can reduce it by reducing the range of “Clipping Planes” of the camera, but in your case the quads are at the same Y position, so you can’t avoid it without changing the Y position.
I don't know if it is an option for you, but if you use a SpriteRenderer (Unity 2D) you don't have that problem, and you can just set “Sorting Layer” or “Order in Layer” if you want to modify the rendering order.
I am fairly new to programming and I'm doing it, at this point, just to educate myself and have fun.
I'm having a lot of trouble understanding some OpenGL stuff despite having read this great article here. I've also downloaded and played around with an example from the apple developer site that uses a .png image for a sprite. I do eventually want to use an image.
All I want to do is take an image and warp it so that its four corners end up at four different x,y coordinates that I supply. This would be on a timer of sorts (CADisplayLink?) with one or more of these points changing at each moment. I just want to stretch it between these dynamic points.
I'm just having trouble understanding exactly how this works. As I've understood some example code over at the developer center, I can use:
glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
where spriteVertices is something like:
const GLfloat spriteVertices[] = {
-0.90f, -.85f,
0.95f, -0.83f,
-0.85f, 0.85f,
0.80f, 0.80f,
};
The problem is that I don't understand what the numbers actually mean, why some have negatives in front of them, and where they are counted from to get the four corners. How would I need to change the normal x,y coordinates I have in order to plug them into this? (The numbers I have for x,y wouldn't look like numbers between 0 and 1, would they?) I would like something akin to per-pixel accuracy.
Any help is greatly appreciated even if it's just a link to more reading. I'm having trouble finding resources for a newb.
It isn't as complicated as it seems at first. With no projection or modelview transform applied, each pair of numbers is an x,y position in normalized device coordinates: (0, 0) is the centre of the screen and the visible range runs from -1 to +1 on both axes. So 0.80f, 0.80f means 80% of the way from the centre towards the right and top edges, while -0.80, -0.80 means 80% of the way towards the left and bottom edges; the negatives just switch sides. One point of note: OpenGL's y axis points up (as if you were looking up a building from the ground), while the iPhone's native coordinates have y pointing down (as though you were reading a book).
To get pixels, map the -1..+1 range onto the viewport: pixel = (value * 0.5 + 0.5) * viewport size. For example, with a 1024-pixel-wide drawable, 0.8 maps to (0.8 * 0.5 + 0.5) * 1024 = 921.6.
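To make that conversion concrete, here is a pair of hypothetical helpers (my illustration, not from the original answer), assuming no projection or modelview transform so the vertex values are treated directly as normalized device coordinates:

/* viewportSize is the width (or height) you passed to glViewport. */
float pixelToNDC(float pixel, float viewportSize)
{
    return (pixel / viewportSize) * 2.0f - 1.0f;   /* 0..size  ->  -1..+1 */
}

float ndcToPixel(float ndc, float viewportSize)
{
    return (ndc * 0.5f + 0.5f) * viewportSize;     /* -1..+1  ->  0..size */
}

/* Remember that OpenGL's y axis points up while UIKit's points down, so touch
   coordinates usually also need y = viewportHeight - y. */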
This tutorial is for textures, but it is amazing and really helps you learn the coordinate systems:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-6_25.html
In my spare time I like to play around with game development on the iPhone with OpenGL ES. I'm throwing together a small 2D side-scroller demo for fun, and I'm relatively new to OpenGL, and I wanted to get some more experienced developers' input on this.
So here is my question: does it make sense to specify the vertices of each 2D element in model space, then translate each element to its final view-space position each time a frame is drawn?
For example, say I have a set of blocks (squares) that make up the ground in my side-scroller. Each square is defined as:
const GLfloat squareVertices[] = {
-1.0, 1.0, -6.0, // Top left
-1.0, -1.0, -6.0, // Bottom left
1.0, -1.0, -6.0, // Bottom right
1.0, 1.0, -6.0 // Top right
};
Say I have 10 of these squares that I need to draw together as the ground for the next frame. Should I do something like this, for each square visible in the current scene?
glPushMatrix();
{
glTranslatef(currentSquareX, currentSquareY, 0.0);
glVertexPointer(3, GL_FLOAT, 0, squareVertices);
glEnableClientState(GL_VERTEX_ARRAY);
// Do the drawing
}
glPopMatrix();
It seems to me that doing this for every 2D element in the scene, for every frame, gets a bit intense and I would imagine the smarter people who use OpenGL much more than I do may have a better way of doing this.
That all being said, I'm expecting to hear that I should profile the code and see where any bottlenecks may be: to those people, I say: I haven't written any of this code yet, I'm simply in the process of wrapping my mind around it so that when I do go to write it it goes smoother.
On the subject of profiling and optimization, I'm really not trying to prematurely optimize here, I'm just trying to wrap my mind around how one would set up a 2D scene and render it. Like I said, I'm relatively new to OpenGL and I'm just trying to get a feel for how things are done. If anyone has any suggestions on a better way to do this, I'd love to hear your thoughts.
Please keep in mind that I'm not interested in 3D, just 2D for now. Thanks!
You are concerned with the overhead it takes to transform a model (in this case a square) from model coordinates to world coordinates when you have a lot of models. This seems like an obvious optimization for static models.
If you build your squares' vertices in world coordinates, then of course it is going to be faster, as each square avoids the extra cost of those three calls (glPushMatrix, glPopMatrix, and glTranslatef); there is no need to translate from model to world coordinates at render time. I have no idea how much faster this will be; I suspect it won't be a huge optimization, and you lose the modularity of keeping the squares in model coordinates: what if in the future you decide you want these squares to be moveable? That will be a lot harder if you keep their vertices in world coordinates.
In short, it's a tradeoff:
World Coordinates
- More memory - each square needs its own set of vertices.
- Less computation - no need to perform glPushMatrix, glPopMatrix, or glTranslatef for each square at render time.
- Less flexible - lacks support for (or complicates) dynamically moving these squares.
Model Coordinates
- Less memory - the squares can share the same vertex data.
- More computation - each square must perform three extra calls at render time.
- More flexible - squares can easily be moved by adjusting the glTranslatef call.
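If the ground really is static, the "world coordinates" option can look roughly like the sketch below (illustrative names only, reusing the 2x2 square at z = -6.0 from the question): bake every square into one world-space vertex array at load time, then draw the lot with a single call and no per-square matrix work.

#define GROUND_SQUARES 10

GLfloat groundVerts[GROUND_SQUARES * 6 * 3];   /* 2 triangles x 3 vertices x xyz per square */

void buildGround(const GLfloat squareX[], const GLfloat squareY[])
{
    GLfloat *v = groundVerts;
    for (int i = 0; i < GROUND_SQUARES; ++i) {
        GLfloat x = squareX[i], y = squareY[i];
        /* first triangle: top left, bottom left, bottom right */
        *v++ = x - 1.0f; *v++ = y + 1.0f; *v++ = -6.0f;
        *v++ = x - 1.0f; *v++ = y - 1.0f; *v++ = -6.0f;
        *v++ = x + 1.0f; *v++ = y - 1.0f; *v++ = -6.0f;
        /* second triangle: top left, bottom right, top right */
        *v++ = x - 1.0f; *v++ = y + 1.0f; *v++ = -6.0f;
        *v++ = x + 1.0f; *v++ = y - 1.0f; *v++ = -6.0f;
        *v++ = x + 1.0f; *v++ = y + 1.0f; *v++ = -6.0f;
    }
}

void drawGround(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, groundVerts);
    glDrawArrays(GL_TRIANGLES, 0, GROUND_SQUARES * 6);   /* one call, no per-square matrix work */
}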
I guess the only way to know what is the right decision is by doing and profiling. I know you said you haven't written this yet, but I suspect that whether your squares are in model or world coordinates it won't make much of a difference - and if it does, I can't imagine an architecture that you could create where it would be hard to switch your squares from model to world coordinates or vice-versa.
Good luck to you and your adventures in iPhone game development!
If you are only using screen-aligned quads, it might be easier to use the OES Draw Texture extension. Then you can use a single texture to hold all your game "sprites". First specify the crop rectangle by setting the GL_TEXTURE_CROP_RECT_OES TexParameter. This is the boundary of the sprite within the larger texture. To render, call glDrawTexiOES, passing in the desired position and size in viewport coordinates.
int rect[4] = {0, 0, 16, 16};   // x, y, width, height of the sprite within the texture
glBindTexture(GL_TEXTURE_2D, sprites);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, rect);   // set the crop rectangle
glDrawTexiOES(x, y, z, width, height);   // draw at (x, y) in viewport coordinates, scaled to width x height
This extension isn't available on all devices, but it works great on the iPhone.
You might also consider using a static image and just scrolling that instead of drawing each individual block of the floor, and translating its position, etc.