Weird Lines 3D Unity

I'm working on a project using Unity 5.4.
In this project, blocks are stacked next to each other.
However, some annoying weird lines appear between them. On Android these
lines occur more often than on PC.
For illustration purposes I added an image and video.
Please zoom in on the picture to see the lines I'm speaking of clearly.
Could anyone please provide a solution to get rid of this nuisance?
Thanks in advance.
Block alignment code snippet:
for (int x = 0; x < xSize; x++)
{
    for (int z = 0; z < zSize; z++)
    {
        // Grid coordinates; the ints are implicitly converted to floats.
        Vector3 pos = new Vector3(x, -layerDepth, z);
        InstantiateBlock(pos);
    }
}
Video link: https://youtu.be/5wN1Wn51d_Y

You have object seams!
This occurs when there is a physical or perceived gap between objects.
There are multiple causes for this.
1. Floating Point Imprecision
This could be because you are setting the positions of the cubes to integer coordinates while they have floating-point dimensions. The symptom is usually that there are no white seams when the camera is close to the objects, and that they gradually appear as you move further away, due to floating-point imprecision. (More.)
Most of these blocks appear to line up exactly from most camera positions. But from the occasional unfortunate position, the exact value of A's position plus its vertex at (0.5, 0.5, -0.5) might be slightly different from object B's position plus its vertex at (-0.5, 0.5, -0.5). The result is that Unity shows a tiny gap, within which you can see the shadowed side of cube A.
If you consider the following on paper, 1 == 1/3 * 3 is mathematically correct. However, using floats, 1/3 == 0.333333... and subsequently 3 * 0.333333... == 0.999999... Bingo! Random gap between objects!
So how do you solve it? Use floats to calculate the positions of your objects: new Vector3(1,1,1); should be new Vector3(1f,1f,1f); for example. For further reading on this, try this Stack Overflow post.
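As a minimal sketch of that idea (blockSize is an assumed spacing value; xSize, zSize, layerDepth and InstantiateBlock come from the question), derive each position from the integer grid index in a single multiplication instead of accumulating a float offset block by block:

float blockSize = 1f;
for (int x = 0; x < xSize; x++)
{
    for (int z = 0; z < zSize; z++)
    {
        // x * blockSize is evaluated identically for neighbouring blocks,
        // so shared faces land on exactly the same float value; a running
        // "pos += blockSize" accumulator would drift instead.
        Vector3 pos = new Vector3(x * blockSize, -layerDepth, z * blockSize);
        InstantiateBlock(pos);
    }
}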
2. Texture Wrap-mode
If you are using textures on your objects, try changing the Wrap Mode of your texture from Repeat to Clamp, or try increasing the texture padding.
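The same setting can also be flipped from code (a small sketch; blockTexture is an assumed reference to the texture on the block material):

blockTexture.wrapMode = TextureWrapMode.Clamp; // stop sampling from bleeding across the texture edge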
3. Shadow Acne - (Lighting and Shadow artifacts)
These are arbitrary patterns of pixels rendered in shadow when they should really be lit, or lit when they should be in shadow.
To prevent shadow acne, a bias value can be added to the distance in the shadow map to ensure that pixels on the borderline definitely pass the comparison as they should. (Source.)
In Unity, go to your light source and then increase Shadow Type > Bias. I would suggest doubling the default value of 0.05 and continuing until it is fixed. You don't want to crank this value to the max, because...
Do not set the Bias value too high, because areas around a shadow near the GameObject casting it are sometimes falsely illuminated. This results in a disconnected shadow, making the GameObject look as if it is flying above the ground.
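A rough script-side equivalent of the editor steps above (a sketch; assumes the script sits on the directional light, and 0.1 is just the doubled default suggested above):

Light sun = GetComponent<Light>();
sun.shadowBias = 0.1f;        // double the 0.05 default, tune upward until the acne disappears
sun.shadowNormalBias = 0.4f;  // the normal-based bias can also help with acne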

Are you using different blocks that you put against each other? Your problem sounds like the blocks are not completely flush against each other, which causes you to see the side of the next block (this explains the camera Y changing: you might see the side better from higher up). That side will have different lighting and appear as a different/lighter colour. To check if this is the problem, try overlapping them slightly, manually in the editor or with a quick script like the one below, and see if the problem still occurs.
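A throwaway sketch for that test (assumes the blocks are unit cubes parented under a common transform):

// Scale every block up by 0.1% so neighbours overlap slightly;
// if the lines disappear, gaps between the blocks were the culprit.
foreach (Transform block in transform)
    block.localScale = Vector3.one * 1.001f;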

Making the blocks kinematic solves that; the issue is the rigid bodies bumping up against one another.
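For example (a sketch, assuming each block has a Rigidbody):

Rigidbody rb = GetComponent<Rigidbody>();
rb.isKinematic = true; // physics stops nudging the block out of alignment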

Related

Unity3d: How to stretch (or share) a shader across multiple objects

I am tinkering around with cubes, trying to build variations of 'block types' (in an effort to get more familiar with Unity's abilities, shaders, editor tools, etc.).
I have a generic cube:
To which I want to add a material/shader, which I have done (no problem there):
Which looks well enough (for my purposes) when it's just one block. But when I stick them all together, I don't like the effect: you can see the individual boxes, and the shader (which you can't see in the still image) is actually animated water, so when it's animating it looks pretty ugly.
(Bad/undesired)
I am trying to STRETCH or share the shader/material across all the selected blocks. See the below example (in this case, I have taken a SINGLE block and stretched it, but that's not in keeping with the spirit of having individual blocks, so it's also not what I want).
(better/more desired)
I have thought the following may help, but they all seem overly complicated (i.e. I think I'm going about it incorrectly):
Have the individual blocks, but stretch a single plane across them and then apply the material.
I have found examples of programmatically joining meshes, and then applying the material/shader to the single combined object.
Take a single block and stretch it to the dimensions needed.
Maybe (not sure if I can) have a plane with the water material applied to it, and use the blocks as masks to only display water for those blocks? Not sure how that works...
In the end I am hoping to have the following:
Individual blocks (so I can interact with them).
Shader animations/colors are shared across the shared/connected blocks.
It won't always be a 2x3 grid... it could be diagonal, or contain odd shapes of connected blocks...
(this is all in EDITOR mode).
Any thoughts on how I might approach this?
Phrases you could try searching are "converting from world space to uv space", "transforming uv coordinates", "uv math". UV is the name for coordinates in textures that a shader samples from, and if you take already existing shader code, you can do interesting things by changing the UV(s) it uses. One of those things is letting you "stretch" it.
In your 2x3 cube example, you could tell each cube to treat its U value as going from 0 to 0.5 or from 0.5 to 1, and its V as going from 0 to 0.33, 0.33 to 0.67, or 0.67 to 1, depending on where it is, instead of each one going from 0 to 1. You could do this by having properties on the shader that tell it where to start the UV (a) and where to end it (b), and then lerping from (0,0)-(1,1) to a-b.
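From the C# side, that per-cube range could be supplied roughly like this (a sketch: _UVStart and _UVEnd are hypothetical shader properties your shader would need to read, and cubes, gridWidth, gridHeight are assumed):

var props = new MaterialPropertyBlock();
for (int x = 0; x < gridWidth; x++)
{
    for (int y = 0; y < gridHeight; y++)
    {
        // Give each cube its own slice of the overall 0..1 UV range
        // without creating a separate material per cube.
        props.SetVector("_UVStart", new Vector4((float)x / gridWidth, (float)y / gridHeight, 0, 0));
        props.SetVector("_UVEnd", new Vector4((x + 1f) / gridWidth, (y + 1f) / gridHeight, 0, 0));
        cubes[x, y].GetComponent<Renderer>().SetPropertyBlock(props);
    }
}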
My answer to a different question uses some similar logic, comparing the world position of the pixel against a range of world positions to get a UV. The relevant shader code is:
fixed4 colorizedMapUV = (IN.worldPos.xz-_WorldSpaceRange.xy)
/ (_WorldSpaceRange.zw-_WorldSpaceRange.xy);
Another option is to only look at the world position and completely disregard any notion of where the "corners" of the UV should be. A method called "triplanar mapping" might guide you to a solution that does this.

Unity Shader Graph: Combining texture rotation and offset

I'm making a water simulation, and I'm trying to visualize the velocity field. In this demo I continuously add water in the centre of the image, where the blue 'raindrop' things are.
I have a texture where rg is the X and Y direction of the velocity, and ba is the total movement of water through it (i.e. every step: ba = ba + rg * delta_time).
I'm working in Unity Shader Graph.
I want to rotate a 'ripple' texture in the direction of the velocity, and then translate it in that direction as well. To prevent the shader from jumping around when the velocity changes, I thought of using the ba channels (which were previously unused) to keep a running total of the velocity, as described above.
Both the rotation (based on the velocity alone) and the translation (based on the 'total velocity') work fine on their own. But when I sum them together, it looks like the translation is also rotated. I'm not sure why this happens.
Here's what I do:
First part: rotating my water texture in the direction of the velocity, and that looks fine:
The shader itself looks like this:
So basically I discretize the uv (custom function on the right), get the angle of the velocity (using arctan2), and then rotate each discrete block using the Rotate block. This works as expected.
Second part: translating the texture based on the total velocity (in the ba channels), also works as expected:
The shader itself looks like this:
Again I use the discretized UV; now I translate each block based on the ba channels, which contain the running total of the velocity (ba = ba + rg * delta_time each time step). As you can see, this shows the textures flowing away from the centre (where water is added constantly). This is what I would expect to happen.
Now, when I combine them, it goes wrong:
The one I circled in red shows the problem best (though all blocks seem to have it to some degree, depending on how much they were rotated). The arrow points to the bottom-right, which seems to be correct; however, the texture flows to the top now.
The shader:
So here I add the rotated discrete block to the translation. But it looks like the translation part is now also rotated, even though I add them together after the Rotate block; even though the translation itself isn't rotated, it looks like it is.
Why is this happening? And how can I fix it?
I hope I explained it adequately, since it's not easy to show in just pictures and gifs.
Thanks!
I fixed my problem by storing just the total distance moved in the b channel (so b += length(rg)) rather than storing the x and y of the offset in the b and a channels.
Then I use float2(0, b) as the offset.
This offset is then also rotated for some reason, and it visually works the way I wanted.
However, I still don't really see why; sometimes I think I get it, and then I think some more and I don't any more.
So if anyone knows why this happens and can explain it, I'm happy to accept that answer.
For now, though, it is solved.

.dae model disappears when approaching

When I move towards my .dae imported model, it disappears. I'm not "inside" the mesh yet, visibly at least, so I don't know what the deal is.
It looks like your object is closer than the scene-view camera's "Near Clip Plane", and is not being rendered as a result. The default editor "near clip plane" distance is around 0.3 units, so it shouldn't normally interfere with your objects.
Check that your object scale is correct. If your object is very small, the scene camera's near clip plane will seem much farther in comparison, and will appear to clip objects more aggressively.
You can create a default "Cube" primitive to check the size of your objects. Cubes are 1 unit in all dimensions by default, and most of the time it's a good idea to roughly map one unit to a real-world scale of 1 meter. If your object is considerably smaller than the cube, you may want to try scaling them up and seeing if that helps.
The F key is a shortcut that automatically zooms and focuses the scene view on an object. Select the GameObject and press F; the problem should be gone.
If the problem is still there, select the Camera and change Clipping Planes > Near to 0.3 and Far to 50000. You can tweak these values until the object stops disappearing, although pressing F should solve it.
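On a game camera, the same values can be set from a script (a sketch; the scene-view camera itself is adjusted through the editor, not code):

Camera cam = Camera.main;
cam.nearClipPlane = 0.3f;   // anything closer than this gets clipped
cam.farClipPlane = 50000f;  // anything farther than this gets clipped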

Unity - Avoid quad clipping or set rendering order

I am using Unity 5 to develop a game. I'm still learning, so this may be a dumb question. I have read about Depth Buffer and Depth Texture, but I cannot seem to understand if that applies here or not.
My setting is simple: I create a grid using several quads (40x40), which I use to snap buildings. Those buildings also have a base made with quads. Every time I put one on the map, the quads overlap and look like the picture.
As you can see, the red quad is "merging" with the floor (white quads).
How can I make sure Unity renders the red one on top and the white ones as background? Of course, I could change the red quad's Y position, but that seems like the wrong way of solving this.
This is a common issue, called Z-Fighting.
Usually you can reduce it by reducing the range of “Clipping Planes” of the camera, but in your case the quads are at the same Y position, so you can’t avoid it without changing the Y position.
I don't know if it is an option for you, but if you use a SpriteRenderer (Unity 2D) you don't have that problem, and you can just set "Sorting Layer" or "Order in Layer" if you want to modify the rendering order.
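Sketch of that 2D route (the "Buildings" layer name is an assumption; it would have to exist under your project's sorting layers):

SpriteRenderer sr = GetComponent<SpriteRenderer>();
sr.sortingLayerName = "Buildings"; // which group it draws with
sr.sortingOrder = 1;               // higher values render on top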

Glitch when moving camera in OpenGL

I am writing a tile-based game engine for the iPhone and it works in general apart from the following glitch. Basically, the camera will always keep the player in the centre of the screen, and it moves to follow the player correctly and draws everything correctly when stationary. However whilst the player is moving, the tiles of the surface the player is walking on glitch as shown:
http://img41.imageshack.us/img41/9422/movingy.png
Compared to the stationary (correct):
http://img689.imageshack.us/img689/7026/still.png
Does anyone have any idea why this could be?
Thanks for the responses so far. Floating point error was my first thought also and I tried slightly increasing the size of the tiles but this did not help. Changing glClearColor to red still leaves black gaps so maybe it isn't floating point error. Since the tiles in general will use different textures, I don't know if vertex arrays can be used (I always thought that the same texture had to be applied to everything in the array, correct me if I'm wrong), and I don't think VBO is available in OpenGL ES. Setting the filtering to nearest neighbour improved things but the glitch still happens every ten frames or so, and the pixelly result means that this solution is not viable anyway.
The main difference between what I'm doing now and what I've done in the past is that this time I am moving the camera rather than the stationary objects in the world (i.e. the tiles, the player is still being moved). The code I'm using to move the camera is:
void Camera::CentreAtPoint( GLfloat x, GLfloat y )
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Orthographic projection centred on (x, y); top/bottom are swapped
    // so that +y runs down the screen.
    glOrthof(x - size.x / 2.0f, x + size.x / 2.0f,
             y + size.y / 2.0f, y - size.y / 2.0f,
             0.01f, 5.0f);
    glMatrixMode(GL_MODELVIEW);
}
Is there a problem with doing things this way and if so is there a solution?
My first guess would be floating point rounding error. This could cause the coordinates for your quads to be just a little bit out, resulting in the gaps you see. To verify this, you might want to try changing glClearColor() and seeing if the gaps change colour with it.
One solution to this would be to make the tiles slightly larger. Only a very small increment is needed (like 0.0001f) to cover over this kind of error.
Alternatively, you could try using a Vertex Array or a VBO to store your ground mesh (ensuring that adjoining squares share vertices). I'd expect this to fix the issue, but I'm not 100% sure - and it should also render faster.
Sometimes this is caused by filtering issues on border texels. You could try using GL_CLAMP_TO_EDGE in your texture parameters.
It's due to filtering: use clamp-to-edge AND leave a 1- or 2-pixel border. This is why we have a border option in the glTexImage2D call; change the border parameter from 0 to 1.