I'm currently working on voxel terrain generation in Unity, and I've run into something annoying:
From certain camera angles, you can see seams between the edges of chunk meshes, as pictured below:
What I know:
This only occurs on the edge between two meshes.
This is not being caused by texture bleeding (The textures are solid colors, so I'm using a very large amount of padding when setting up the UVs).
The positions of all vertices and meshes are showing up as exact integers.
Disabling anti-aliasing almost entirely fixes this (You can still see the occasional speck along the edge).
I'm using Unity's default Standard shader.
Can someone explain what's causing this, and whether there's a way to solve this other than disabling AA?
Almost certainly the side faces are exhibiting z-fighting with the top faces: depth precision is imperfect, so along the seam rounding errors make the very top of the brown face of one cube appear closer to the camera than the very top of the green face of the next.
Ideally, don't draw the brown faces that definitely aren't visible — if a cube has a neighbour on face X then don't draw either its face X or its neighbour's adjoining face.
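A minimal sketch of that idea during chunk mesh generation (the class name, ChunkSize and the occupancy array are mine, not the asker's code): a face is only generated when the neighbouring cell is empty, so the buried faces that cause the seam never exist in the mesh.

using System.Collections.Generic;
using UnityEngine;

// Sketch of neighbour-aware face culling for one chunk. "solid" marks which
// cells contain a block; ChunkSize and the class itself are made-up names.
public class ChunkFaceCulling
{
    const int ChunkSize = 16;   // assumed chunk dimensions
    bool[,,] solid = new bool[ChunkSize, ChunkSize, ChunkSize];

    bool IsSolid(int x, int y, int z)
    {
        // Out of bounds counts as empty; a real chunk would also ask its neighbour chunks.
        if (x < 0 || y < 0 || z < 0 || x >= ChunkSize || y >= ChunkSize || z >= ChunkSize)
            return false;
        return solid[x, y, z];
    }

    // Returns the outward directions of the faces that should actually be meshed.
    public List<Vector3Int> VisibleFaces(int x, int y, int z)
    {
        var faces = new List<Vector3Int>();
        if (!IsSolid(x, y, z))
            return faces;

        Vector3Int[] directions =
        {
            Vector3Int.right, Vector3Int.left, Vector3Int.up, Vector3Int.down,
            new Vector3Int(0, 0, 1), new Vector3Int(0, 0, -1)
        };
        foreach (var dir in directions)
        {
            // Emit a face only when the cell it would touch is empty.
            if (!IsSolid(x + dir.x, y + dir.y, z + dir.z))
                faces.Add(dir);
        }
        return faces;
    }
}

As a bonus, this also cuts the triangle count considerably, since buried interior faces usually dominate a dense voxel chunk.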
In HLSL, how can I calculate lighting based on pixels of a texture, instead of pixels that make up the object?
In other words, if I have a 64x64px texture being rendered on a 1024x768px screen, I want to calculate the lighting as it affects the 64x64px space, resulting in jagged pixels instead of a smooth line.
I've researched dozens of answers but I'm not sure how I can determine at all times if a fragment is a part of a pixel that should be fully lit or not. Maybe this is the wrong approach?
The current implementation uses a diffuse texture and a normal map. It results in what appear to be artifacts (diagonal lines) in the output:
Note: The reason it almost looks correct is because of the normal map, which causes some adjacent pixels to have normals that are angled just enough to light some pixels and not others.
I have been beating my head against this for a while, but I still could not find a way to set up a matrix that projects my Unity game in a Tibia-esque manner:
Reading tutorials on the internet I could figure out how a normal orthographic projection works, but Tibia's is kind of odd.
Digging around the web I found a post by a guy (Clint Bellanger) who describes really well how to get the same perspective in Blender's renderer. According to him:
Start with a scene in 45 degree isometric. Video game style, where the camera angle is Blender (60,0,45).

In Blender if you look at Buttons Window -> Scene -> Render Buttons -> Format, you can set the render aspect ratio. Set AspY to half of AspX. This is the same as taking regular rendered output and scaling X by 50%. If you rendered a cube, the top of the cube will be a perfect square (though at a 45 degree angle).

We can then use Blender nodes to rotate the result 45 degrees. The output:

Note this started as a cube, so there's a lot of "vertical" distortion. So you might have to scale meshes to 50% Z before using this method. Also notice the Edge seems to be applied after the Aspect, so the edge isn't distorted.

Blend file: http://clintbellanger.net/images/temp/UltimaVII.blend (I'm a Nodes noob so there might be a smarter setup).

For kicks, here is that tower again. I pulled it into the above workflow scene and scaled Z by 50%. Click "Re-render this layer" on the first node to create the composite.
In his method he uses things like rescaling the render and changing the scale of the models, but I'm convinced I could get by with just a 4x4 matrix in Unity (or in any other 3D environment, really).
I hope someone more experienced with the quirks of 3D maths can help me figure it out. Thank you! =D
What you ask for is a simple parallel projection. The typical orthographic projection is just a special case where the projection rays are perpendicular to the image plane. However, every parallel projection can be represented by an affine shear transformation followed by a standard orthogonal projection.
I'm convinced I could get by with just a 4x4 matrix in Unity (or in any other 3D environment, really).
Yes. Using default GL conventions here, all you have to do is to take the standard ortho matrix, post-multiply it by an appropriate shear matrix and use that as the projection matrix.
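A minimal sketch of this in Unity (the component name and the shearX/shearY values are assumptions, not from the answer): build the camera's orthographic matrix, post-multiply it by a shear that offsets view-space X and Y by depth, and assign the result to Camera.projectionMatrix.

using UnityEngine;

// Sketch: oblique (sheared) orthographic projection for a Tibia/Ultima-VII look.
// The component name and the shear values are assumptions; attach it to the camera.
[RequireComponent(typeof(Camera))]
public class ObliqueOrthoProjection : MonoBehaviour
{
    public float shearX = 0.0f;   // horizontal slide per unit of view-space depth
    public float shearY = 0.5f;   // vertical slide per unit of view-space depth

    void LateUpdate()
    {
        Camera cam = GetComponent<Camera>();

        // Standard GL-style orthographic matrix built from the camera's settings.
        float h = cam.orthographicSize;
        float w = h * cam.aspect;
        Matrix4x4 ortho = Matrix4x4.Ortho(-w, w, -h, h, cam.nearClipPlane, cam.farClipPlane);

        // Shear view-space X/Y by view-space Z. With GL conventions Z is negative
        // in front of the camera, so flip the signs if the slide goes the wrong way.
        Matrix4x4 shear = Matrix4x4.identity;
        shear[0, 2] = shearX;
        shear[1, 2] = shearY;

        // Ortho post-multiplied by the shear, as described above.
        cam.projectionMatrix = ortho * shear;
    }
}

With the camera looking straight down at the scene, a vertical shear like this slides the tops of objects relative to their bases, which is the oblique look the question describes; the exact shear values are a matter of taste.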
I have a floor surface like in this screenshot: http://prntscr.com/amqstw. If I move the camera to a certain angle I don't see the floor from that angle: http://prntscr.com/amqt19. How can I resolve this problem?
That effect is due to backface culling.
At that angle the camera is (probably) inside the mesh of the floor, so the normal vectors of the cube (I presume) face the other way, and those faces get "culled" (become invisible).
You can turn it off in two ways:
By editing the mesh in your modeling software so that it becomes a "double-sided mesh" (a runtime sketch of this idea follows below), or
By finding a shader online which, once applied to the floor object, deactivates its backface culling (harder to do without screwing up something else).
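If editing the mesh outside Unity isn't convenient, here is a rough sketch of the first option done at runtime (the component name is mine): append a reversed copy of every triangle so the underside of the floor is rendered too.

using UnityEngine;

// Sketch: make the floor mesh double-sided at runtime by appending a reversed
// copy of every triangle. Attach it to the floor object.
[RequireComponent(typeof(MeshFilter))]
public class MakeDoubleSided : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        int[] tris = mesh.triangles;
        int[] doubled = new int[tris.Length * 2];
        tris.CopyTo(doubled, 0);

        // Reversed winding order makes the copy face the opposite direction,
        // so it survives backface culling when seen from "inside" the floor.
        for (int i = 0; i < tris.Length; i += 3)
        {
            doubled[tris.Length + i]     = tris[i];
            doubled[tris.Length + i + 1] = tris[i + 2];
            doubled[tris.Length + i + 2] = tris[i + 1];
        }
        mesh.triangles = doubled;
    }
}

Note that the flipped triangles reuse the original normals, so lighting on the back side will look wrong; for a floor only glimpsed from inside that is usually acceptable, otherwise duplicate the vertices and invert their normals as well.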
I am using Unity 5 to develop a game. I'm still learning, so this may be a dumb question. I have read about Depth Buffer and Depth Texture, but I cannot seem to understand if that applies here or not.
My setup is simple: I create a grid using several quads (40x40) which I use to snap buildings into place. Those buildings also have a base, made with quads. Every time I put one on the map, the quads overlap and they look like the picture.
As you can see, the red quad is "merging" with the floor (white quads).
How can I make sure Unity renders the red one first, and the white ones are background? Of course, I can change the red quad Y position, but that seems like the wrong way of solving this.
This is a common issue, called Z-Fighting.
Usually you can reduce it by reducing the range of “Clipping Planes” of the camera, but in your case the quads are at the same Y position, so you can’t avoid it without changing the Y position.
I don't know if it is an option for you, but if you use SpriteRenderer (Unity 2D) you don't have that problem, and you can just set “Sorting Layer” or “Order in Layer” if you want to modify the rendering order.
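A small sketch of both suggestions (the component, the field names and the layer name are assumptions, not part of the answer):

using UnityEngine;

// Sketch: tighten the camera's depth range for more z-buffer precision, and,
// if the building bases are SpriteRenderers, control draw order explicitly.
public class ZFightingMitigation : MonoBehaviour
{
    public Camera cam;                   // the camera showing the grid
    public SpriteRenderer buildingBase;  // the red base, if it is a sprite

    void Start()
    {
        // 1) A narrower near/far range spreads depth-buffer precision over a
        //    shorter distance, which reduces (but cannot eliminate) z-fighting.
        cam.nearClipPlane = 0.3f;
        cam.farClipPlane = 100f;

        // 2) With SpriteRenderers the draw order is explicit, so coplanar quads
        //    are not a problem: a higher order is drawn on top.
        buildingBase.sortingLayerName = "Buildings";   // assumed sorting layer
        buildingBase.sortingOrder = 1;                 // floor sprites stay at 0
    }
}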
I have a Projector component and I need to find the angle that the projected texture falls at, so I can exclude projection onto vertical faces.
My projector is under the mouse pointer and works OK when it is over a horizontal face:
I would like the projector to switch off on vertical faces to avoid this bad effect:
If possible, I would like to do it in the shader code, so the projected image is suppressed on vertical faces even when the cursor is located on the corner of a horizontal face and part of it "goes out" onto a vertical face.
I found this solution in C#:
// Cast a ray from the mouse into the scene and check the slope of the hit surface.
Ray MouseRay = Camera.main.ScreenPointToRay(Input.mousePosition);
RaycastHit hitInfo;
if (Physics.Raycast(MouseRay, out hitInfo)) {
    if (hitInfo.normal.y > 0) {
        // surface tilts upwards: draw the projection
    } else {
        // vertical (or downward-facing) surface: don't draw
    }
}
But it only works on curved surfaces and not, for example, on the faces of cubes.
How can I do this properly?
Normally you would use an image on a quad with TGA transparency, which rotates itself to whichever face the middle of the object is aligned to, using a ray to find the hit point and taking its normal.
Other ways of doing it would be quite tricky, perhaps using decals. If you did it in a shader it would take a long time; it's a case of problem-solving effort not being proportionate to its importance for fast development. Technically you can project a volumetric texture onto whatever object you are using: that way you can add your barred circle, projected from a point in space towards the object, as a mathematical formula. It takes a while to do; check out volumetric textures. I have written some, and in your case it needs the mouse position sent to the texture, plus the maths to add the transparent zone and the red zone to the texture. That takes all day.
It's fine to have a flat circle that flips around when you move the pointer onto a different face; it will just look like a physical card, and it's much easier to code: ten minutes instead of many hours.
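A rough sketch of that flat-circle approach (the component and field names are mine): raycast from the mouse, place a textured quad at the hit point, and rotate it flat onto whichever face was hit.

using UnityEngine;

// Sketch: a cursor quad that snaps to the surface under the mouse and lies
// flat on whichever face the ray hits.
public class CursorQuad : MonoBehaviour
{
    public Transform markerQuad;   // quad carrying the transparent circle texture

    void Update()
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            // Lift the quad slightly off the surface to avoid z-fighting.
            markerQuad.position = hit.point + hit.normal * 0.01f;

            // Unity's built-in Quad is visible from its -Z side, so rotate that
            // side to face along the surface normal; flip if your quad differs.
            markerQuad.rotation = Quaternion.FromToRotation(Vector3.back, hit.normal);
        }
    }
}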