Unity - Avoid quad clipping or set rendering order

I am using Unity 5 to develop a game. I'm still learning, so this may be a dumb question. I have read about Depth Buffer and Depth Texture, but I cannot seem to understand if that applies here or not.
My setup is simple: I create a grid out of several quads (40x40) to which I snap buildings. Those buildings also have a base made of quads. Every time I put one on the map, the quads overlap and look like the picture.
As you can see, the red quad is "merging" with the floor (white quads).
How can I make sure Unity renders the red quad on top, with the white ones behind it as background? Of course, I can change the red quad's Y position, but that seems like the wrong way of solving this.

This is a common issue called Z-fighting.
Usually you can reduce it by narrowing the range of the camera's "Clipping Planes", which increases depth-buffer precision, but in your case the quads sit at exactly the same Y position, so you can't avoid it without changing the Y position.
I don't know if it is an option for you, but if you use a SpriteRenderer (Unity 2D) you don't have that problem, and you can simply set "Sorting Layer" or "Order in Layer" if you want to modify the rendering order.
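If that works for you, here is a minimal sketch of setting the order from code, assuming the floor tiles and the building base are switched to SpriteRenderers ("floorRenderer" is a hypothetical reference you would assign in the Inspector):

    using UnityEngine;

    // Gives the floor and the building base explicit draw order so the
    // base always renders on top. Attach to the building object;
    // "floorRenderer" is a hypothetical reference assigned in the Inspector.
    public class SortOrderSetup : MonoBehaviour
    {
        public SpriteRenderer floorRenderer;

        void Start()
        {
            SpriteRenderer baseRenderer = GetComponent<SpriteRenderer>();
            floorRenderer.sortingOrder = 0; // drawn first (background)
            baseRenderer.sortingOrder = 1;  // drawn last (on top)
        }
    }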

Related

Scaling Object turns the textures white (Unity3D)

I'm trying to figure out why my object's textures keep turning white once I scale the object down to 1% (or less) of its normal size.
I can manipulate the objects in real time with my fingers, and there is a threshold where all the textures (except a few) turn completely ghost white, as shown below:
https://imgur.com/wMykeFw
Any input on how to fix this is appreciated!
One potential cause of this issue is that certain shaders miscalculate how to render textures when an object's scale is set to a very low value.
To render the asset this small while using the same shader, re-import the mesh with a smaller scale factor (in the mesh import settings); that may fix it.
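If you want to automate that re-import, a rough sketch using an AssetPostprocessor follows (editor-only code, and the asset-name check is hypothetical):

    using UnityEditor;

    // Editor-only sketch (place in an "Editor" folder): bakes a smaller
    // scale into the mesh at import time so the shader never sees an
    // extreme transform scale. The asset-name check is hypothetical.
    class SmallScaleModelImporter : AssetPostprocessor
    {
        void OnPreprocessModel()
        {
            if (assetPath.Contains("MyTinyModel")) // hypothetical asset name
            {
                ModelImporter importer = (ModelImporter)assetImporter;
                importer.globalScale = 0.01f; // bake the 1% scale into the mesh
            }
        }
    }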
Select ARCamera, then Camera. In the Inspector, select the camera's clipping planes and increase the far value (you want to find the minimum clipping distance that works, to save on memory, so start at 20000 and work your way backwards until it stops working, then back up a notch).
Next (still in the camera's Inspector), set Rendering Path to Legacy Vertex Lit.
This should clear it up for you.
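If you would rather apply those two camera settings from a script than in the Inspector, a sketch (assuming the camera is tagged "MainCamera"):

    using UnityEngine;

    // Rough equivalent of the Inspector changes above; tune the far
    // plane downwards from 20000 as described.
    public class CameraSetup : MonoBehaviour
    {
        void Start()
        {
            Camera cam = Camera.main; // assumes the camera is tagged "MainCamera"
            cam.farClipPlane = 20000f;
            cam.renderingPath = RenderingPath.VertexLit; // "Legacy Vertex Lit"
        }
    }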

Paint on mesh for makeover

I've been struggling for weeks on a part of the game I'm making.
As a beginner in Unity and programming, I need your experience and advice to understand how I can paint on a skinned mesh like this (from 1:10):
https://www.youtube.com/watch?v=grVEK1Bb6ZM
I've spent a lot of time looking for a solution, with no result (a decal shader with a separate texture, painting on the mesh with alpha, projecting a texture, merging textures...). But these solutions either perform badly on mobile or aren't exactly what I need.
So if someone knows a way to do this, even a little info or anything that will drive my research is very welcome.
Thank you!
The example you provide limits the range of the painting with a bitmap mask (i.e. on the eyebrows, or on the lips), so the painting is only meant for a more enjoyable UX. If this is what you need, you should probably do something like the following:
You need to know where the mouse is interacting with the model. Raycasting is expensive and requires updating the colliders every frame, since your character is skinned. If you use the masking trick from your example, this dramatically reduces the amount of computation, since you could pass a subset of the mesh containing only that specific area (maybe just the face, for example); see the sketch after the links below.
see https://docs.unity3d.com/ScriptReference/SkinnedMeshRenderer.BakeMesh.html
and https://answers.unity.com/questions/39490/collider-on-skinned-mesh.html
(if you can't, there could be other tricks, like rendering the character's UV into a separate float buffer/texture, and sample that buffer using the mouse position)
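As a rough sketch of the bake-then-raycast idea from the links above (the per-frame bake is the expensive part that the masking trick mitigates):

    using UnityEngine;

    // Bakes the current skinned pose into a static mesh each frame so a
    // MeshCollider can be raycast against it. This is expensive; restrict
    // it to the paintable area (e.g. just the face) if you can.
    public class SkinnedColliderUpdater : MonoBehaviour
    {
        SkinnedMeshRenderer skin;
        MeshCollider meshCollider;
        Mesh baked;

        void Start()
        {
            skin = GetComponent<SkinnedMeshRenderer>();
            meshCollider = GetComponent<MeshCollider>();
            baked = new Mesh();
        }

        void Update()
        {
            baked.Clear();
            skin.BakeMesh(baked);
            meshCollider.sharedMesh = null;  // force the collider to refresh
            meshCollider.sharedMesh = baked;
        }
    }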
Once you can raycast the mesh, you can fetch the UV position of the hit:
https://docs.unity3d.com/ScriptReference/RaycastHit-textureCoord.html
Using those UVs you can write to a texture, or instance particles/objects on a render target, etc. (there are many options here).
You then need to combine that texture with the bitmap mask in the character's shader. A sketch of the whole loop:
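This is a minimal version, assuming a Read/Write-enabled Texture2D that your character's shader samples and combines with the mask ("paintTexture" and the single-pixel brush are placeholders):

    using UnityEngine;

    // On click: raycast the (baked) MeshCollider, read the UV at the hit,
    // and stamp a pixel into the paint texture. The texture must be
    // Read/Write enabled; "paintTexture" and "brushColor" are placeholders.
    public class MeshPainter : MonoBehaviour
    {
        public Texture2D paintTexture;
        public Color brushColor = Color.red;

        void Update()
        {
            if (!Input.GetMouseButton(0)) return;

            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                Vector2 uv = hit.textureCoord; // only valid on a MeshCollider
                int x = (int)(uv.x * paintTexture.width);
                int y = (int)(uv.y * paintTexture.height);
                paintTexture.SetPixel(x, y, brushColor);
                paintTexture.Apply(); // upload the change to the GPU
            }
        }
    }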

HLSL (Unity-specific ok, not necessary) combining Stencil and worldspace "reverse" clipping

I've built a working surface shader (call it "wonderland") that renders as invisible unless a companion "lookingGlass" shader intersects with it from the viewpoint of the camera. Simple stencil shader arrangement.
Easy peasy.
I can add shader settings to specify a plane, or even just a minimum worldspace Z value, and use clip() to only render pixels on one side of that plane... (in other words, I could use that to trim the content that's allowed by the Stencil.)
What I want to do is use the stencil on surfaces "through the looking glass" (to reveal geometry that's inside the looking glass), and to always render those surfaces when they're on "our" side of the looking glass (to always show them if they're on this side of the portal). E.g., if z<0, render only if the Stencil Ref value is satisfied; if z>=0, render regardless.
Now, in Unity I can attach two materials to the MeshRenderer component (one with a stencil shader, one with a "plane cutoff" shader), and that works fine. It's pretty awesome, actually, at least visually. But while I haven't benchmarked it yet, I instinctively believe it will massively impact the framerate once a number of objects with fairly complicated geometry are set up with this arrangement.
(I can also manage shader attachment in code, and only do this when I expect something to transition, but I'm really hoping to get a unified shader out of this to avoid unnecessary draw calls.)
As it turns out, what I was looking to do is impossible.
The two shaders I wish to combine are both surface shaders. While you can combine multiple surface shaders into a multipass shader, you cannot combine them such that a Stencil applies to some passes and a clip() applies to the others, and vice versa.
There are combinations that can achieve parts of this, or that can achieve the entire goal with a mix of surface and vert/frag (or other non-surface) shaders, but the combination of requirements stipulated by this question isn't supported as desired.
While this does not answer the question, the workaround in Unity is to create two materials that each provide one piece of the functionality. They can both exist on the item that needs both pieces, and code can manage whether one, the other, or both are actively in use; see the sketch below.
Similar solutions would be available in other packages.
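For the code-managed part, a sketch of swapping the two materials on a Renderer (both material fields are hypothetical, one per surface shader):

    using UnityEngine;

    // Swaps between the stencil material alone and stencil + cutoff, so
    // the extra pass only runs while an object is actually transitioning.
    // Both material fields are hypothetical, one per surface shader.
    public class PortalMaterialSwitcher : MonoBehaviour
    {
        public Material stencilMaterial;
        public Material cutoffMaterial;
        Renderer rend;

        void Awake()
        {
            rend = GetComponent<Renderer>();
        }

        public void SetTransitioning(bool transitioning)
        {
            rend.materials = transitioning
                ? new[] { stencilMaterial, cutoffMaterial }
                : new[] { stencilMaterial };
        }
    }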

How can I make part of a mesh transparent?

Is it possible to make some % of my mesh transparent?
For example, imagine I have a mesh that is a house. At first the mesh is transparent. As a person clicks on the house, it becomes opaque along the Y-axis so it looks like it's being built up.
Any ideas how to approach this problem?
"a house. At first the mesh is transparent. As a person clicks on the house, it becomes opaque along the Y-axis so it looks like it's being built up"
Literally in answer to your question, in general:
I would approach this by making a shader that is sensitive to the global Y value of the point in question. It would use that value, over time, to decide on the alpha at a given point.
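As a sketch of the time-driven half of that: a script can animate a world-space Y threshold and feed it to the material, while the shader (not shown) compares each fragment's world Y against it to set alpha. The "_RevealHeight" property name is an assumption:

    using UnityEngine;

    // Raises a world-space Y threshold when the house is clicked; a
    // companion shader (not shown) would compare each fragment's world Y
    // against "_RevealHeight" (a hypothetical property) to set alpha.
    public class BuildUpReveal : MonoBehaviour
    {
        public float buildSpeed = 1f;  // world units revealed per second
        public float roofHeight = 10f; // hypothetical top of the house
        Material mat;
        float revealHeight;
        bool building;

        void Start()
        {
            mat = GetComponent<Renderer>().material;
        }

        void OnMouseDown() // requires a collider on the house
        {
            building = true;
        }

        void Update()
        {
            if (!building) return;
            revealHeight = Mathf.Min(revealHeight + buildSpeed * Time.deltaTime, roofHeight);
            mat.SetFloat("_RevealHeight", revealHeight);
        }
    }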
alternately
Imagine a second texture of the house, call it GUIDE, which is a monochrome image of the house: black at the ground, slowly becoming pure white at the top. Additionally, you could paint it any way you want; for example, the window frames and quoining could be black, and so on. Now, the shader would use the GUIDE texture as a key to know at what time each area should become visible.
That would actually look quite incredible and offer amazing control. You could fade in different parts in whatever order you wish.
It would be beyond the scope of an answer here to actually engineer this. But I believe the key point is that, unfortunately, everything you describe really has to be done in the shader.
Note that if you just want "a clean hole", look into approaches using a depth mask shader. And indeed https://www.youtube.com/watch?v=s3RKGAj9Uzk
For 2D, consider this: http://answers.unity3d.com/questions/449034/see-through-hole-via-shaders-on-a-2d-plane.html
In other cases you may literally want to cut a sharp hole in the mesh, which is a "whole" different technology: https://gamedev.stackexchange.com/questions/72978/shader-that-cuts-hole-through-all-geometry
If you want this effect, http://answers.unity3d.com/questions/622089/how-can-i-render-a-semi-transparent-texture-with-a.html (see the Mario image), that's totally different again: it's nothing more than a gray image with a hole!

OpenGL: optimizing render of quad particles

I'm rendering particles in a 2D game. Each particle is a quad (2 triangles). How can I make the drawing as fast as possible? All the particles share the same texture; I'm only changing their positions.
Now I'm using a call to glVertexPointer and glDrawArrays for each particle. So I'm sending 4 vertices each time to the GPU.
Is there any other approach that could be faster?
I'm using OpenGL ES 1.1 (iPhone)
Thanks!
Every draw call you make (glDrawArrays) is expensive. Doing this once per particle is DEFINITELY way too often. All your particles can be drawn with a single draw call: just set up a big array of all the triangle verts and another big array with the texture coords, and call glVertexPointer/glDrawArrays once. That's the power of glVertexPointer: arbitrary geometry of the same type in one call. :)
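A sketch of that layout (array-building only, shown in C# for illustration; the actual glVertexPointer/glDrawArrays calls stay in your GL ES code), using GL_TRIANGLES so each quad contributes six vertices and no index tricks are needed:

    // Array-building sketch: six vertices per quad (two triangles) for
    // every particle, all packed into one array so a single
    // glVertexPointer/glDrawArrays pair can draw the whole batch.
    static class ParticleBatch
    {
        public static float[] BuildVertexArray(float[] xs, float[] ys, float halfSize)
        {
            float[] verts = new float[xs.Length * 6 * 2]; // 6 verts, x/y each
            int i = 0;
            for (int p = 0; p < xs.Length; p++)
            {
                float l = xs[p] - halfSize, r = xs[p] + halfSize;
                float b = ys[p] - halfSize, t = ys[p] + halfSize;
                // triangle 1: bottom-left, bottom-right, top-left
                verts[i++] = l; verts[i++] = b;
                verts[i++] = r; verts[i++] = b;
                verts[i++] = l; verts[i++] = t;
                // triangle 2: bottom-right, top-right, top-left
                verts[i++] = r; verts[i++] = b;
                verts[i++] = r; verts[i++] = t;
                verts[i++] = l; verts[i++] = t;
            }
            return verts;
        }
    }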
For what you're doing, you should also look into point sprites (GL_POINTS), which also function as tiny textured quads. They're 2D only, so you can't map your texture into the Z axis, but if your particles are just 2D quads of the same texture over and over, point sprites will likely do exactly what you want.
There's a way to do all of that in one draw routine. I THINK it's by adding an extra vertex after each quad, which is the same as the previous vertex, but I could be wrong.
EDIT: After looking into it a bit, it looks like you need two in between; essentially one after, and one before. It does add up to quite a few extra vertices, but I know from experience that it makes a HUGE positive difference on the iPhone to do it all in one draw operation (we were drawing text from a texture, so essentially the same thing).
EDIT2: Also note, I'm referring to using GL_TRIANGLE_STRIP; if you were using GL_TRIANGLES instead, you wouldn't need the extra vertices... except then you'd be adding the same number of extra vertices anyway, by repeating two of them for each second triangle. A sketch of the strip layout:
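This illustrates the degenerate-strip layout the edits describe (array-building only, in C# for illustration): between consecutive quads, the last vertex of one and the first vertex of the next are each emitted twice, producing zero-area triangles the GPU skips.

    using System.Collections.Generic;

    struct Quad { public float blX, blY, brX, brY, tlX, tlY, trX, trY; }

    static class StripBuilder
    {
        // One GL_TRIANGLE_STRIP vertex list for many quads. Between quads,
        // repeat the previous quad's last vertex ("one after") and the next
        // quad's first vertex ("one before"); the resulting zero-area
        // degenerate triangles are discarded by the GPU, so the whole
        // batch still draws in a single call.
        public static List<float> Build(List<Quad> quads)
        {
            var strip = new List<float>();
            for (int p = 0; p < quads.Count; p++)
            {
                Quad q = quads[p];
                if (p > 0)
                {
                    Quad prev = quads[p - 1];
                    strip.Add(prev.trX); strip.Add(prev.trY); // one after
                    strip.Add(q.blX);    strip.Add(q.blY);    // one before
                }
                // strip order for one quad: bl, br, tl, tr
                strip.Add(q.blX); strip.Add(q.blY);
                strip.Add(q.brX); strip.Add(q.brY);
                strip.Add(q.tlX); strip.Add(q.tlY);
                strip.Add(q.trX); strip.Add(q.trY);
            }
            return strip;
        }
    }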