Atlas UV map vs. Local UV map - unity3d

I want glow on my sprites using the UV coordinates, but the problem is that if the sprite comes from an atlas created by Unity's sprite packer, the UVs aren't normalized from 0 to 1; they run between two arbitrary values. How do I normalize the UV data for a single sprite that resides in an atlas? Do I need to pass additional information into the shader, or should I already have everything necessary to do this? The image below describes the situation:
The hand to the left is a sprite not from an atlas. The hand on the right is a sprite from an atlas. I want the right hand to look the same as the hand on the left.
I am not that familiar with shaders yet, so I am reliant on ShaderForge. I am using the following ShaderForge layout:

You probably already know this, but the fundamental problem is the output of your "UV Coords" node. The other nodes in your shader are expecting normalized UVs ranging from 0 to 1, but that's not what you're getting when you use the texture atlas.
I can think of two ways to solve that. They're both viable, so I'd recommend trying whichever one fits more cleanly into your workflow.
Add a second UV channel
It's easy to treat UV0 as the only UV channel, but for certain techniques it can be helpful to add multiple UV coords to each vertex.
As an example, lightmapping is a popular feature where each model has its own individual textures (diffuse/normal/etc), but each scene has a pre-baked lightmap texture that is shared between multiple models -- sort of like an atlas for lighting information. UVs for these will not match, so the lightmap UVs are stored on a second channel (UV1).
In a similar fashion, you could use UV0 for atlas UVs and UV1 for local UVs. That gives you clean input on the [0,1] range that you can use for that multiply effect.
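In code form (rather than ShaderForge nodes), the shader side would look roughly like the sketch below. It assumes the sprite is rendered with a mesh that carries its atlas UVs in TEXCOORD0 and clean per-sprite 0-1 UVs in TEXCOORD1; the shader and property names are just placeholders.

Shader "Sketch/SpriteLocalUVGlow" {
    Properties {
        _MainTex ("Atlas Texture", 2D) = "white" {}
    }
    SubShader {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            struct appdata {
                float4 vertex  : POSITION;
                float2 uvAtlas : TEXCOORD0;  // atlas UVs (arbitrary sub-range of 0..1)
                float2 uvLocal : TEXCOORD1;  // per-sprite UVs, always 0..1
            };

            struct v2f {
                float4 pos     : SV_POSITION;
                float2 uvAtlas : TEXCOORD0;
                float2 uvLocal : TEXCOORD1;
            };

            v2f vert (appdata v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uvAtlas = v.uvAtlas;
                o.uvLocal = v.uvLocal;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target {
                fixed4 col = tex2D(_MainTex, i.uvAtlas);  // colour comes from the atlas
                col.rgb *= i.uvLocal.y;                   // glow driven by the clean 0..1 UVs
                return col;                               // (stand-in for the multiply in the graph)
            }
            ENDCG
        }
    }
}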
Add material params
You could scale and offset the UVs so that they are normalized.
Before rendering, find the min and max values for the mesh's UV coords
Pass those values in as material parameters
Add shader nodes to scale and offset the input UV, such that the range is normalized
For example, you could subtract min from each UV (the offset), then divide by max - min (the scale).
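In code form, those steps boil down to something like this (a sketch; _UVMin and _UVMax are hypothetical float4 properties you'd fill from a script before rendering, e.g. from the min and max of Sprite.uv):

// Hypothetical material params, set from a script with the sprite's UV bounds.
float4 _UVMin;   // xy = smallest atlas UV of the sprite's rect
float4 _UVMax;   // xy = largest atlas UV of the sprite's rect

float2 NormalizeAtlasUV(float2 atlasUV) {
    // Offset so the rect starts at 0, then scale so it ends at 1.
    return (atlasUV - _UVMin.xy) / (_UVMax.xy - _UVMin.xy);
}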


Why is Shader Graph Transform node affected by UV mapping?

I've had an issue with normal maps not behaving correctly in my custom shader and finally managed to find the cause. It turns out it was the way the UVs of my objects were mapped. In UV0 I stored a mapping to a color palette texture; those UVs were all scrambled together, as the only thing that mattered was that they landed on a pixel with the correct color. In UV1 I stored the traditional UV unwrap, which I used to apply the normal map. To get the normal map I used a set-up like this:
I'm doing my own light calculations so I need to transform the normal from tangent space to world space before using it.
This approach was causing two issues - weird artifacts and the normals being "stuck" to the object:
The sphere on the right is upside down and if you look at the normals they are also upside down. The artifacts are on both spheres, but from this perspective they are only visible on the right one.
What seems to be the cause is the way I used UV0 to map the object to a color palette. It somehow affects the tangent to world space transformation done by the Transform node (I know it's this node because removing it makes the artifacts disappear). Also, swapping the UV channels so that the traditional unwrap is in UV0 and the palette mapping is in UV1 fixes the issue:
There are no artifacts and the normals aren't stuck to the object.
So why is the transform node affected by UV mapping? I thought it does the transformation based on the geometry of the object. And if it uses UV maps, why is there no dropdown to select which UV it's going to use?
The tangent space (aka texture space) is partially defined by the geometry (the normal) but also by the uv coordinates (the tangent).
Just as the normal is derived from the vertex position, the tangent is inferred from the UVs. It's essentially an object-space vector (xyz) that points along the U axis (horizontally in your UV space) and is perpendicular to the normal.
The normal map is a vector in texture space, and the channels of the bitmap can be seen as offsets along each of the tangent-space basis vectors defined at the vertex.
half3 TangentToObjectSpace(half3 input, half3 nml, half3 tgt, half3 btg) {
    // Each channel of the tangent-space normal scales one basis vector:
    // red along the tangent, green along the bitangent, blue along the normal.
    return tgt * input.x + btg * input.y + nml * input.z;
}
As you can see, input.x (the normal map's red channel, which defines the horizontal part of the vector) modulates the tangent. We need another vector for the green channel, which we can generate using a cross product: given the two existing vectors, you get a new one perpendicular to both. The catch: for flipped UV shells this generated vector points in the wrong direction. In Unity, tangents are actually vector4s, and the last component (w) stores the flip of the binormal.
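In code, that reconstruction usually looks something like this (a sketch that builds on the function above; the helper name is just for illustration, and the normal map sample is assumed to be already unpacked, e.g. with UnpackNormal):

half3 ObjectSpaceNormal(half3 normalMapSample, half3 nml, half4 tangent) {
    // Cross product gives the missing basis vector; tangent.w flips it for mirrored UV shells.
    half3 btg = cross(nml, tangent.xyz) * tangent.w;
    return TangentToObjectSpace(normalMapSample, nml, tangent.xyz, btg);
}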
Unity uses the tangent space of the first UV coordinate by default, which is why you fixed your issue by swapping them :)
It's very likely that the Transform node depends on the UV selection only insofar as it depends on the output of the Texture Sample node. So it's probably a subtle difference in the sample node's output that makes the Transform node behave this way, and not some hidden UV setting on the Transform node.

Unity Point-cloud to mesh with texture/color

I have a point cloud and an RGB texture that fit together, from a depth camera. I procedurally created a mesh from a selected part of the point cloud by implementing the quickhull 3D algorithm.
Now I somehow need to apply the texture I have to that mesh. Note that there can be multiple selected parts of the point cloud, and thus multiple objects that need the texture. The texture is just a basic 720p file that should be applied to the mesh material.
Basically I have to do this: https://www.andreasjakl.com/capturing-3d-point-cloud-intel-realsense-converting-mesh-meshlab/ but inside Unity. (I'm also using a RealSense camera)
I tried with a decal shader but the result is not precise. The UV map is completely twisted from the creation process, and I'm not sure how to generate a correct one.
UV and the mesh
I only have two ideas but don't really know if they'll work/how to do them.
Try to create a correct UV map and then wrap the texture around somehow (a rough sketch of this idea is below)
Somehow bake colors to vertices and then use vertex colors to create the desired effect.
What other things could I try?
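For the first idea, this is roughly the direction I was thinking of: project each vertex into the RGB camera's image plane in the shader and use that as the UV. It's only a sketch; _ColorCamVP and _ColorTex are placeholder properties I'd have to fill from a script with the color camera's matrices and image, and it ignores lens distortion.

Shader "Sketch/ProjectColorCamera" {
    Properties {
        _ColorTex ("RGB Camera Image", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _ColorTex;
            // Placeholder: projection * worldToCamera of the color camera, set from a script.
            float4x4 _ColorCamVP;

            struct v2f {
                float4 pos    : SV_POSITION;
                float4 camPos : TEXCOORD0;   // vertex position in the color camera's clip space
            };

            v2f vert (float4 vertex : POSITION) {
                v2f o;
                o.pos = UnityObjectToClipPos(vertex);
                float4 world = mul(unity_ObjectToWorld, vertex);
                o.camPos = mul(_ColorCamVP, world);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target {
                // Perspective divide, then remap NDC (-1..1) to UV (0..1).
                float2 uv = (i.camPos.xy / i.camPos.w) * 0.5 + 0.5;
                // Depending on the convention, a flip may be needed: uv.y = 1 - uv.y;
                return tex2D(_ColorTex, uv);
            }
            ENDCG
        }
    }
}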
I'm working on quite a similar problem, but in my case I want to create a complete mesh from the point cloud, not just a quickhull, because I don't want to lose any depth information.
I'm nearly done with the mesh algorithm (just need to do some optimizations). What's quite challenging now is matching the RGB camera's texture to the depth sensor's point cloud, because they of course have different viewports.
Intel RealSense provides an interesting whitepaper about this problem and as far as I know the SDK corrects these different perspectives with uv mapping and provides a red/green uv map stream for your shader.
Maybe the short report can help you out. Here's the link. I'm also very interested in what you are doing. Please keep us up to date.
Regards

How to texture mesh? Shader vs. generated texture

I managed to create a map divided into chunks, each one holding a mesh generated using Perlin noise and so on: the basic procedural map method shown in multiple tutorials.
At this point I took a look at surface shaders and managed to write one which fades multiple textures depending on the vertex heights.
This gives me a map which is colored smoothly.
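For reference, the fade in my shader works roughly like this (a simplified sketch, not my full shader; the texture and property names are placeholders):

Shader "Sketch/HeightBlend" {
    Properties {
        _LowTex      ("Low Texture",  2D)    = "white" {}
        _HighTex     ("High Texture", 2D)    = "white" {}
        _BlendHeight ("Blend Height", Float) = 5
        _BlendRange  ("Blend Range",  Float) = 2
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        CGPROGRAM
        #pragma surface surf Standard
        #pragma target 3.0

        sampler2D _LowTex;
        sampler2D _HighTex;
        float _BlendHeight;
        float _BlendRange;

        struct Input {
            float2 uv_LowTex;    // shared UVs for both textures
            float3 worldPos;     // built-in: world position of the fragment
        };

        void surf (Input IN, inout SurfaceOutputStandard o) {
            fixed4 low  = tex2D(_LowTex,  IN.uv_LowTex);
            fixed4 high = tex2D(_HighTex, IN.uv_LowTex);
            // 0 below (_BlendHeight - _BlendRange), 1 above (_BlendHeight + _BlendRange)
            float t = smoothstep(_BlendHeight - _BlendRange, _BlendHeight + _BlendRange, IN.worldPos.y);
            o.Albedo = lerp(low.rgb, high.rgb, t);
        }
        ENDCG
    }
    FallBack "Diffuse"
}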
In the tutorials I watched they seem to use different methods to texture a mesh. In this one, for example, a texture is generated for each mesh. This texture holds a different color depending on the noise value. The texture is applied to the mesh, and after that the mesh vertices are displaced depending on the z-value.
This results in a map with sharper borders between the colors, giving the whole thing a different look. I believe there is a way to create smoother transitions between the tile colors by fading them like I do in my shader.
My question is simply: what are the pros and cons of those methods? Let's call them "shader" and "texture map". I am lost right now, not knowing in which direction to go.

Unity Repeat UV Coordinates On Quadtree Shaderforge

As you can see in the image, on larger tiles (n > 1) the texture should repeat to match the current rect size. I don't know how I can achieve this!
FYI, I'm getting the tile texture ID from the alpha value of the vertex color.
Here's the shader I'm using:
[UPDATE]
Thanks for clarifying the UV coordinates; unfortunately that doesn't answer my question. Take a look at the following picture...
Your shader is fine; it's actually the vertex UVs that are the problem:
So for all rectangles the UV coordinates are as follows: [0, 0] / [0, rect.height] / [rect.width, 0] / [rect.width, rect.height]. So the UVs go beyond 1.
Your shader is designed to support the standard UV space, in which case you should replace rect.width and rect.height with 1.
By using UV coords greater than one, you're effectively asking for texels outside of the specified texture. When used with a texture atlas, that means you're asking for texels outside of the specified tile -- in this case, those happen to be white, and that's what you're seeing in the rendered output.
Tiling with an atlas texture
Updating because I missed an important detail: you want a tiling material.
Usually, UVs interpolate linearly:
For tiling, you essentially want more of a "sawtooth" output:
For a non-atlas texture, you can adjust material scale/wrap settings and call it done. For an atlas texture, it's possible but you'll end up with a shader and/or geometry that aren't quite standard.
The "most standard" solution would be if your larger quads are on a separate mesh from the smaller ones:
Add a float material param named uv_scale or some such
Add a Multiply node that scales incoming UVs by uv_scale
Pass output from that into a Frac node
Pass output from that into the UV Tile node
Pseudocode is roughly: uv = frac(uv * uv_scale)
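In code, that node chain boils down to something like this (uv_scale matches the hypothetical parameter name above):

// Hypothetical material param: how many repetitions across the quad.
float _uv_scale;

// Sawtooth the incoming UVs: 0..1 repeated _uv_scale times,
// ready to be fed into the UV Tile lookup.
float2 TileUV(float2 uv) {
    return frac(uv * _uv_scale);
}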
If you need all of your quads to be in the same mesh, you end up needing non-standard geometry:
Change your UVs again (going back to rect.width and rect.height)
Add a Frac node before the UV Tile node
This is a simpler shader change, but has the downside that your geometry will no longer be cleanly supported in other shaders.
Thanks rutter!
I've implemented your solution in my shader and now it works perfectly!
So for everyone looking for this, here is the shader I'm using now.
Cheers, M

When should I use uv1 instead of uv0 on Unity Standard shader

I'm wondering when I should use the different UV Set options in the Unity Standard shader.
I know that UV stands for the texture coordinates, but when would I need to switch to uv1 instead of uv0? I cannot see any immediate difference when I switch between them, and the Unity docs don't seem to explain much.
Perhaps somebody could shed some light on when different sets need to be used.
Just for the sake of completeness: UV coordinates are assigned in model creation tools (Maya, Max, Blender, etc.), not in Unity. So Unity just gives you access to a limited number of UV maps that you may or may not have exported from the aforementioned tools. That's why there isn't more documentation on UV sets: they are an optional additional feature of an imported model.
There are scenarios in which more than one UV map is necessary, or more elegant/performant. You can have shaders that use different UV coordinates for different maps. One example would be a shader that adds a detail map to a model, which requires a set of UV coordinates independent of the base UV map used for the other maps (albedo, normals, etc.).
You are not limited to switching between uv0 and uv1. You can use them in combination as well, if you create a shader that makes use of both maps. I think this is even more common than switching between different UV maps.
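As a small illustration, a surface shader can read both sets at once. The sketch below uses uv0 for the base map and uv1 for a detail map; the texture names are placeholders, and the uv2_ prefix is how a surface shader's Input struct requests the second UV set:

Shader "Sketch/BaseWithDetailUV1" {
    Properties {
        _MainTex   ("Base (uv0)",   2D) = "white" {}
        _DetailTex ("Detail (uv1)", 2D) = "gray"  {}
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        CGPROGRAM
        #pragma surface surf Standard
        #pragma target 3.0

        sampler2D _MainTex;
        sampler2D _DetailTex;

        struct Input {
            float2 uv_MainTex;      // first UV set (uv0)
            float2 uv2_DetailTex;   // the uv2 prefix pulls from the second UV set (uv1)
        };

        void surf (Input IN, inout SurfaceOutputStandard o) {
            fixed4 baseCol   = tex2D(_MainTex,   IN.uv_MainTex);
            fixed4 detailCol = tex2D(_DetailTex, IN.uv2_DetailTex);
            o.Albedo = baseCol.rgb * detailCol.rgb * 2;  // classic detail-map combine
        }
        ENDCG
    }
    FallBack "Diffuse"
}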