HLSL (Unity-specific OK, but not necessary): combining Stencil and worldspace "reverse" clipping

I've built a working surface shader (call it "wonderland") that renders as invisible unless a companion "lookingGlass" shader intersects with it from the viewpoint of the camera. Simple stencil shader arrangement.
Easy peasy.
I can add shader settings to specify a plane, or even just a minimum worldspace Z value, and use clip() to only render pixels on one side of that plane... (in other words, I could use that to trim the content that's allowed by the Stencil.)
What I want to do is use the stencil on surfaces "through the looking glass" (to reveal geometry that's inside the looking glass), and to always render those surfaces when they're on "our" side of the looking glass (so they always show on this side of the portal). E.g., if z < 0, render only where the Stencil Ref value is satisfied; if z >= 0, render regardless.
Now, in Unity I can attach two materials to the MeshRenderer component (one with a stencil shader, one with a "plane cutoff" shader) - that works fine. It's pretty awesome, actually, at least visually. But while I haven't benchmarked it yet, I suspect it will hurt framerate significantly once a number of objects with fairly complicated geometry are set up this way.
(I can also manage shader attachment in code, and only do this when I expect something to transition, but I'm really hoping to get a unified shader out of this to avoid unnecessary draw calls.)

As it turns out, what I was looking to do is impossible.
The two shaders I wish to combine are both surface shaders. While you can combine multiple surface shaders into a multipass shader, you cannot combine them such that the Stencil applies to some passes and the clip() applies to the others, and vice versa.
There are combinations that can achieve parts of this, or that can achieve the entire goal with a mix of surface and vertex (or other non-surface) shaders, but the exact combination of requirements stipulated by this question isn't supported as desired.
While this does not answer the question, the workaround in Unity is to create two materials that provide each piece of functionality. They can both exist on the item that needs both pieces, and code can otherwise manage whether one or the other or both is actively in use.
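For illustration, a minimal sketch of the "plane cutoff" half of that two-material workaround might look something like the following (the _ClipZ property and the "Custom/WonderlandClip" name are made up for this example; the stencil half is just the usual surface shader with a Stencil block reading the looking glass's Ref value):

Shader "Custom/WonderlandClip" {
    Properties {
        _MainTex ("Albedo", 2D) = "white" {}
        _ClipZ ("Minimum worldspace Z", Float) = 0
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert
        sampler2D _MainTex;
        float _ClipZ;
        struct Input {
            float2 uv_MainTex;
            float3 worldPos;   // Unity fills this in automatically for surface shaders
        };
        void surf (Input IN, inout SurfaceOutput o) {
            // Discard fragments behind the worldspace Z cutoff; the stencil material
            // handles the "through the looking glass" side.
            clip(IN.worldPos.z - _ClipZ);
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}

The stencil material and this one would then be the two entries in the MeshRenderer's material list, as described above.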
Similar solutions would be available in other packages.

Related

Unity Point-cloud to mesh with texture/color

I have a point-cloud and an RGB texture from a depth camera that fit together. I procedurally created a mesh from a selected part of the point-cloud, implementing the quickhull 3D algorithm for mesh creation.
Now, somehow I need to apply the texture that I have to that mesh. Note that there can be multiple selected parts of the point-cloud thus making multiple objects that need the texture. The texture is just a basic 720p file that should be applied to the mesh material.
Basically I have to do this: https://www.andreasjakl.com/capturing-3d-point-cloud-intel-realsense-converting-mesh-meshlab/ but inside Unity. (I'm also using a RealSense camera)
I tried with a decal shader but the result is not precise. The UV map is completely twisted from the creation process, and I'm not sure how to generate a correct one.
UV and the mesh
I only have two ideas but don't really know if they'll work/how to do them.
Try to create a correct UV and then wrap the texture around somehow
Somehow bake colors to vertices and then use vertex colors to create the desired effect.
What other things could I try?
I'm working on quite a similar problem. But in my case I just want to create a complete mesh from the point cloud. Not just a quickhull, because I don't want to lose any depth information.
I'm nearly done with the mesh algorithm (just need to do some optimizations). The challenging part now is matching the RGB camera's texture to the depth sensor's point cloud, because they of course have different viewpoints.
Intel RealSense provides an interesting whitepaper about this problem, and as far as I know the SDK corrects for these different perspectives with UV mapping and provides a red/green UV map stream for your shader.
Maybe the short report can help you out. Here's the link. I'm also very interested in what you are doing. Please keep us up to date.
Regards

Unity3d: How to stretch (or share) a shader across multiple objects

I am tinkering around with cubes trying to build variations of 'block types' (in an effort to get more familiar with Unity's abilities, shaders, editor tools etc).
I have a generic cube:
To that I want to add a material/shader, which I have done (no problem there):
Which looks good enough (for my purposes) when it's just one block, but when I stick them all together, I don't like the effect; you can see the individual boxes, and the shader (which you can't see in the still image) is actually animated water, so when it's animating it looks... pretty ugly.
(Bad/undesired)
I am trying to STRETCH or share the shader/material across all the selected blocks. See the below example (in this case, I have taken a SINGLE block and stretched it, but that's not keeping with the spirit of having individual blocks, so also not what I want).
(better/more desired)
I have thought the following might help, but they all seem overly complicated (i.e. I think I'm going about it incorrectly):
Have the individual blocks, but stretch a single plane across them and then apply the material.
Programmatically join the meshes (I have found examples of this), and then apply the material/shader to the single combined object.
Take a single block and stretch it to the dimensions needed.
Maybe (not sure if I can), but have a plane with the water material applied to it and use the blocks as masks to only display water for those blocks? Not sure how that works...
In the end I am hoping to have the following:
Individual blocks (so I can interact with them).
Shader animations/colors are shared across the shared/connected blocks.
It won't always be a 2x3 grid... it could be diagonal, or contain odd shapes of connected blocks...
(this is all in EDITOR mode).
Any thoughts on how I might approach this?
Phrases you could try searching are "converting from world space to uv space", "transforming uv coordinates", "uv math". UV is the name for coordinates in textures that a shader samples from, and if you take already existing shader code, you can do interesting things by changing the UV(s) it uses. One of those things is letting you "stretch" it.
In your 2x3 cube example you could tell each cube to treat its U value as going from 0 to 0.5 or 0.5 to 1, and its V as going from 0 to 0.33, 0.33 to 0.67, or 0.67 to 1, depending on where it sits, instead of each one going from 0 to 1. You could do this by giving the shader properties for where its UV range starts (a) and where it ends (b), and remapping the mesh's own (0,0)-(1,1) range onto a-b with a lerp.
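A minimal sketch of that remap inside a surface shader might look like the following, assuming made-up _UVStart and _UVEnd vector properties that each cube's material sets according to its position in the grid:

// Sketch only: _UVStart/_UVEnd are illustrative Vector properties, e.g. the
// middle-left cube in a 2x3 grid would get (0, 0.33) and (0.5, 0.67).
sampler2D _MainTex;
float4 _UVStart;
float4 _UVEnd;

struct Input { float2 uv_MainTex; };

void surf (Input IN, inout SurfaceOutput o) {
    // Remap this cube's own 0-1 UVs into its slice of the shared 0-1 range
    float2 sharedUV = lerp(_UVStart.xy, _UVEnd.xy, IN.uv_MainTex);
    o.Albedo = tex2D(_MainTex, sharedUV).rgb;
}

Setting those two vectors per cube (from an editor script, for instance, since this is all in editor mode) keeps the blocks as individual objects while they all sample one continuous stretch of the animated water material.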
My answer to a different question uses some similar logic to that by comparing the world position of the pixel vs a range of world positions to get a UV. The relevant shader code is:
fixed4 colorizedMapUV = (IN.worldPos.xz-_WorldSpaceRange.xy)
/ (_WorldSpaceRange.zw-_WorldSpaceRange.xy);
Another option is to only look at the world position and completely disregard any notion of where the "corners" of the UV should be. A method called "triplanar mapping" might guide you to a solution that does this.
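If you want to explore that route, a rough sketch of triplanar sampling in a surface shader could look like this (this is not the code from the linked answer; the _Tiling property is made up):

// Rough triplanar sketch: the texture is sampled on the three world-axis planes
// and blended by the surface normal, so only world position matters.
sampler2D _MainTex;
float _Tiling;   // made-up scale: world units -> texture repeats

struct Input {
    float3 worldPos;     // provided by Unity for surface shaders
    float3 worldNormal;  // valid here because surf() does not write o.Normal
};

void surf (Input IN, inout SurfaceOutput o) {
    // Blend weights from the surface normal decide which projection dominates
    float3 blend = abs(IN.worldNormal);
    blend /= (blend.x + blend.y + blend.z);
    // Sample the texture projected along each world axis
    fixed4 xProj = tex2D(_MainTex, IN.worldPos.zy * _Tiling);
    fixed4 yProj = tex2D(_MainTex, IN.worldPos.xz * _Tiling);
    fixed4 zProj = tex2D(_MainTex, IN.worldPos.xy * _Tiling);
    o.Albedo = (xProj * blend.x + yProj * blend.y + zProj * blend.z).rgb;
}

Because the UVs come purely from world position, any arrangement of connected blocks (diagonals, odd shapes) lines up without you having to compute per-block ranges.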

Why do 3D games need to separate a material into so many textures for a static object?

Perhaps the question is not phrased quite correctly (the textures might better be called channels?), although I know they will be mixed together in the shader in the end.
I know that understanding the various textures is very important, but it is also a bit hard to understand completely.
From my understanding:
diffuse - the 'real' color of an object without light involved.
light - for static objects; lighting effects are rendered (baked) into a texture beforehand.
specular - the areas that have direct, shiny reflection.
ao - how much indirect light is absorbed/occluded in the different areas of an object.
alpha - to 'shape' the object.
emissive - self-illumination.
normal - per-pixel normal vectors used when dealing with the light rays.
bump - (I don't know the exact difference from a normal map).
height - stores height (Z range) values, used to generate terrain, modify vertices, etc.
And the items below should be related to PBR material which I'm not familiar with:
translucency / cavity / metalness / roughness etc...
Please correct me if there are misunderstandings there.
In any case, my question is: why do we need to keep these textures separate for a material, rather than just rendering them all into the diffuse map directly for a static object?
Some examples would be appreciated (especially for PBR), and thank you very much.
I can bake everything into the diffuse map beforehand and apply it to my mesh, so why do I need to apply so many different textures?
Re-usability:
Most games re-use textures to reduce the size of the game. You can't do that if you combine them together. For example, when you have two similar objects but want to vary their look (an aging effect, say), you can make them share the same color (albedo) map but use different AO maps. This becomes important when there are hundreds of objects: you can use different combinations of texture maps on similar objects to create unique objects. If you had combined everything into one texture, it would be impossible to share it between similar objects that are meant to look slightly different.
Customize-able:
If you separate them, you'll be able to change how strongly each texture affects the object. For example, the slider on the metallic slot of the Standard shader. There are more of these sliders on other map slots, but they only appear once you plug a texture into the slot. You can't do this when the textures are combined into one.
Shader:
The Standard shader can't work from a single combined image, so you would have to learn to write shaders: a custom shader is required, along with a way to read each map's information out of the combined texture.
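As a rough illustration of that last point, a custom surface shader might sample the separate maps plus one packed texture something like this (the _MaskTex channel layout shown here is just one possible convention made up for the example, not a standard):

// Illustrative only: separate albedo, normal and emission maps, plus a
// hypothetical packed texture (R = metallic, G = ambient occlusion, B = smoothness).
#pragma surface surf Standard
#pragma target 3.0
sampler2D _MainTex;      // albedo
sampler2D _BumpMap;      // normal map
sampler2D _MaskTex;      // packed metallic / AO / smoothness
sampler2D _EmissionMap;  // emissive

struct Input { float2 uv_MainTex; };

void surf (Input IN, inout SurfaceOutputStandard o) {
    fixed4 mask  = tex2D(_MaskTex, IN.uv_MainTex);
    o.Albedo     = tex2D(_MainTex, IN.uv_MainTex).rgb;
    o.Normal     = UnpackNormal(tex2D(_BumpMap, IN.uv_MainTex));
    o.Metallic   = mask.r;   // each channel feeds a different input of the lighting model
    o.Occlusion  = mask.g;
    o.Smoothness = mask.b;
    o.Emission   = tex2D(_EmissionMap, IN.uv_MainTex).rgb;
}

Each of those outputs drives a different term of the lighting model, which is why baking everything into the diffuse map ahead of time loses information (a baked-in highlight can't move when the light or camera does).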
This seems like a reasonable place to start:
https://en.wikipedia.org/wiki/Texture_mapping
A texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3d model formats or material definitions, and assembled into resource bundles.
I would add to this that the shape or polygon doesn't have to belong to a 3D object as one might imagine it. If you render two triangles as a rectangle, you can run all sorts of computations and store the results in a "live" texture.
Texture mapping is a method for defining high frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974.
What this detail represents is either some agreed-upon format for a particular property (say, "roughness" within some BRDF model), which is what you will encounter if you are using some kind of engine,
or whatever you decide that detail to be, if you are writing your own engine. You can store whatever you want, however you want.
You'll notice on the link that different "mapping" techniques are mentioned, each with their own page. Each is the result of some person or people doing research and publishing a paper detailing the technique. Other people adopt it, and that's how these techniques find their way into engines.
There is no rule saying these can't be combined.

Paint on mesh for makeover

I've been struggling for weeks now with a part of the game I'm making.
As a beginner in Unity and programming, I need your experience and advice to understand how I can paint on a skinned mesh like this (from 1:10):
https://www.youtube.com/watch?v=grVEK1Bb6ZM
I've spent a lot of time trying to find a solution, with no result (decal shader onto a separate texture, painting on the mesh with alpha, projecting a texture, merging textures...). But these solutions either look bad for mobile or aren't exactly what I need.
So if someone knows a way to do that, even a little info or anything that will guide my research is very welcome.
Thank you !
The example you provide limits the range of the painting with a bitmap mask (i.e. on the eyebrows, or on the lips), so the painting is only meant for a more enjoyable UX. If this is what you need, you should probably do something like this:
You need to know where the mouse is interacting with the model. Raycasting is expensive and requires updating the colliders every frame, since your character is skinned. If you use the masking trick from your example, this dramatically reduces the amount of computation, since you could pass a subset of the mesh containing only that specific area (maybe just the face, for example)
see https://docs.unity3d.com/ScriptReference/SkinnedMeshRenderer.BakeMesh.html
and https://answers.unity.com/questions/39490/collider-on-skinned-mesh.html
(if you can't, there could be other tricks, like rendering the character's UV into a separate float buffer/texture, and sample that buffer using the mouse position)
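As a side note, a sketch of that UV-buffer trick could be a simple unlit pass that writes each pixel's interpolated UV into a float render texture (the shader name and structure here are made up for illustration):

// Hypothetical "UV writer" pass: outputs the mesh UV as a colour so a readback
// at the mouse position tells you which UV is under the cursor.
Shader "Hidden/WriteUV" {
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord.xy;
                return o;
            }

            float4 frag (v2f i) : SV_Target {
                return float4(i.uv, 0, 1); // store UV in the red/green channels
            }
            ENDCG
        }
    }
}

Sampling that texture at the mouse position then gives you the UV without needing a collider at all.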
Once you can raycast against the mesh, you can fetch the UV position of the hit:
https://docs.unity3d.com/ScriptReference/RaycastHit-textureCoord.html
Using those UVs you can write to a texture, or instance particles/objects on a render target, etc. (there are many options here).
You then need to combine that texture with the bitmap mask in the shader of the character.
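For that last step, a rough sketch of the character's surface shader (texture names are made up) could blend the painted colour over the base skin only where both the paint and the mask allow it:

// _PaintTex is the texture you painted into; _MaskTex is the bitmap mask limiting
// where make-up may appear (lips, eyebrows, ...). All names are illustrative.
sampler2D _MainTex;
sampler2D _PaintTex;
sampler2D _MaskTex;

struct Input { float2 uv_MainTex; };

void surf (Input IN, inout SurfaceOutput o) {
    fixed4 baseCol = tex2D(_MainTex,  IN.uv_MainTex);
    fixed4 paint   = tex2D(_PaintTex, IN.uv_MainTex);
    fixed  mask    = tex2D(_MaskTex,  IN.uv_MainTex).r;
    // Paint shows only where it was applied (paint.a) AND where the mask allows it
    o.Albedo = lerp(baseCol.rgb, paint.rgb, paint.a * mask);
}

Whether _PaintTex is a RenderTexture you draw into from a script or a texture you modify on the CPU is up to you; the shader only cares that the painted colour and the mask share the character's UV space.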

Shader-coding: nonlinear projection models

As I understand it, the standard projection model places an imaginary grid in front of the camera, and for each triangle in the scene, determines which 3 pixels its 3 corners project onto. The color is determined for each of these points, and the fragment shader fills in the rest using interpolation.
My question is this: is it possible to gain control over this projection model? For example, create my own custom distorted uv-grid? Or even just supply my own algorithm:
xyPixelPos_for_Vector3( Vector3 v ) {...}
I'm working in Unity3D, so I think that limits me to Cg or OpenGL.
I did once write a GLES2 shader, but I don't remember ever performing any kind of "ray hits quad" type test to resolve the pixel position of a particular 3D point in space.
I'm going to assume that you want to render 3d images based upon 3d primitives that are defined by vertices. This is not the only way to render images with OpenGL but it is the most common. The technique that you describe sounds much more like Ray-Tracing.
How OpenGL Typically Works:
I wouldn't say that OpenGL creates an imaginary grid. Instead, what it does is take the positions of each of your vertices and convert them into a different space using linear algebra (matrices).
If you want to start playing around with this, it would be best to do some reading on Matrices, to understand what the graphics card is doing.
You can easily start warping the positions of Vertices by making a vertex shader. However, there is some setup involved. See the Lighthouse tutorials (http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/) to get started with that! You will also want to read their tutorials on lighting (http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/), to create a fully functioning vertex shader which includes a lighting model.
Thankfully, once the shader is set up, you can distort your entire scene to your heart's content. Just remember to do your distortions in the right 'space'. World coordinates are very different from eye coordinates!
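To make that concrete, here is a rough Unity-flavoured Cg sketch (rather than the GLSL from the tutorials above) of a vertex shader that warps eye-space positions nonlinearly before the usual projection; the _Strength property and the particular warp are made up for illustration:

#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

sampler2D _MainTex;
float _Strength; // made-up distortion amount

struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

v2f vert (appdata_base v) {
    v2f o;
    // object space -> eye (view) space, exactly as the standard pipeline would
    float4 viewPos = mul(UNITY_MATRIX_MV, v.vertex);
    // nonlinear distortion: push vertices outward with distance from the view axis
    float r = length(viewPos.xy);
    viewPos.xy *= 1.0 + _Strength * r;
    // eye space -> clip space using the untouched projection matrix
    o.pos = mul(UNITY_MATRIX_P, viewPos);
    o.uv = v.texcoord.xy;
    return o;
}

fixed4 frag (v2f i) : SV_Target {
    return tex2D(_MainTex, i.uv);
}

Because the warp is applied per vertex, triangle edges are still rasterized as straight lines between the warped corners, so densely tessellated geometry approximates a truly nonlinear projection much better than large flat polygons.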