Cross-section shader for box bounding using Amplify Shader - unity3d
I am trying to create a shader with Amplify Shader Editor that lets a cube cut a cross section through a plane or any other mesh. I know I should be using the cube's size, rotation and position for this, but I don't know what exactly to do with them. In other words, I am new to Amplify Shader and to shader programming in general, so please don't provide shader code; I need to keep this customizable for the future, so please help me out with Amplify Shader nodes.
Currently I have this effect, but I want to make it box-bounding specific rather than based on plane normals.
I want not this effect but the box effect shown below. That one was achieved with the ray marching concept, but I want to achieve it with Amplify Shader. Kindly guide me through this.
This is what I have done so far with the Amplify nodes:
Result:
Here is the result of building the shader in Amplify Shader:
Solution:
First we'll call the green cube the "intersector" and the red cube the "intersectee".
So, as you've done with the plane, the cutout works because the back face of the intersector is shown when it is inside the intersectee, and the front face of the intersectee is shown when it is inside the intersector.
Create a shader (which is used by both cubes) and put it into two separate materials; apply an individual material to each cube. After this we can get into the actual shader node work.
First we need to make sure "Cull Mode" is off (Output Node > Cull Mode > Off). This ensures the back face is actually rendered. (This can be optimized by toggling culling depending on where the cube is relative to the intersector.)
Next we need to get the surface point in object space:
Most of the variables will be defined in script. The rotation matrix is used to rotate a point. It is inverted because the rotation matrix rotates the cube into world space; inverting it therefore rotates a world-space point into object space. We also get "_Cubepos", the position of the cube to intersect with (e.g. it would be the intersector's position if the shader is on the intersectee). This is subtracted from the world position because the rotation matrix rotates around the origin; after the rotation it is added back so the point is in the correct place.
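To make the node math concrete, here is a minimal C# sketch of the same transform, useful for checking the graph CPU-side. It is an illustration only: the class and method names are mine, and the shader performs this per fragment with the nodes above.
using UnityEngine;
public static class CubeSpaceUtil
{
    // Rotate a world-space point into the intersecting cube's object space.
    // The cube's rotation maps object space into world space, so applying
    // its inverse brings a world-space point back into object space.
    public static Vector3 WorldToCubeSpace(Vector3 worldPos, Vector3 cubePos, Quaternion cubeRotation)
    {
        // Subtract the cube position first, because the rotation is around the origin.
        Vector3 centered = worldPos - cubePos;
        // Inverse rotation: world space -> cube object space.
        Vector3 rotated = Quaternion.Inverse(cubeRotation) * centered;
        // Add the position back so the point sits in the correct place.
        return rotated + cubePos;
    }
}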
This leads to the next section, where "_CubeExtent" is subtracted from and added to "_Cubepos" to find the minimum and maximum extents.
Unfortunately, Amplify Shader has no good way to check whether a vector lies within two vectors, so we have to break it into components. (I encourage you to learn how to write shaders.) Each Compare With Range node returns 1 if the point in object space is within the extents for that axis. If one returns 0, the final multiply node makes sure the output is 0.
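For reference, the three Compare With Range nodes and the final multiply are equivalent to this C# sketch (names are hypothetical; the shader evaluates this per fragment):
using UnityEngine;
public static class BoxContainment
{
    // Mirrors the per-axis Compare With Range nodes: returns 1 when the
    // object-space point lies within [min, max] on every axis, else 0.
    public static float InsideBox(Vector3 p, Vector3 cubePos, Vector3 cubeExtent)
    {
        Vector3 min = cubePos - cubeExtent; // minimum extents
        Vector3 max = cubePos + cubeExtent; // maximum extents
        float x = (p.x >= min.x && p.x <= max.x) ? 1f : 0f;
        float y = (p.y >= min.y && p.y <= max.y) ? 1f : 0f;
        float z = (p.z >= min.z && p.z <= max.z) ? 1f : 0f;
        // The final multiply: any axis outside its range forces the result to 0.
        return x * y * z;
    }
}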
Finally, we get to the last part of the shader. "_IsIntersector" is set from script to 1 or 0 depending on whether the cube the material is on is used to intersect or is an intersectee. Depending on the scenario, we set the opacity mask to 1 or 0 here.
After this we have to define the script to attach to each object. Add a new script and type the following:
using UnityEngine;

[ExecuteInEditMode]
public class SetVar : MonoBehaviour
{
    // Transform of the opposite cube
    public Transform intersectingCube;

    // Is this an intersector or an intersectee?
    public bool isIntersector;

    // Material of this object
    public Material mat;

    // Cache the material before the first frame
    void Start()
    {
        // Get material
        mat = GetComponent<Renderer>().material;
    }

    // Called after this object renders; pushes the latest values to the shader
    void OnRenderObject()
    {
        // Calculate rotation matrix
        Matrix4x4 m = Matrix4x4.TRS(-intersectingCube.position, intersectingCube.rotation, Vector3.one);

        // Set shader variables
        mat.SetMatrix("RotationMatrix", m);
        mat.SetVector("_Cubepos", intersectingCube.position);
        mat.SetVector("_CubeExtent", intersectingCube.localScale / 2.0f);
        mat.SetFloat("_IsIntersector", isIntersector ? 0 : 1);
    }
}
Then we can set the correct inspector values depending on whether the cube is an intersector or an intersectee. Here is an example for the intersector cube:
Make sure IsIntersector is ticked or unticked depending on whether the cube is an intersector.
Here is a link to the shader: http://paste.amplify.pt/view/raw/4b248bc3. Note that doing this for an arbitrary mesh is a very complicated operation - too complicated for nodes. Learn about shader code and use a raycasting algorithm to determine whether the point is inside the mesh.
Alternatively, for any convex shape, you could calculate each bounding plane and then, using the method you already have, check that the world-space position passes the test for every plane. For a cube there would be six planes; this is a bit slower than the method above (which is optimized for a cube).
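If you go that route, the test could look like this C# sketch (the class and method names are mine, and building the six face planes from the cube's transform is left out):
using UnityEngine;
public static class ConvexContainment
{
    // Returns true if the point lies on the inner side of every plane.
    // For a cube you would pass its six face planes; the same test works
    // for any convex shape at the cost of one dot product per plane.
    public static bool InsideConvex(Vector3 point, Plane[] planes)
    {
        foreach (Plane plane in planes)
        {
            // Plane.GetSide returns true on the side the normal points toward,
            // so construct the planes with their normals facing inward.
            if (!plane.GetSide(point))
                return false;
        }
        return true;
    }
}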
Related
Do you know how to do an inverted mask/shader, or any other way to see only part of an object within a sphere boundary in Unity?
Cut-out sphere. What I'm looking for is exactly this behaviour, but I need it to be compatible with URP. As you may see, I'm just starting with shaders; could you give me any guidance on how to update this to URP? I have looked up stencil shaders / cut-outs / buffers and have replicated the portal-style ones, but I need the object to be able to grow like a tree while anything outside the sphere does not show.
You can do it with the Sphere Mask node: link its Out to the Alpha property (if your Surface Type is set to Opaque, check Alpha Clipping in your Graph settings).
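If the sphere boundary should follow a scene object, a small driver script can feed its center and radius to the material. This is only a sketch: "_SphereCenter" and "_SphereRadius" stand for whatever property names you expose on the Sphere Mask node's Center and Radius inputs.
using UnityEngine;
[ExecuteInEditMode]
public class SphereMaskDriver : MonoBehaviour
{
    public Transform sphereBoundary; // the sphere that defines the visible region
    public Material mat;

    void Update()
    {
        // Assumed property names; match them to the ones exposed in your graph.
        mat.SetVector("_SphereCenter", sphereBoundary.position);
        mat.SetFloat("_SphereRadius", sphereBoundary.localScale.x / 2f);
    }
}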
(UNITY) Plane not rotating to normal vector of three points?
I am trying to get a stretched-out cube (which we can call a plane for the sake of discussion) to orient itself to the normal vector of a plane described by three points. I wrote a script to find the normal of three points, and then used transform.LookAt to align the planes. However, this script is not working at all how it is intended to, and despite my best efforts I cannot figure out why. Drastic movements of the individual points hardly affect the plane's rotation. Using the existing points in the script, the object's rotation should be 0,0,0 in the inspector; however, it is always off by a few degrees and, as I said, it does not align itself when I move the points around. This is the script. I can also post photos showing the behavior or share a small Unity package.
First of all, Transform.LookAt takes a position as its parameter, not a direction! It rotates the transform so the forward vector points at worldPosition, which doesn't sound like what you are trying to achieve. If you want your object to look with its forward vector in the given normal direction (assuming you are calculating the normal correctly), you could rather use Quaternion.LookRotation: transform.rotation = Quaternion.LookRotation(doNormal(cpit, cmit, ctht)); Alternatively, you can simply assign the corresponding vector directly, e.g. transform.forward = doNormal(cpit, cmit, ctht); or transform.up = doNormal(cpit, cmit, ctht); depending on your needs.
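Put together, a minimal sketch could look like this; the cross-product helper stands in for the question's doNormal, whose implementation isn't shown:
using UnityEngine;
public class AlignToNormal : MonoBehaviour
{
    public Transform a, b, c; // the three points defining the plane

    // Normal of the plane through three points via the cross product.
    static Vector3 PlaneNormal(Vector3 p0, Vector3 p1, Vector3 p2)
    {
        return Vector3.Cross(p1 - p0, p2 - p0).normalized;
    }

    void Update()
    {
        Vector3 normal = PlaneNormal(a.position, b.position, c.position);
        // Point the up vector along the normal; use transform.forward or
        // Quaternion.LookRotation instead, depending on the mesh's axes.
        transform.up = normal;
    }
}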
Unity, Relative dimensions of gameobjects
I saw some documents saying that there is no concept of length in Unity; all you can do to determine the dimensions of GameObjects is use Scale. Then how can I set the relative dimensions between GameObjects? For example, the dimensions of a 1:1:1 plane are obviously different from those of a 1:1:1 sphere. How can I know the relative ratio between the plane and the sphere - how many units of the sphere's diameter equal 1 unit of the plane's length? Otherwise, how can I know whether I have set everything in the right proportion?
Well, what you say is right, but consider that objects can have a collider, and in the case of a sphere you can obtain the radius with SphereCollider.radius. Also consider Bounds.extents, which is relative to the object's bounding box. Again, taking the sphere, you can obtain the diameter with:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Bounds bounds = mesh.bounds;
float diameter = bounds.extents.x * 2;
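You can also read Renderer.bounds, the world-space bounding box, which already accounts for scale; that makes it easy to compare two primitives directly. A small sketch (component and field names are mine); for the built-in primitives at 1,1,1 scale this should print roughly 10 x 0 x 10 for the plane and 1 x 1 x 1 for the sphere:
using UnityEngine;
public class CompareSizes : MonoBehaviour
{
    public Renderer plane;
    public Renderer sphere;

    void Start()
    {
        // Renderer.bounds is in world space, so it accounts for scale;
        // MeshFilter.mesh.bounds is in local (unscaled) space.
        Debug.Log("Plane size:  " + plane.bounds.size);
        Debug.Log("Sphere size: " + sphere.bounds.size);
    }
}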
All GameObjects in Unity have a Transform component, which determines their position, rotation and scale. Most 3D objects also have a MeshFilter component, which contains a reference to the Mesh object. The Mesh contains the actual shape of the object, for example the six faces of a cube or the faces of a sphere. Unity provides a handful of built-in objects (cube, sphere, cylinder, plane, quad), but this is just a starter kit. Most of those built-in objects are 1 unit in size, but that is purely because the vertices have been placed in those positions (so you need to scale by 2 to get a 2-unit size). There is no limit on positions within a mesh: you can have a tiny object or a whole terrain object, and they can differ massively in size while both keeping their scale at 1. You should try to learn a 3D modelling application to create arbitrary objects. Alternatively, try installing a plugin called ProBuilder, which used to be quite expensive and is now free (since it was acquired by Unity); it enables in-editor modelling. Scales are best kept at one, but it's good to have the option to scale - this way you can reuse the sphere mesh or the cube mesh at different scales (less waste of memory).
In most Unity applications you set the scale to some arbitrary convention, typically 1 m = 1 unit; all things that are 1 unit tall are 1 m tall. If you import a mesh from a modelling program that is the wrong size, scale it to exactly one meter (use a standard 1,1,1 cube as reference). Then stick it inside an empty GameObject to "convert" it into your game's proper scale: if you now scale the empty object's y axis to 2, the object is 2 meters tall. A better solution is to keep every object's highest parent in the hierarchy at 1,1,1 scale. Using the 1,1,1 reference cube, scale your object to a size that looks proper. For example, if I had a model of a person, I'd scale it to be roughly twice as tall as the cube. Then drag it into an empty object of 1,1,1 scale; this way, everything at your scene's "normal" size is 1,1,1, and if you want to double the size of something you make it 2,2,2. In practice this is much more useful than the first option: if you change an object's position by 1 unit, it now moves by what looks like a proper 1 m as well. This process also lets you change where the "bottom" of an object is: you can change the position of the object inside the empty, creating an offset. This is useful for making models stand right on the ground with position y = 0.
Query mesh for its interpolated light probe density
We are working on AI for our game, currently the detection system. How can I read the light probe interpolation data off a mesh? If the player is in shadow, it should take the AI a longer time and a closer distance to detect them.
Edit: https://docs.unity3d.com/ScriptReference/LightProbes.GetInterpolatedProbe.html
OK, so the best way is to use GetInterpolatedProbe. You call it like:
SphericalHarmonicsL2 probe;
LightProbes.GetInterpolatedProbe(Target.position, renderer, out probe);
Make sure the position is not inside the mesh, since realtime shadows will affect the result. Then you can query the SphericalHarmonicsL2 like so:
Vector3[] directions = { new Vector3(0, -1, 0.0f) };
var colors = new Color[1];
probe.Evaluate(directions, colors);
In the example above you get the color at the point from the upward direction. This example creates garbage; make sure to reuse the arrays in a real implementation.
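Assembled into a component, with the arrays reused to avoid the garbage mentioned above, a sketch might look like this (field names are mine):
using UnityEngine;
using UnityEngine.Rendering;
public class ProbeBrightnessSampler : MonoBehaviour
{
    public Transform target;        // point to sample, e.g. the player
    public Renderer targetRenderer; // renderer passed to the probe lookup

    // Reused across frames so Evaluate does not allocate garbage.
    readonly Vector3[] directions = { new Vector3(0f, -1f, 0f) };
    readonly Color[] colors = new Color[1];

    void Update()
    {
        SphericalHarmonicsL2 probe;
        LightProbes.GetInterpolatedProbe(target.position, targetRenderer, out probe);
        probe.Evaluate(directions, colors);
        // colors[0] now holds the incoming light at the target; feed its
        // brightness into the AI detection timing/distance logic.
    }
}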
How Does Unity Assign Pivot Point Location on Script Generated Meshes
I have tried to find information on how Unity assigns pivot points to objects, but all I keep finding is threads about how to move pivot points and that it can't be done. I am creating a 2D game with a background that is randomly created from meshes wrapped in empty GameObjects. These objects are organically shaped, but they have a property that returns a rectangle bounding the object so that they can be placed without overlapping. The trouble is that the algorithm assumes the pivot point is going to be the center of the object. What I would like to know is how Unity decides where the pivot point will be, so that I can predict how much I need to move my mesh inside the parent object for the pivot point to be in the center of the bounding rectangle.
Possible fix: try creating the meshes during runtime and see if Unity always places the pivot point at a certain corner, or at least at relatively the same location. If it does, you would know where the pivot point is and could take it into account in your code, provided you also know the size of the mesh you spawn.
So I think the most general and correct answer I can come up with is that Unity places the pivot point at the origin of the GameObject you apply the Mesh to. Depending on how you create them, the local coordinates of the mesh's vertices might place the mesh so that its logical center is not the same as that of the empty GameObject it is attached to. What I did to fix the issue was to make a vector from the local point (0,0,0) to the center of the bounding rectangle and translate the vertices of my mesh by that vector inverted. It wasn't perfect, but it was by far close enough to ensure that I won't have any overlapping meshes.
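As an illustration of that fix, here is a sketch that recenters a mesh on the center of its bounding box (the helper name is mine; the original code isn't shown in the answer):
using UnityEngine;
public static class MeshRecenter
{
    // Shift the vertices so the mesh's bounding-box center lands on the
    // GameObject's local origin, which is where Unity puts the pivot.
    public static void CenterOnPivot(Mesh mesh)
    {
        Vector3 offset = mesh.bounds.center; // vector from (0,0,0) to the bounds center
        Vector3[] vertices = mesh.vertices;
        for (int i = 0; i < vertices.Length; i++)
        {
            vertices[i] -= offset; // translate by the inverted vector
        }
        mesh.vertices = vertices;
        mesh.RecalculateBounds();
    }
}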