We are working on the AI for our game, currently the detection system. How can I read the light probe interpolation data at a mesh's position? If the player is in shadow, it should take longer and require a closer distance for the AI to detect them.
edit: https://docs.unity3d.com/ScriptReference/LightProbes.GetInterpolatedProbe.html
OK, so the best way is to use GetInterpolatedProbe. You call it like this:
SphericalHarmonicsL2 probe;
LightProbes.GetInterpolatedProbe(Target.position, renderer, out probe);
Make sure the position is not inside the mesh, since realtime shadows will affect the result.
Then you can query the SphericalHarmonicsL2 like this:
Vector3[] directions = {
new Vector3(0, -1, 0.0f)
};
var colors = new Color[1];
probe.Evaluate(directions, colors);
In the example above you get the color at the point evaluated for the given direction. Note that this example allocates garbage; make sure to reuse the arrays in real code.
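Putting that together for the detection use case, here is a minimal sketch, assuming a hypothetical DetectionSensor component, an upward sample direction and a simple brightness-to-range mapping (none of which are from the original answer):

using UnityEngine;
using UnityEngine.Rendering;

// Sketch: scale the AI's detection range by how brightly lit the target is,
// sampled from the light probes at the target's position.
public class DetectionSensor : MonoBehaviour
{
    public Transform target;              // the player
    public Renderer targetRenderer;       // used for probe interpolation
    public float litDetectionRange = 20f;
    public float shadowDetectionRange = 5f;

    // Reused buffers so Evaluate does not allocate garbage every call.
    static readonly Vector3[] directions = { Vector3.up };
    static readonly Color[] results = new Color[1];

    public float CurrentDetectionRange()
    {
        SphericalHarmonicsL2 probe;
        LightProbes.GetInterpolatedProbe(target.position, targetRenderer, out probe);

        probe.Evaluate(directions, results);
        float brightness = Mathf.Clamp01(results[0].grayscale);

        // Darker target => shorter detection range (this mapping is an assumption).
        return Mathf.Lerp(shadowDetectionRange, litDetectionRange, brightness);
    }
}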
I am trying to get a stretched out cube (which we can call a plane for the sake of discussion) to orient itself to the normal vector of a plane described by three points. I wrote a script to find the normal of three points, and then used transform.LookAt to have the planes align. However, I am finding that this script is not working at all how it is intended to and despite my best efforts I can not figure out why.
Drastic movements of the individual points hardly affect the plane's rotation.
The rotation of the object, when using the existing points in the script, should be 0,0,0 in the inspector. However, it is always off by a few degrees and, as I said, it does not align itself when I move the points around.
This is the script. I can also post photos showing the behavior or share a small unity package
First of all, Transform.LookAt takes a position as a parameter, not a direction!
And then it
Rotates the transform so the forward vector points at worldPosition.
Doesn't sound like what you are trying to achieve.
If you want your object to look with its forward vector in the given normal direction (assuming you are calculating the normal correctly) then you could rather use Quaternion.LookRotation
transform.rotation = Quaternion.LookRotation(doNormal(cpit, cmit, ctht));
Alternatively, you can also simply assign the corresponding vector directly, e.g.
transform.forward = doNormal(cpit, cmit, ctht);
or
transform.up = doNormal(cpit, cmit, ctht);
depending on your needs
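If it helps, here is a minimal sketch of the whole thing, assuming doNormal is just the cross product of two edge vectors (the class and field names here are made up):

using UnityEngine;

// Sketch: orient an object so its up axis matches the normal of the plane
// defined by three points.
public class AlignToPlane : MonoBehaviour
{
    public Transform a, b, c;

    static Vector3 PlaneNormal(Vector3 p0, Vector3 p1, Vector3 p2)
    {
        // Cross product of two edge vectors gives the plane normal.
        return Vector3.Cross(p1 - p0, p2 - p0).normalized;
    }

    void Update()
    {
        Vector3 normal = PlaneNormal(a.position, b.position, c.position);
        // Use transform.forward = normal or Quaternion.LookRotation(normal)
        // instead if the forward axis should follow the normal.
        transform.up = normal;
    }
}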
We are currently trying to lay mesh colliders onto our edges as shown in the pictures. The problem is that the meshes sometimes seem to be 2D instead of 3D (shown in Picture 2 and Picture 3), which makes them unselectable from certain camera angles. Sometimes the meshes even seem to disappear through some parts of the edge (Picture 1).
Turning convex on for the colliders makes them way easier to select, but we don't really want to do that because it makes it really unclear which edge you are currently selecting.
We are creating our meshes through BakeMesh from our previously created Edges, as shown below:
LineRenderer lineRenderer = gameEdge.GetComponent<LineRenderer>();
MeshCollider meshCollider = gameEdge.AddComponent<MeshCollider>();
Mesh mesh = new Mesh();
lineRenderer.BakeMesh(mesh, Camera.main, false);
meshCollider.sharedMesh = mesh;
meshCollider.convex = false;
Edit:
We used this https://github.com/mattatz/unity-tubular to generate tube meshes around our edges, working pretty well now!
The mesh generated by the line renderer is actually 2D. I think your best bet is to update the mesh orientation to face the camera, so that the mesh is always facing the camera wherever you are looking from. That way you'll always be able to click on it.
You have 2 options:
1. Pass the camera (with its new position) in all the time, i.e. in Update. Be careful to only run the code that bakes the mesh there, not the Mesh mesh = new Mesh() line, because creating a new mesh every Update will keep allocating memory over time. I recommend making Mesh mesh; a class field and initializing it only once (mesh = new Mesh();), so that you reuse the same instance to update the mesh instead of creating a new one each frame (see the sketch at the end of this answer).
2. If you are concerned about efficiency, you can track when your camera is moving, so that when it stops moving you pass in the camera along with its new position. For this you would need to implement the "camera stopped moving" check yourself, so that whenever the main camera stops moving the mesh is re-baked to face it.
This makes sense because it's easier for the user to click on things while the camera is still, so it can be presumed that the user will try to click on the lines while not moving.
There is a third option I did not mention due to feasibility: wrapping your line renderer with primitive colliders, using the points of your line as a starting point for the procedural collider-wrapping logic. However, you would need to code all of that yourself, which might take a while.
On the other hand, making it a convex collider is, as far as I checked, not feasible, as the behaviour and shape of the collider itself change during mesh cooking by the MeshCollider component. Check if this might be of help.
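A minimal sketch of option 1, assuming the component sits on the same GameObject as the LineRenderer and MeshCollider (the class and field names are made up):

using UnityEngine;

// Sketch: keep one Mesh instance alive and re-bake the LineRenderer into it
// each frame so the baked collider always faces the current camera.
[RequireComponent(typeof(LineRenderer), typeof(MeshCollider))]
public class EdgeColliderUpdater : MonoBehaviour
{
    LineRenderer lineRenderer;
    MeshCollider meshCollider;
    Mesh bakedMesh;   // reused every frame, never re-created in Update

    void Awake()
    {
        lineRenderer = GetComponent<LineRenderer>();
        meshCollider = GetComponent<MeshCollider>();
        bakedMesh = new Mesh();
    }

    void Update()
    {
        bakedMesh.Clear();
        lineRenderer.BakeMesh(bakedMesh, Camera.main, false);
        // Re-assign so the physics engine picks up the new geometry.
        meshCollider.sharedMesh = bakedMesh;
    }
}

For option 2 you would gate the body of Update behind a check that the camera's position and rotation have stopped changing since the previous frame.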
I've been looking for a solution to this for quite a while now (meaning several days) and I haven't found anything yet. Maybe I'm thinking about it wrong and there isn't a way, but let's try!
I'm recording hand-data on a Hololens (the Unity Hololens Input Simulation for now). This essentially gives me one float AnimationCurve for each hand joint for each transform.position.x to z and rotation.x to w. Now my goal is to put these curves into an AnimationClip and add it to an AnimatorController (via an AnimatorOverrideController) that animates a hand rig and replay the recordings. Everything so far works!
However, the recorded hand-data from the Hololens is in world scale, not in local scale. (which makes sense, since you usually want absolute coordinates when you want to know where the hand is.) But to animate the hand, it seems I'm only able to set local coordinates, which I don't have.
Example:
clip.SetCurve("", typeof(Transform), "localPosition.x", curve.PositionX);
Here, the clip takes the x-coordinates from some hand joint and puts them into localPosition.x of the corresponding hand rig joint. The problem: curve.PositionX is world-space (absolute coordinates), but localPosition.x expects local-space values (coordinates relative to its parent).
I can't simply change "localPosition.x" to "position.x", like so:
clip.SetCurve("", typeof(Transform), "position.x", curve.PositionX);
even though the Transform class has both properties and position is the object's world-space position. I'm not sure why this doesn't work, but it gives me the following error:
Cannot bind generic curve on Transform component, only position, rotation and scale curve are supported.
I'm aware that it doesn't make much sense to use absolute coordinates for an animation, but I simply don't have anything else.
Does anyone have an approach how I can deal with this in a sensible, not-too-cumbersome way? It seems I have all the important parts, I just can't figure out how to put them together. Thanks so much already! :)
From my basic understanding, it seems like you are using the input animation recording service provided by MRTK. Unfortunately, MRTK does not provide a localPosition version of the curve data. However, you can modify the data from the recordingBuffer after the InputRecordingService stops recording.
So this is a method worth trying: in the handJointCurves dictionary property of the recordingBuffer field, a set of pose curves is stored for each joint. Then, based on this table (Joint pose curves), subtract the position value of the None key from the position value of every other joint at each keyframe, so that you obtain positions relative to the None joint.
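As a rough illustration of that subtraction, here is a plain AnimationCurve sketch; it only handles translation, assumes you sample the None joint's curve at each keyframe time, and leaves out MRTK's actual buffer types:

using UnityEngine;

public static class CurveUtils
{
    // Sketch: make a world-space position curve relative to a root ("None") joint
    // curve by subtracting the root value at every keyframe time.
    public static AnimationCurve MakeRelative(AnimationCurve jointWorld, AnimationCurve rootWorld)
    {
        var local = new AnimationCurve();
        foreach (Keyframe key in jointWorld.keys)
        {
            local.AddKey(key.time, key.value - rootWorld.Evaluate(key.time));
        }
        return local;
    }
}

Note that this ignores rotation; a fully correct local-space conversion would also need the parent joint's rotation inverted per frame.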
I am trying to create a shader through Amplify Shader for a cube to cut through a plane or any mesh as a cross-section effect. I know that I should be using size, rotation and position for that, but what exactly to do with them I don't know. Yes, that means I am new to Amplify Shader and to shader programming in general, so please don't provide shader code, as I need to keep it customizable for the future; please help me out with Amplify Shader nodes.
Currently I have this effect, but I want to make it box-bounding specific rather than based on plane normals.
I don't want this effect but the box effect shown below. That was achieved with a ray marching approach, but I want to achieve it with Amplify Shader. Kindly guide me through this.
This is what I have done so far with the amplify nodes
Result:
Here is the result of doing the shader using "Amplify Shader":
Solution:
First we'll call the green cube the "intersector" and the red cube the "intersectee".
So, as you've done with the plane, the cutout works because the back face of the intersector is shown when it is inside the intersectee, and the front face of the intersectee is shown when it is inside the intersector.
Create a shader (which is used by both cubes) and put it into two separate materials; apply an individual material to each cube. After this we can get into the actual shader node work.
First we need to make sure "Cull Mode" is off (Output Node > Cull Mode > Off). This ensures the back face is actually rendered. (This could be optimized by deciding based on where the cube is relative to the intersector.)
Next we need to get the surface point in object space:
Most of the variables will be defined in script. The rotation matrix is used to rotate a point; it is inverted because the rotation matrix rotates the cube into world space, so inverting it rotates a world-space point into object space. We also get a "_Cubepos", which is the position of the cube to intersect with (e.g. it would be the intersector if the shader is on the intersectee). This is subtracted from the world position because the rotation matrix rotates around the origin; afterwards it is added back so the point ends up in the correct position.
This leads to the next section, where "_CubeExtent" is added to and subtracted from "_Cubepos" to find the minimum and maximum extents.
Unfortunately, Amplify Shader has no good way to check whether a vector lies between two other vectors, so we have to break it into components. (I encourage you to learn how to write shaders.) Each Compare With Range node returns 1 if the object-space point is within the extents for that axis; if any of them returns 0, the final multiply node makes sure the output is 0.
Finally, we get to the last part of the shader. "_IsIntersector" is set from script to 1 or 0 depending on whether the cube the shader is on is used to intersect or is an intersectee. Depending on that, we set the opacity mask to 1 or 0.
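For reference, the component-wise test those nodes implement amounts to something like this C# sketch (the names here are illustrative, not part of the shader graph):

using UnityEngine;

public static class BoxTest
{
    // Returns 1 when the object-space point lies inside the intersecting cube's
    // min/max extents on every axis, otherwise 0 (this mirrors multiplying the
    // three Compare With Range outputs together).
    public static float InsideBox(Vector3 point, Vector3 cubePos, Vector3 cubeExtent)
    {
        Vector3 min = cubePos - cubeExtent;
        Vector3 max = cubePos + cubeExtent;

        bool inside = point.x >= min.x && point.x <= max.x
                   && point.y >= min.y && point.y <= max.y
                   && point.z >= min.z && point.z <= max.z;

        return inside ? 1f : 0f;
    }
}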
After this we have to define the script to attach to each object. Add a new script and type the following in:
using UnityEngine;

[ExecuteInEditMode]
public class SetVar : MonoBehaviour
{
    // Transform of the opposite cube
    public Transform intersectingCube;
    // Is this an intersector or an intersectee
    public bool isIntersector;
    // Material of this object
    public Material mat;

    // Start is called before the first frame update
    void Start()
    {
        // Get the material
        mat = GetComponent<Renderer>().material;
    }

    // Called after the camera has rendered the scene
    void OnRenderObject()
    {
        // Calculate the rotation matrix
        Matrix4x4 m = Matrix4x4.TRS(-intersectingCube.position, intersectingCube.rotation, Vector3.one);

        // Set the shader variables
        mat.SetMatrix("RotationMatrix", m);
        mat.SetVector("_Cubepos", intersectingCube.position);
        mat.SetVector("_CubeExtent", intersectingCube.localScale / 2.0f);
        mat.SetFloat("_IsIntersector", (isIntersector) ? 0 : 1);
    }
}
Then we can set the correct inspector values, depending on whether the cube is an intersector or an intersectee. Here is an example for the intersector cube:
Make sure to have IsIntersector ticked depending on whether the cube is an intersector or not.
Here is a link to the shader: http://paste.amplify.pt/view/raw/4b248bc3. Also, doing this for an arbitrary mesh is a very complicated operation, too complicated for nodes. Learn about shader code and use a ray-casting algorithm to determine whether the point is inside the mesh.
Alternatively, for any convex shape you could calculate each of its planes and then, using the method you already have, check that the world-space point is on the correct side of every plane. For a cube there would be 6 planes; however, this is a bit slower than the method above (which is optimized for a cube).
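As a rough C# illustration of that plane-based test (in the shader it would again be node math; Plane here is just Unity's helper struct):

using UnityEngine;

public static class ConvexTest
{
    // A point is inside a convex shape if it is behind every outward-facing plane.
    public static bool Inside(Vector3 point, Plane[] outwardPlanes)
    {
        foreach (Plane plane in outwardPlanes)
        {
            // A positive distance means the point is outside this plane.
            if (plane.GetDistanceToPoint(point) > 0f)
                return false;
        }
        return true;
    }
}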
I have been trying to develop a 3D game for a long time now. I went through this tutorial and found that I didn't know enough to actually make the game.
I am currently trying to add a texture to the icosahedron (in the "Look at Basic Drawing" section) he used in the tutorial, but I cannot get the texture onto more than one side. The other sides are completely invisible for no logical reason (they showed up perfectly until I added the texture).
Here are my main questions:
How do I make the texture show up properly without using a million vertices and colors to mimic the results?
How can I move the object based on a variable that I can set in other functions?
Try to think of your icosahedron as a low-poly sphere. I suppose Lamarche's icosahedron has its center at (0,0,0). Look at this tutorial; it is written for DirectX, but it explains the general principle of sphere texture mapping: http://www.mvps.org/directx/articles/spheremap.htm. I used it in my project and it works great. You move the 3D object by applying various transformation matrices. You should have something like this:
glPushMatrix();
glTranslatef(x, y, z);   // x, y, z = the offset variable you set elsewhere
// draw the icosahedron here
glPopMatrix();
Here is my code snippet of how I did texCoords for a semisphere shape, based on the tutorial mentioned above
GLfloat *ellipsoidTexCrds;
Vector3D *ellipsoidNorms;
int numVerts = *numEllipsoidVerticesHandle;
ellipsoidTexCrds = calloc(numVerts * 2, sizeof(GLfloat));
ellipsoidNorms = *ellipsoidNormalsHandle;
for(int i = 0, j = 0; i < numVerts * 2; i+=2, j++)
{
    ellipsoidTexCrds[i] = asin(ellipsoidNorms[j].x)/M_PI + 0.5;
    ellipsoidTexCrds[i+1] = asin(ellipsoidNorms[j].y)/M_PI + 0.5;
}
I wrote this about a year and a half ago, but I remember that I calculated my vertex normals as being equal to the normalized vertices. That works because when you have a spherical shape centered at (0,0,0), the vertices basically describe rays from the center of the sphere. Normalize them, and you've got yourself vertex normals.
And by the way if you're planning to use a 3D engine on the iPhone, use Ogre3D, it's really fast.
hope this helps :)