I am building a first-person firearm simulator. When I fire, a bullet-hole prefab appears on the target board.
This is my target:
When I fire:
Hole prefabs stick to the target board like the red rounds shown.
I need to score the holes by range: measure whether each hole lands within the 4-inch, 6-inch, or 10-inch ring.
First, place a GameObject at the center of the target. Then create a float variable for each circle holding how far that circle is from the center. To measure these, my suggestion is to duplicate the center GameObject, move the copy out to each circle along the x, y, or z axis, and record the offset. Once you have those radii, compute how far the bullet hole is from the center. Finally, compare that distance against the circle radii using if statements with greater-than (>) and less-than (<) checks to find which circles the bullet lies between.
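A minimal sketch of that comparison in Unity C#, assuming a hypothetical center Transform placed at the middle of the target; the names and radii here are illustrative (the N-inch figures are treated as diameters):

using UnityEngine;

public class TargetScorer : MonoBehaviour
{
    public Transform center;               // GameObject placed at the target's center

    // Hypothetical ring radii in meters (half of the 4/6/10-inch diameters).
    const float FourInchRadius = 0.0508f;
    const float SixInchRadius  = 0.0762f;
    const float TenInchRadius  = 0.1270f;

    public string ScoreHole(Vector3 holePosition)
    {
        // Work in the target's local space so only the in-plane offset counts.
        Vector3 local = center.InverseTransformPoint(holePosition);
        float distance = new Vector2(local.x, local.y).magnitude;

        if (distance <= FourInchRadius) return "4-inch ring";
        if (distance <= SixInchRadius)  return "6-inch ring";
        if (distance <= TenInchRadius)  return "10-inch ring";
        return "outside all rings";
    }
}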
I'm attempting to write a bouncing ball game using flame in flutter. To detect collisions the onCollision and onCollisionStart methods are provided. What I had hoped is that onCollisionStart would give a precise location when two objects first hit each other. However, instead it gives a list of positions indicating where the two objects overlap after the first game-tick when this happens (i.e. onCollisionStart is called at the same time as onCollision, but is not called a second time if the same two objects are still colliding on the next tick).
This is illustrated in the attached picture. The collision points are marked with red dots. If the ball were moving downwards, then the ball would have hit the top of the rectangle and so should bounce upwards. However, if the ball were moving horizontally, then its first point of contact would have been the top left corner of the box, and the ball would bounce upwards and to the left.
If I want to work out the correct angle that the ball should fly off at, I would need some clever calculations to find the point where the ball first started hitting the other object (and those calculations would depend on the precise shape of the other object). Is there some way to work out the point at which the two objects first started colliding? Thanks
What you usually need for this is the normal of the collision, but unfortunately we don't have that for the collision detection system yet.
We do have it in the raytracing system though, so what you could do is send out a ray and see how it will bounce and then just bounce the ball in the same way.
If you don't want to use raytracing I suggest that you calculate the direction of the ball, which you might already have, but if you don't you can just store the last position and subtract it from the current position.
After that you need to find the normals of the edges where the intersection points are.
Let's say the ball direction vector is v, and the two normal vectors are n1 and n2.
Calculate the dot product (this is built into the vector_math library) of the ball direction vector and each of the normal vectors:
dot1 = v.dot(n1)
dot2 = v.dot(n2)
Compare the results of the dot products:
If dot1 < 0, n1 is facing the ball (with outward edge normals, the normal of the edge being hit points against the direction of motion, so its dot product with the direction is negative).
If dot2 < 0, n2 is facing the ball.
After that you can use v.reflect(nx) to get the direction your ball should be going in (where nx is the normal facing the ball).
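For concreteness, here is the same pick-the-facing-normal-and-reflect logic sketched in Unity-style C#, since the other snippets on this page are C# (in Flame the equivalent Dart calls are v.dot(n) and v.reflected(n) from vector_math); all names are illustrative:

using UnityEngine;

public static class BounceMath
{
    // ballDirection: current direction of travel (e.g. position - lastPosition).
    // n1, n2: outward normals of the two candidate edges at the overlap points.
    public static Vector2 Bounce(Vector2 ballDirection, Vector2 n1, Vector2 n2)
    {
        // The edge that was actually hit is the one whose outward normal
        // opposes the motion, i.e. has a negative dot product with it.
        Vector2 facingNormal = Vector2.Dot(ballDirection, n1) < 0f ? n1 : n2;

        // Reflect the incoming direction about that normal to get the bounce.
        return Vector2.Reflect(ballDirection, facingNormal);
    }
}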
Hopefully we'll have this built-in to Flame soon!
Swift beginner struggling with moving a scene node in ARKit in response to device motion.
What I want to achieve is: First detect the floor plane, then place a sphere on the floor. From that point onwards depending on the movement of the device, I want to move the sphere along its x and z axis to move it around the floor of the room. (The sphere once created needs to be in the center of the device screen and locked to that view)
So far I can detect the floor and place a node, no problem. I can use device motion to obtain the device attitude (pitch, roll and yaw), but how do I translate those values into meaningful x, y, z positions to update my node with?
Are there formulas or methods for calculating this, or is it the wrong approach? I would appreciate a link to some info or an explanation of how to go about it. I am also unsure how to ensure the node always stays at the center of the device screen.
As far as I understand, you want the following workflow:
Step 1. You create a sphere on a plane (which is already done)
Step 2. Move the sphere with respect to the camera's horizontal plane (i.e. along its x and z axis to move it around the floor of the room depending on the movement of the device)
Assuming that Step 1 is done, what you can do is:
Get the position of the camera and the sphere
This should first be called within the function that is invoked after sphere creation (be it a tapGestureRecognizer(), touchesBegan(), etc.).
You can get the sphere's position from the position property of its SCNNode, and the camera's position and/or orientation from sceneView.session.currentFrame's .camera.transform, which contains all the necessary parameters about the camera's current pose.
Move the sphere as the camera moves
Having the sphere's position in the scene and the transformation matrix of the camera, you can work out the offset between them.
After you have those, implement the appropriate logic within renderer(_:didUpdate:for:) to keep the sphere continuously locked to the camera's position.
If you are interested in the math behind it, you can start by reading more about transformation matrices, which are a big part of computer graphics and many other areas.
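The ARKit specifics aside, the "keep the node centered on screen while it stays on the floor" part boils down to a ray-plane intersection: cast the camera's forward ray and place the node where it crosses the floor plane. A sketch of that math, written in C# like the rest of this page's snippets (all names are illustrative):

using UnityEngine;

public static class FloorLock
{
    // Returns the point where the camera's forward ray meets the horizontal
    // floor plane y = floorY, i.e. where a floor-bound node must sit to stay
    // centered on screen. Returns null if the camera doesn't look at the floor.
    public static Vector3? CenterOnFloor(Vector3 camPosition, Vector3 camForward, float floorY)
    {
        // Solve camPosition.y + t * camForward.y == floorY for the ray parameter t.
        if (Mathf.Abs(camForward.y) < 1e-6f) return null;  // looking parallel to the floor
        float t = (floorY - camPosition.y) / camForward.y;
        if (t < 0f) return null;                           // floor is behind the camera

        Vector3 hit = camPosition + t * camForward;
        return new Vector3(hit.x, floorY, hit.z);
    }
}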
Hope that this will help!
I have two directions and I am trying to calculate the angle between them on a specific axis. The object from which the directions are derived is not static, so I'm struggling to get my head around the maths to work out what I need to do.
FYI, I have the start and end points that I used to calculate the directions, if they are needed.
Here's a diagram to show what I am looking for:
The above image is a top-down view in Unity and shows the angle I want.
The problem, visible in the image, is that the directions are not at the same height, so I can't use the Vector3.Angle function as it won't give me the correct angle value.
In essence, I want to know how much I would have to rotate the red line to the left (in the top view) so that it lines up with the blue.
The reason I need this is that I am trying to find a way of getting the side-to-side angles of fingers from my Leap Motion sensor.
This is a generic version of my other question:
Leap Motion - Angle of proximal bone to metacarpal (side to side movement)
It provides more specific information on the problem, should you need it, and has more specific screenshots.
UPDATE:
After re-reading my question I can see it wasn't particularly clear, so I will hopefully make it clearer here. I am trying to calculate the angle of a finger from the Leap Motion tracking data, specifically the angle of the finger relative to the metacarpal bone (the bone in the back of the hand). An easy way to demonstrate what I mean is to move your index finger side to side (i.e. towards your thumb and then away from your thumb).
I have put two diagrams below to hopefully illustrate this.
The blue line follows the metacarpal bone, which your finger lines up with in a resting position. What I want to calculate is the angle between the blue and red lines (marked with a green line). I am unable to use Vector3.Angle as that value also takes into account the bending of the finger. I need some way of 'flattening' the finger direction out, essentially ignoring the bend and just looking at the side-to-side angle. The second diagram will hopefully show what I mean.
In this diagram:
The blue line represents the actual direction of the finger (taken from the proximal bone - knuckle to first joint)
The green line represents the metacarpal bone direction (the direction to compare to)
The red line represents what I would like to 'convert' the blue line to, whilst keeping its side-to-side angle (as seen in the first hand diagram).
It is also worth mentioning that I can't just always use the x and z axes, as the hand will be moving and rotating.
I hope this clears things up, and I truly appreciate the help received thus far.
If I understand your problem correctly, you need to project your two vectors onto a plane. The vectors might not be in that plane currently (your "bent finger" problem) and thus you need to "project" them onto the plane (think of a tree casting a shadow onto the ground; the tree is the vector and the shadow is the projection onto the "plane" of the ground).
Luckily Unity3D provides a method for projection (though the math is not that hard). Vector3.ProjectOnPlane https://docs.unity3d.com/ScriptReference/Vector3.ProjectOnPlane.html
Vector3 a = ...;            // first direction (e.g. the finger/proximal bone)
Vector3 b = ...;            // second direction (e.g. the metacarpal)
Vector3 planeNormal = ...;  // normal of the plane to project onto

// Flatten both directions onto the plane, discarding the out-of-plane (bend) component.
Vector3 projectionA = Vector3.ProjectOnPlane(a, planeNormal);
Vector3 projectionB = Vector3.ProjectOnPlane(b, planeNormal);

// The angle between the flattened directions is the side-to-side angle.
float angle = Vector3.Angle(projectionA, projectionB);
What is unclear in your problem description is which plane you need to project onto. The horizontal plane? If so, planeNormal is simply the vertical. But if it is relative to some other transform, you will need to define that first.
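For the hand case in the update, one reasonable choice (an assumption on my part, since it depends on your Leap rig) is the palm normal, so the projection plane rotates with the hand:

// Hypothetical sketch: the plane of the hand follows the palm normal, so the
// projection keeps working while the hand moves and rotates. "palmNormal",
// "proximalDir" and "metacarpalDir" stand in for values from your Leap rig.
Vector3 palmNormal    = ...;  // e.g. derived from the Leap Hand's palm normal
Vector3 proximalDir   = ...;  // finger direction (knuckle to first joint)
Vector3 metacarpalDir = ...;  // metacarpal direction to compare against

float sideToSide = Vector3.Angle(
    Vector3.ProjectOnPlane(proximalDir, palmNormal),
    Vector3.ProjectOnPlane(metacarpalDir, palmNormal));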
I have tried to find information on how Unity assigns pivot points to objects, but all I keep finding is threads on how to move pivot points and that it can't be done. I am creating a 2D game with a background that is randomly generated from meshes wrapped in empty GameObjects. These objects are organically shaped, but they have a property that returns a rectangle bounding the object, so that they can be placed without overlapping. The trouble is that the algorithm assumes the pivot point is the center of the object. What I would like to know is how Unity decides where the pivot point is set, so that I can predict how much I need to move my mesh inside the parent object for the pivot to sit at the center of the bounding rectangle.
Possible fix:
Try creating the meshes at runtime and see whether Unity always places the pivot point at a certain corner, or at least in relatively the same location.
If it does, you would know where the pivot point is and could take it into account in your code, provided you also know the size of the mesh you spawn.
So I think the most general and correct answer I can come up with is that Unity assigns the pivot point to the center of the GameObject that you apply the Mesh to. The local coordinates of the mesh's vertices, depending on how you create them, might place the mesh so that its logical center is not the same as that of the empty GameObject it is attached to. What I did to fix the issue was to make a vector from the local point (0,0,0) to the center of the bounding rectangle, and translate the vertices of my mesh by that vector inverted. It wasn't perfect, but by far close enough to ensure that I won't have any overlapping meshes.
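A minimal sketch of that translation step, assuming the mesh is built from a vertex array (the names here are illustrative):

// Shift every vertex so the center of the bounding rectangle lands on the
// GameObject's origin, i.e. on the pivot.
Vector3 min = vertices[0], max = vertices[0];
foreach (Vector3 v in vertices)
{
    min = Vector3.Min(min, v);
    max = Vector3.Max(max, v);
}
Vector3 boundsCenter = (min + max) * 0.5f;

for (int i = 0; i < vertices.Length; i++)
    vertices[i] -= boundsCenter;   // the inverted center vector from above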
I have a 2D game where the users can create cars by clicking on the screen to add vertices and then drag these around as they see fit. These vertices are then used to draw a mesh. Users can also add wheels to the vertices (1 per vertex).
In the game there is also an option to have the computer generate a random car, which it does by creating eight vertices at random points inside of a unit circle.
These cars are stored and it is possible for the user to load and re-use them. The two pictures below show a square button with a computer-generated car (the green one with red wheels), which sits inside the square as intended, and a user-generated car (the white one with pink wheels), which sits outside the square.
I know that this is because the mesh of the computer-generated car draws triangles from Vector3.zero, which is the center of the object, while the user-created one only draws triangles between the vertices, which aren't necessarily placed around the center of the object (as in this example, where they are all placed above and to the right of the center).
How would I go about centering the mesh of the user created car?
I could perhaps calculate the center of the mesh and then subtract that from the position of all the vertices. Or I could subtract that position from the position of the GameObject that holds the mesh when presenting it in the gui. Or are there better alternative solutions?
Using Mesh.Bounds
// Place the car so the visual center of its mesh, not its pivot, sits at the target point.
Transform car;               // the car to position (assign before use)
Vector3 whereYouWantMe;      // desired position for the car's visual center

Bounds carBounds = car.GetComponent<MeshFilter>().mesh.bounds;  // local-space bounds
// Offset between the pivot and the world-space center of the bounds:
Vector3 offset = car.transform.position - car.transform.TransformPoint(carBounds.center);
car.transform.position = whereYouWantMe + offset;
This also gives you access to a bunch of useful fields like center, min, max and size.
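Alternatively, if you would rather fix each saved car once instead of offsetting it at placement time, here is a sketch of the vertex-recentering idea from the question (this rewrites the mesh, so only do it where that is acceptable):

Mesh mesh = car.GetComponent<MeshFilter>().mesh;
Vector3 center = mesh.bounds.center;

// Shift every vertex so the mesh's visual center coincides with its pivot.
Vector3[] verts = mesh.vertices;
for (int i = 0; i < verts.Length; i++)
    verts[i] -= center;
mesh.vertices = verts;
mesh.RecalculateBounds();   // bounds.center is now at the origin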