Unity3D: Arrange multiple gameobjects on top of a cube without falling

This is for a board game. Each tile (a cube) can have up to 6 tokens on top. Based on some game logic, I move a token to the top center of a cube (which works fine). When another token comes onto the same tile surface, I want any tokens already on the surface to move away and give room to the incoming token.
Right now, since these tokens have rigidbodies, they get pushed away, but I don't have much control over where those tokens end up. The main problem is that I need to use the surface area of the cube to determine the exact point a token should move to; I don't want it to fall off or be pushed beyond the cube's bounds.
I was thinking of placing 6 empty gameobjects as children of the cube, marking the possible positions a token can occupy on the surface. But then the child's local coordinates and the token's coordinates are different.
What kinds of approaches are available?

One approach would be to create a script for the block, like the one below, containing:
an index of how many tokens are currently present on it,
an array of Vector3 positions (think of a 3×3 grid), and
a distance-from-center value to multiply the positions by.
When adding a token to the cube, you pass in the cube tile's position (it is the center of the cube tile, so I called it cubeCenter) and the token to be added.
First check whether the index is -1, meaning there are no tokens yet; in that case place the token in the center (cubeCenter) and increment the index to 0.
Otherwise, change the position of the previous token to pos[index] * distanceFromCubeCenter + cubeCenter, increment the index, and set the new token's position to cubeCenter.
For example, suppose the cube tile is at (3,0,3) and there is already a token in the center, so the index is 0. If another token arrives, the old token moves to (-1*0.3+3, 0, -1*0.3+3) = (2.7, 0, 2.7), placing it like the bottom-left token in your image, and the new token takes the center. Similarly, if yet another token arrives, the token in the center moves to (3, 0, 2.7), like the bottom-center token in your image.
The 3×3 grid looks like this; in this case, only 5 of the positions are used:
| (-1,0,1) |(0,0,1) |(1,0,1) |
| (-1,0,0) |(0,0,0) |(1,0,0) |
| (-1,0,-1)|(0,0,-1)|(1,0,-1)|
The code is in 3D since I saw the blue z-axis; you will have to change it depending on how you are storing/moving the tokens.
int index = -1;
GameObject previousTokenOnThisCube; // the token currently occupying the center

public Vector3[] pos; // (-1,0,-1), (0,0,-1), (1,0,0), (-1,0,1), (0,0,1)
public float distanceFromCubeCenter = 0.3f; // spacing from the center of the cube

public void AddToCube(Vector3 cubeCenter, GameObject token)
{
    if (index == -1) // only for the first token on the cube
    {
        token.transform.position = cubeCenter;
        index++; // increment index to 0
    }
    else
    {
        // move the previous token out of the center to the next free slot
        previousTokenOnThisCube.transform.position = pos[index] * distanceFromCubeCenter + cubeCenter;
        index++;
        token.transform.position = cubeCenter;
    }
    previousTokenOnThisCube = token; // remember the token now in the center
}
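If you prefer the empty-children idea from the question instead: a child Transform's position property is already expressed in world space, so no manual local-to-world conversion is needed (and Transform.TransformPoint is available if you ever store raw local offsets). Below is a minimal sketch of that approach; the slots array name and the setup of six empty children in the inspector are assumptions, not part of the question's project:

using UnityEngine;

public class TileSlots : MonoBehaviour
{
    // Six empty children of the cube, placed on its top face in the editor.
    public Transform[] slots;
    private int nextSlot = 0;

    public void AddToken(GameObject token)
    {
        if (nextSlot >= slots.Length) return; // the tile is full

        // Transform.position is world space even for a child object,
        // so it can be assigned to the token directly.
        token.transform.position = slots[nextSlot].position;
        nextSlot++;
    }
}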

Related

Checking whether something is in the way in a 3D environment while also snapping to another object

I have written some code that snaps a cylinder to an existing cylinder using a for loop over a gameobject list I call cylinders. Below is the code I use for "snapping" the cylinder to another cylinder using the mouse position and a "translucentPrefab". I would like to know whether another object is obstructing the placement. For performance reasons I would like to avoid another for loop through my list to check each position. Is there any good solution for this? Could I use a "fake" 2D array, since I mostly use full-integer boxes, and set squares to occupied in that array? Or is there a smarter approach?
if (worldMousePosition.x > centerPoint.x && Vector3.Distance(worldMousePosition, centerPoint) < snappingRange)
{
    translucentPrefab.transform.position = rightPosition;
    snapped = true;
    left = false;
    if (renderer != null)
    {
        // Set the prefab material to translucent cyan
        material.color = new Color(0, 1, 1, 0.5f);
    }
}
I have tried using box colliders in many ways to check the space the new cylinder would occupy, but all attempts have been a failure.
I suggest you use Physics.SphereCastAll at the point where your cursor is and just iterate over all objects that SphereCastAll returns.
And if you don't want certain objects to count toward physics, you can add a special physics layer just for them and adjust the collision matrix accordingly.
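A minimal sketch of that suggestion, assuming it runs inside the questioner's placement script where translucentPrefab exists, and that obstructionMask is a LayerMask you configure for obstacles (the mask is an assumption):

// Cast a small sphere from the camera through the cursor and collect
// everything it passes through.
Ray cursorRay = Camera.main.ScreenPointToRay(Input.mousePosition);
RaycastHit[] hits = Physics.SphereCastAll(cursorRay, 0.5f, 100f, obstructionMask);

bool blocked = false;
foreach (RaycastHit hit in hits)
{
    // Ignore the translucent preview itself; anything else is in the way.
    if (hit.collider.gameObject != translucentPrefab)
    {
        blocked = true;
        break;
    }
}
// Only allow snapping when nothing obstructs the placement.
if (!blocked) { /* proceed with the snapping code above */ }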

Leap Motion - Angle of proximal bone to metacarpal (side to side movement)

I am trying to get the angle between bones, such as the metacarpal bone and the proximal bone (the angle of moving the finger side to side; for example, the angle when your index finger is as close to your thumb as you can move it, and then the angle when your index finger is as close to your middle finger as you can move it).
I have tried using Vector3.Angle with the directions of the bones, but that doesn't work, as it includes the bending of the finger: if the hand is in a fist it gives a completely different value than an open hand.
What I really want is a way to "normalize" (I know normalizing isn't the correct term, but it's the best I could think of) the direction of the bones, so that even if the finger is bent, the direction vector would still point forwards rather than down, but would follow the side-to-side direction of the finger.
I have added a diagram below to try to illustrate what I mean.
In the second diagram, the blue represents what I currently get if I use the bones' directions, the green is the metacarpal direction, and the red is what I want (from the side view). The first diagram shows what I am looking for from a top-down view: the blue line is the metacarpal bone direction, the red line in this example is the proximal bone direction, and the green smudge represents the angle I am looking for.
To get this value, you need to "uncurl" the finger direction based on the current metacarpal direction. It's a little involved in the end; you have to construct some basis vectors in order to uncurl the hand along juuust the right axis. Hopefully the comments in this example script will explain everything.
using Leap;
using Leap.Unity;
using UnityEngine;

public class MeasureIndexSplay : MonoBehaviour {

  // Update is called once per frame
  void Update() {
    var hand = Hands.Get(Chirality.Right);
    if (hand != null) {
      Debug.Log(GetIndexSplayAngle(hand));
    }
  }

  // Some member variables for drawing gizmos.
  private Ray _metacarpalRay;
  private Ray _proximalRay;
  private Ray _uncurledRay;

  /// <summary>
  /// This method returns the angle of the proximal bone of the index finger relative to
  /// its metacarpal, when ignoring any angle due to the curling of the finger.
  ///
  /// In other words, this method measures the "side-to-side" angle of the finger.
  /// </summary>
  public float GetIndexSplayAngle(Hand h) {
    var index = h.GetIndex();

    // These are the directions we care about.
    var metacarpalDir = index.bones[0].Direction.ToVector3();
    var proximalDir = index.bones[1].Direction.ToVector3();

    // Let's start with the palm basis vectors.
    var distalAxis = h.DistalAxis(); // finger axis
    var radialAxis = h.RadialAxis(); // thumb axis
    var palmarAxis = h.PalmarAxis(); // palm axis

    // We need a basis whose forward direction is aligned to the metacarpal, so we can
    // uncurl the finger with the proper uncurling axis. The hand's palm basis is close,
    // but not aligned with any particular finger, so let's fix that.
    //
    // We construct a rotation from the palm "finger axis" to align it to the metacarpal
    // direction. Then we apply that same rotation to the other two basis vectors so
    // that we still have a set of orthogonal basis vectors.
    var metacarpalRotation = Quaternion.FromToRotation(distalAxis, metacarpalDir);
    distalAxis = metacarpalRotation * distalAxis;
    radialAxis = metacarpalRotation * radialAxis;
    palmarAxis = metacarpalRotation * palmarAxis;

    // Note: At this point, we don't actually need the distal axis anymore, and we
    // don't need to use the palmar axis, either. They're included above to clarify that
    // we're able to apply the aligning rotation to each axis to maintain a set of
    // orthogonal basis vectors, in case we wanted a complete "metacarpal-aligned basis"
    // for performing other calculations.

    // The radial axis, which has now been rotated a bit to be orthogonal to our
    // metacarpal, is the axis pointing generally towards the thumb. This is our curl
    // axis.
    // If you're unfamiliar with using directions as rotation axes, check out the images
    // here: https://en.wikipedia.org/wiki/Right-hand_rule
    var curlAxis = radialAxis;

    // We want to "uncurl" the proximal bone so that it is in line with the metacarpal,
    // when considered only on the radial plane -- this is the plane defined by the
    // direction approximately towards the thumb, and after the above step, it's also
    // orthogonal to the direction our metacarpal is facing.
    var proximalOnRadialPlane = Vector3.ProjectOnPlane(proximalDir, radialAxis);
    var curlAngle = Vector3.SignedAngle(metacarpalDir, proximalOnRadialPlane,
                                        curlAxis);

    // Construct the uncurling rotation from the axis and angle and apply it to the
    // *original* bone direction. We determined the angle of positive curl, so our
    // rotation flips its sign to rotate the other direction -- to _un_curl.
    var uncurlingRotation = Quaternion.AngleAxis(-curlAngle, curlAxis);
    var uncurledProximal = uncurlingRotation * proximalDir;

    // Upload some data for gizmo drawing (optional).
    _metacarpalRay = new Ray(index.bones[0].PrevJoint.ToVector3(),
                             index.bones[0].Direction.ToVector3());
    _proximalRay = new Ray(index.bones[1].PrevJoint.ToVector3(),
                           index.bones[1].Direction.ToVector3());
    _uncurledRay = new Ray(index.bones[1].PrevJoint.ToVector3(),
                           uncurledProximal);

    // This final direction is now uncurled and can be compared against the direction
    // of the metacarpal under the assumption it was constructed from an open hand.
    return Vector3.Angle(metacarpalDir, uncurledProximal);
  }

  // Draw some gizmos for debugging purposes.
  public void OnDrawGizmos() {
    Gizmos.color = Color.white;
    Gizmos.DrawRay(_metacarpalRay.origin, _metacarpalRay.direction);
    Gizmos.color = Color.blue;
    Gizmos.DrawRay(_proximalRay.origin, _proximalRay.direction);
    Gizmos.color = Color.red;
    Gizmos.DrawRay(_uncurledRay.origin, _uncurledRay.direction);
  }
}
For what it's worth, while the index finger is curled, tracked Leap hands don't have a whole lot of flexibility on this axis.

What is the best way to evenly distribute objects to fill a curved space in Unity 3D?

I would like to fill this auditorium seating area with chairs (in the editor) and have them all face the same focal point (the stage). I will then randomly fill the chairs with different people (at runtime). After each run the chairs should stay the same, but the people should be cleared, so that the crowd looks different on the next run.
The seating area does not currently have a collider attached to it, and neither do the chairs or people.
I found this code, which takes care of rotating the chairs so they target the same focal point, but I'm still curious whether there are better methods to do this.
//C# Example (LookAtPoint.cs)
using UnityEngine;

[ExecuteInEditMode]
public class LookAtPoint : MonoBehaviour
{
    public Vector3 lookAtPoint = Vector3.zero;

    void Update()
    {
        transform.LookAt(lookAtPoint);
    }
}
Additional Screenshots
You can write an editor script to place them evenly and automatically. A note on this script: I don't handle world vs. local/model space in the following code; remember to do so where you need it.
Generate parallel rays that go from +y to -y in a grid. The patch size of this grid depends on how big your chair and the mesh (the curved space) are. To get a proper patch size, take the bounding box of a chair (A) and of the curved-space mesh (B): A's footprint is the patch size, and dividing them (B/A) gives the number of patches along each axis.
Mesh chairMR; // mesh of the chair
Mesh audiMR;  // mesh of the auditorium

var patchSizeX = chairMR.bounds.size.x;
var patchSizeZ = chairMR.bounds.size.z;
var countX = (int)(audiMR.bounds.size.x / chairMR.bounds.size.x);
var countZ = (int)(audiMR.bounds.size.z / chairMR.bounds.size.z);
So the number of rays you need to generate is about countX * countZ, and the patch size is (patchSizeX, patchSizeZ).
Then the origin points of the rays can be determined:
//Generate parallel rays that come from +y to -y.
List<Ray> rays = new List<Ray>(countX * countZ);
for (var i = 0; i < countX; ++i)
{
    // add some tolerance so the placed chairs don't intersect each other
    // when rotated towards the stage
    var x = audiMR.bounds.min.x + i * patchSizeX + tolerance;
    for (var j = 0; j < countZ; ++j)
    {
        var z = audiMR.bounds.min.z + j * patchSizeZ + tolerance;
        var ray = new Ray(new Vector3(x, 10000, z), Vector3.down);
        rays.Add(ray);
        //You could also call `Physics.Raycast` right here instead of collecting the rays.
    }
}
Then get the positions to place the chairs (a sketch of these steps follows below):
Attach a MeshCollider to your mesh temporarily.
For each ray, Physics.Raycast it. (You can place obstacles on spots that should not receive a chair, and put those obstacles on a special layer.)
Get the hit point, create a chair at it, and rotate the chair towards the stage.
Reuse these hit points to place your people at runtime: convert each of them into a model/local-space point and save them into JSON or an asset via serialization, so that at runtime you can place the people randomly.
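A minimal sketch of those raycast steps, assuming the temporary MeshCollider is attached and that chairPrefab (the chair to instantiate), stage (a Transform marking the focal point), and audiTransform (the auditorium's Transform) exist in the surrounding editor script; all three names are assumptions:

var hitPoints = new List<Vector3>();
foreach (var ray in rays)
{
    RaycastHit hit;
    // Only rays that actually hit the seating mesh produce a chair.
    if (Physics.Raycast(ray, out hit, Mathf.Infinity))
    {
        var chair = (GameObject)Instantiate(chairPrefab, hit.point, Quaternion.identity);
        // Face the stage but stay upright: look at a point at the chair's own height.
        chair.transform.LookAt(new Vector3(stage.position.x, hit.point.y, stage.position.z));
        // Save the local-space point for placing people at runtime.
        hitPoints.Add(audiTransform.InverseTransformPoint(hit.point));
    }
}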

How to calculate number of sprites to spawn across the device's screen height?

In my Unity2D project, I am trying to spawn copies of my sprite on top of each other, across the entire height of the device's screen. To give an idea, think of boxes stacked on top of each other across the whole screen height; in my case, I'm spawning arrow sprites instead of boxes.
I already have the sprites spawning on top of each other successfully. My problem now is how to calculate how many sprites to spawn to make sure they spread across the screen's height.
I currently have this snippet of code:
public void SpawnInitialArrows()
{
    // get the size of our sprite first
    Vector3 arrowSizeInWorld = dummyArrow.GetComponent<Renderer>().bounds.size;

    // get screen.height in world coords
    float screenHeightInWorld = Camera.main.ScreenToWorldPoint(new Vector3(0, Screen.height, 0)).y;

    // get the bottom edge of the screen in world coords
    Vector3 bottomEdgeInWorld = Camera.main.ScreenToWorldPoint(new Vector3(0, 0, 0));

    // calculate how many arrows to spawn based on screen.height/arrow.size.y
    int numberOfArrowsToSpawn = (int)screenHeightInWorld / (int)arrowSizeInWorld.y;

    // create a vector3 to store the position of the previous arrow
    Vector3 lastArrowPos = Vector3.zero;

    for (int i = 0; i < numberOfArrowsToSpawn; ++i)
    {
        GameObject newArrow = this.SpawnArrow();

        // if this is the first arrow in the list, spawn at the bottom of the screen
        if (LevelManager.current.arrowList.Count == 0)
        {
            // we only handle the y position because we're stacking them on top of each other!
            newArrow.transform.position = new Vector3(newArrow.transform.position.x,
                                                      bottomEdgeInWorld.y + arrowSizeInWorld.y / 2,
                                                      newArrow.transform.position.z);
        }
        else
        {
            // else, spawn on top of the previous arrow
            newArrow.transform.position = new Vector3(newArrow.transform.position.x,
                                                      lastArrowPos.y + arrowSizeInWorld.y,
                                                      newArrow.transform.position.z);
        }

        // save the position of this arrow so that we know where to spawn the next arrow!
        lastArrowPos = newArrow.transform.position;

        LevelManager.current.arrowList.Add(newArrow);
    }
}
The problem with my current code is that it doesn't spawn the correct number of sprites to cover the entire height of the device's screen; it only spawns my arrow sprites up to approximately the middle of the screen. What I want is for it to spawn them up to the top edge of the screen.
Does anyone know where the calculation went wrong, and how to make the current code cleaner?
If the sprites are rendered by a camera in perspective mode and appear to have varying sizes on screen (sprites farther from the camera are smaller than sprites closer to it), then a different way to calculate the numberOfArrowsToSpawn value is needed.
You could try adding sprites with a while loop instead of a for loop: just continue creating sprites until the calculated world position of the next sprite would no longer be visible to the camera. You can check whether a point is visible to the camera using the technique Jessy provides in this link:
http://forum.unity3d.com/threads/point-in-camera-view.72523/
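A minimal sketch of that while-loop idea, reusing the variables from the question's code and Camera.WorldToViewportPoint for the visibility test (a point is on screen vertically while its viewport y lies in [0,1]):

float nextY = bottomEdgeInWorld.y + arrowSizeInWorld.y / 2;

// Keep spawning until the next arrow's center would leave the top of the screen.
while (Camera.main.WorldToViewportPoint(new Vector3(0, nextY, 0)).y <= 1f)
{
    GameObject newArrow = this.SpawnArrow();
    newArrow.transform.position = new Vector3(newArrow.transform.position.x,
                                              nextY,
                                              newArrow.transform.position.z);
    LevelManager.current.arrowList.Add(newArrow);
    nextY += arrowSizeInWorld.y;
}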
I think your screenHeightInWorld is really a screenTopInWorld; it is a point, and a point can be anywhere in space.
You need the relative screen height in world coordinates.
With an orthographic projection this is simply the camera frustum's height, and Unity's orthographicSize is half of that:
float screenHeightInWorld = Camera.main.orthographicSize * 2.0f;
I did not read the rest, but it is probably fine; it is up to you how you implement this.
I'd simply create an arrow method, something like bool SpawnArrowAboveIfFits(), which can call itself iteratively on the new instances.

How to find the 3D coordinates of a surface from the click location of the mouse on the ILNumerics surface plots?

Currently our system uses the ILNumerics 3D plot cube class with an ILNumerics surface component to display a 3D meshed surface. An aim for our system is to be able to interrogate individual points on the surface from a mouse click on the plot. We have the MouseClick event set up on our plot; the problem is that I am unsure how to get the values for the particular point on the surface that has been clicked. Could anyone help with this issue?
The conversion from 2D mouse coordinates to 3D 'model' coordinates is possible, under some limitations:
The conversion is not unambiguous. The mouse event only provides 2 dimensions: X and Y screen coordinates. In the 3D model there might be more than one point 'behind' this 2D screen point. Therefore, the best you can get is to compute a line in 3D, starting at the camera and ending at infinite depth.
While in theory it would be possible at least to try to find the crossing of the line with the 3D objects, ILNumerics currently does not. Even in the simple case of a surface it is easy to construct a 3D model which crosses the line at more than one point.
For a simplified situation a solution exists: if the Z coordinate in 3D does not matter, one can use common matrix conversions in order to acquire the X and Y coordinates in 3D and use these only. Let's say your plot is a 2D line plot or a surface plot, but only viewed from 'above' (i.e. the unrotated X-Y plane), and the Z coordinate of the clicked point is not of interest. Let's further assume you have set up an ILScene in a common Windows Forms application with ILPanel:
private void ilPanel1_Load(object sender, EventArgs e) {
    var scene = new ILScene() {
        new ILPlotCube(twoDMode: true) {
            new ILSurface(ILSpecialData.sincf(20,30))
        }
    };
    scene.First<ILSurface>().MouseClick += (s, arg) => {
        // we start at the mouse event target -> this will be the
        // surface group node (the parent of "Fill" and "Wireframe")
        var group = arg.Target.Parent;
        if (group != null) {
            // walk up to the next camera node, collecting all transforms on the path
            // (stop safely if we run out of parents before finding a camera)
            Matrix4 trans = group.Transform;
            while (!(group is ILCamera) && group.Parent != null) {
                group = group.Parent;
                trans = group.Transform * trans;
            }
            if (group is ILCamera) {
                // convert arg.LocationF to world coords
                // The Z coord is not provided by the mouse! -> choose arbitrary value
                var pos = new Vector3(arg.LocationF.X * 2 - 1, arg.LocationF.Y * -2 + 1, 0);
                // invert the matrix
                trans = Matrix4.Invert(trans);
                // trans now converts from the world coord system (at the camera) to
                // the local coord system in the 'target' group node (surface).
                // In order to transform the mouse (viewport) position, we
                // left multiply the transformation matrix.
                pos = trans * pos;
                // view result in the window title
                Text = "Model Position: " + pos.ToString();
            }
        }
    };
    ilPanel1.Scene = scene;
}
What it does: it registers a MouseClick event handler on the surface group node. In the handler it accumulates the transformation matrices on the path from the clicked target (the surface group node) up to the next camera node the surface is a child of. During rendering, the (model) coordinates of the vertices are transformed by the local coordinate transformation matrix hosted in every group node. All transformations are accumulated, and so the vertex coordinates end up in the 'world coordinate' system established by every camera. This is how rendering finds the 2D screen position from the 3D model vertex positions.
In order to find the 3D position from the 2D screen coordinates, one must go the other way around. In the example, we acquire the transformation matrices of every group node, multiply them all up, and invert the resulting transformation matrix. The inversion is necessary because such transforms naturally describe the conversion from a child node to its parent, and here we need the opposite direction.
This method gives the correct 3D coordinates at the mouse position. However, keep the limitations in mind! Here we do not take any rotation of the plot cube into account (the plot cube must be left unrotated), nor projection transforms (plot cubes use an orthographic transform by default, which is basically a no-op). In order to handle those variables as well, you may extend the example accordingly.