I am trying to create a basic tower defense game where I have set up my cell size to be 0.5 X and 0.5 Y (so that the player can place their towers more freely, think WC3).
This causes problems later in the game when I want to check whether a grid cell is occupied, because some cells will seem to be taken but actually are not.
Here's an image to illustrate my problem:
The black square is rendered over 4 cells, but only 1 of the cells is occupied (the white square in the lower left corner of the black square).
Has anyone else faced this particular problem and knows how to solve it, or are there any other solutions you would recommend? :)
Thanks in advance!
Though I'm unsure how the Unity Grid component works, I have used the following approach in one of my grid-based games. Note, however, that I implemented my own grid for this, which contains custom grid tiles. Maybe the same logic can be applied to the Unity Grid.
The grid was just a simple grid made like this:
for (int i = 0; i < terrainLength; i++)
{
    for (int j = 0; j < terrainWidth; j++)
    {
        // Spawn one tile per cell, offset from the grid origin by the tile's scale.
        GameObject tile = Instantiate(gridTilePrefab, new Vector3(posX + i * gridCube.transform.localScale.x, posY + terrainHeight, posZ + j * gridCube.transform.localScale.z), Quaternion.identity);
        tile.name = "grid[" + i + "," + j + "]";
        tile.transform.parent = gridParent.transform;
    }
}
These grid tiles would have a boolean isOccupied, which would be set to true if an object is placed on the tile and false if not.
To check whether or not a tile was occupied, I would simply cast a ray up from the center of the tile and check for any collision while in the build phase (no need to do these checks during play!). The implementation was as simple as this:
class GridTile : MonoBehaviour
{
    public bool isOccupied { get; private set; }

    // This loops like an Update while we're in the building stage.
    public void BuildStageLoop()
    {
        // Cast a short ray upwards from the tile's center; anything hit means the tile is occupied.
        RaycastHit hit;
        isOccupied = Physics.Raycast(transform.position, Vector3.up, out hit, 2f);
    }
}
And on the object being placed, I would just check that every tile underneath it has isOccupied set to false. To find the tiles underneath it, I would do a box cast downwards with the width and length of the object you're trying to place, extending a bit below the object so it can collide with the grid tiles.
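For the placement check itself, here's a minimal sketch (my own addition, not from the original answer), assuming the grid tiles sit on a layer named "GridTiles" and each carries the GridTile component shown above:
public bool CanPlaceAt(Vector3 placePosition, Vector3 towerSize)
{
    // Cast a box the size of the tower's footprint downwards onto the tiles.
    Vector3 castOrigin = placePosition + Vector3.up * 2f;
    int tileMask = LayerMask.GetMask("GridTiles");
    RaycastHit[] hits = Physics.BoxCastAll(castOrigin, towerSize * 0.5f, Vector3.down,
                                           Quaternion.identity, 5f, tileMask);

    foreach (RaycastHit hit in hits)
    {
        GridTile tile = hit.collider.GetComponent<GridTile>();
        if (tile != null && tile.isOccupied)
        {
            return false; // at least one tile under the footprint is already taken
        }
    }
    // Placeable only if the footprint actually covered some tiles.
    return hits.Length > 0;
}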
I would like to fill this auditorium seating area with chairs (in the editor) and have them all face the same focal point (the stage). I will then be randomly filling the chairs with different people (during runtime). After each run the chairs should stay the same, but the people should be cleared so that during the next run the crowd looks different.
The seating area does not currently have a collider attached to it, and neither do the chairs or people.
I found this code, which takes care of rotating the chairs so they target the same focal point, but I'm still curious whether there are any better methods to do this.
//C# Example (LookAtPoint.cs)
using UnityEngine;

[ExecuteInEditMode]
public class LookAtPoint : MonoBehaviour
{
    public Vector3 lookAtPoint = Vector3.zero;

    void Update()
    {
        transform.LookAt(lookAtPoint);
    }
}
You can write an editor script to automatically place them evenly. Note that I don't handle conversion between world and local/model space in the following code; remember to do it when you need to.
Generate parallel rays that go from +y to -y in a grid. The patch size of this grid depends on how big your chair and the curved seating mesh are: get the bounding box of a chair (A) and of the seating mesh (B), use the chair's size (A) as the patch size, and divide them (B/A) to get the number of patches.
Mesh chairMR; // mesh of the chair
Mesh audiMR;  // mesh of the auditorium
// Each patch is the size of one chair footprint.
var patchSizeX = chairMR.bounds.size.x;
var patchSizeZ = chairMR.bounds.size.z;
// Number of patches that fit along each axis of the auditorium.
var countX = (int)(audiMR.bounds.size.x / chairMR.bounds.size.x);
var countZ = (int)(audiMR.bounds.size.z / chairMR.bounds.size.z);
So the number of rays you need to generate is about countX*countZ. Patch size is (patchSizeX, patchSizeZ).
Then the origin points of the rays can be determined:
//Generate parallel rays that come from +y to -y.
List<Ray> rays = new List<Ray>(countX * countZ);
for (var i = 0; i < countX; ++i)
{
    // Add some tolerance so the placed chairs don't intersect each other when rotated towards the stage.
    var x = audiMR.bounds.min.x + i * patchSizeX + tolerance;
    for (var j = 0; j < countZ; ++j)
    {
        var z = audiMR.bounds.min.z + j * patchSizeZ + tolerance;
        var ray = new Ray(new Vector3(x, 10000, z), Vector3.down);
        rays.Add(ray);
        //You can also call `Physics.Raycast` here instead of storing the ray.
    }
}
Get positions to place the chairs (a sketch of these steps follows below):
Attach a MeshCollider to your auditorium mesh temporarily.
For each ray, call Physics.Raycast with it (you can place obstacles on spots that should not get a chair; give those obstacles a special layer).
Get the hit point, create a chair at that point, and rotate it towards the stage.
Reuse these hit points to place your people at runtime.
Convert each of them into a model/local-space point and save them to JSON or an asset via serialization for later use at runtime, when you place the people randomly.
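A minimal editor-side sketch of those steps, assuming the auditorium temporarily has a MeshCollider on a layer named "Auditorium", and that chairPrefab and stagePoint are placeholder fields you supply (not from the original code):
for (int i = 0; i < countX; i++)
{
    for (int j = 0; j < countZ; j++)
    {
        float x = audiMR.bounds.min.x + i * patchSizeX + tolerance;
        float z = audiMR.bounds.min.z + j * patchSizeZ + tolerance;
        var ray = new Ray(new Vector3(x, 10000f, z), Vector3.down);

        // Only hit the temporary MeshCollider on the seating area.
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, Mathf.Infinity, LayerMask.GetMask("Auditorium")))
        {
            // Place a chair at the hit point and face it towards the stage,
            // keeping it upright by looking at a point at the chair's own height.
            GameObject chair = (GameObject)Instantiate(chairPrefab, hit.point, Quaternion.identity);
            chair.transform.LookAt(new Vector3(stagePoint.x, chair.transform.position.y, stagePoint.z));
        }
    }
}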
I've been working on a scene in Unity3D where I have the KinectV2 depth information coming in at 512 x 424, and I'm converting that in real time to a mesh that is also 512 x 424, so there is a 1:1 ratio of pixel data (depth) to vertices (mesh).
My end goal is to make the 'Monitor 3D View' scene found in 'Microsoft Kinect Studio v2.0' with the Depth.
I've pretty much got it working in terms of the point cloud. However, there is a large amount of warping in my Unity scene. I thought it might have been down to my maths, etc.
However, I noticed that it's the same for the Unity demo Kinect scene supplied in their development kit.
I'm just wondering if I'm missing something obvious here? Each of my pixels (or vertices in this case) is mapped out in a 1 by 1 fashion.
I'm not sure whether it's because I need to process the data from the DepthFrame before rendering it to the scene, or whether there's some additional step I've missed to get the true representation of my room. It looks like there's a slight 'spherical' effect being added right now.
These two images are a top down shot of my room. The green line represents my walls.
The left image is the Kinect in a Unity scene, and the right is within Microsoft Kinect Studio. Ignoring the colour difference, you can see that the left (Unity) is warped, whereas the right is linear and perfect.
I know it's quite hard to make out, especially since you don't know the layout of the room I'm sat in :/ Here's the side view too. Can you see the warping on the left? Use the green lines as a reference - they are straight in the actual room, as shown correctly in the right image.
Check out my video to get a better idea:
https://www.youtube.com/watch?v=Zh2pAVQpkBM&feature=youtu.be
Code C#
Pretty simple to be honest. I'm just grabbing the depth data straight from the Kinect SDK, and placing it into a point cloud mesh on the Z axis.
//called on application start
void Start(){
    _Reader = _Sensor.DepthFrameSource.OpenReader();
    _Data = new ushort[_lengthInPixels];
    _Sensor.Open();
}

//called once per frame
void Update(){
    if(_Reader != null){
        var dep_frame = _Reader.AcquireLatestFrame();
        dep_frame.CopyFrameDataToArray(_Data);
        dep_frame.Dispose();
        dep_frame = null;
        UpdateScene();
    }
}

//update point cloud in scene
void UpdateScene(){
    for(int y = 0; y < height; y++){
        for(int x = 0; x < width; x++){
            int index = (y * width) + x;
            float depthAdjust = 0.1f;
            // Keep x/y as they are and push the vertex out along Z by the measured depth.
            Vector3 new_pos = new Vector3(points[index].x, points[index].y, _Data[index] * depthAdjust);
            points[index] = new_pos;
        }
    }
}
Kinect API can be found here:
https://msdn.microsoft.com/en-us/library/windowspreview.kinect.depthframe.aspx
Would appreciate any advice, thanks!
With thanks to Edward Zhang, I figured out what I was doing wrong.
It's down to me not projecting my depth points correctly: I needed to use the CoordinateMapper to map my DepthFrame into camera space.
My code was assuming an orthographic depth instead of a perspective depth camera. I just needed to implement this:
https://msdn.microsoft.com/en-us/library/windowspreview.kinect.coordinatemapper.aspx
//called once per frame
void Update(){
    if(_Reader != null){
        var dep_frame = _Reader.AcquireLatestFrame();
        dep_frame.CopyFrameDataToArray(_Data);
        dep_frame.Dispose();
        dep_frame = null;
        // Map the raw depth values to 3D camera-space points.
        CameraSpacePoint[] _CameraSpace = new CameraSpacePoint[_Data.Length];
        _Mapper.MapDepthFrameToCameraSpace(_Data, _CameraSpace);
        UpdateScene(_CameraSpace);
    }
}

//update point cloud in scene
void UpdateScene(CameraSpacePoint[] _CameraSpace){
    for(int y = 0; y < height; y++){
        for(int x = 0; x < width; x++){
            int index = (y * width) + x;
            Vector3 new_pos = new Vector3(_CameraSpace[index].X, _CameraSpace[index].Y, _CameraSpace[index].Z);
            points[index] = new_pos;
        }
    }
}
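One assumption in the snippet above: _Mapper has to be fetched from the sensor before it is used, e.g. in Start():
// Assumed setup for the snippet above: the CoordinateMapper comes from the open sensor.
_Mapper = _Sensor.CoordinateMapper;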
I am using the MVN-Unity plug-in to access Xsens motion capture data and animate a Unity character in real-time. The character in white and green and yellow is a Unity skeleton that is animated based on Xsens motion data.
I am now trying to animate a different (non-Unity) character using Xsens (the other character, the one that looks like a human), so, similar to what the plug-in does, the motion data (positions & orientations) are mapped to his joints/bones.
But as you can see below, something is wrong with orientations...
I think the reason might be that the rotations from MVN are not properly offset. As you can see in the next two pictures, the MVN hips have the x-axis (red) pointing to the puppet's backside, whereas for the guy's hips, the x-axis points to the right of him.
It might also be that the plug-in is using global rotations somewhere it should use local rotations. That this must be the case can be shown by rotating the guy before starting the Unity app; i.e. select the guy's root game object, set its y-rotation to 0/90/180/270 before pressing play, and compare the results: the distortions are different every time.
I don't know how to properly fix this. The code snippet that updates the Unity model (mapped to the MVN puppet or the guy) is as follows. I took this from the plug-in scripts:
private void updateModel(Transform[] pose, Transform[] model)
{
    // Re-set the target, then set it up based on the segments.
    Vector3 pelvisPos = new Vector3();
    Vector3 lastPos = target.position;
    target.position = Vector3.zero;

    // Map only 23 joints.
    for (int i = 0; i < 23; i++)
    {
        switch (i)
        {
            // Position only on y axis, and leave x and z to the body. Apply the 'global' position & orientation to the pelvis.
            case (int)XsAnimationSegment.Pelvis:
                pelvisPos = pose[i].position * scale;
                model[i].position = new Vector3( model[i].position.x, pelvisPos.y, model[i].position.z );
                model[i].rotation = pose[i].rotation * modelRotTP[i];
                break;

            // Update only the 'orientation' for the rest of the segments.
            default:
                if ( model[i] != null )
                {
                    model[i].rotation = pose[i].rotation * modelRotTP[i];
                }
                break;
        }
    }

    // Apply root motion if the flag is enabled; i.e. true.
    if (applyRootMotion)
    {
        // Only update x and z, since pelvis is already modified by y previously.
        target.position = new Vector3(pelvisPos.x + pelvisPosTP.x, lastPos.y, pelvisPos.z + pelvisPosTP.z);
    }

    // Set the final rotation of the full body, but only position it to face similar as the pelvis.
    Quaternion q = Quaternion.Inverse(modelRotTP[(int)XsAnimationSegment.Pelvis]) * model[(int)XsAnimationSegment.Pelvis].rotation;
    target.rotation = new Quaternion(target.rotation.x, q.y, target.rotation.z, target.rotation.w);
}
I sort of understand what the code does, but I don't know how to fix this problem. Most probably it has to do with the axes being different? I would appreciate any help...
You can modify the XsLiveAnimator.cs script at line 496, replacing it with this:
model[segmentOrder[i]].transform.rotation = orientations[segmentOrder[i]];
model[segmentOrder[i]].transform.Rotate(rotOffset, Space.World);
rotOffset is a Vector3 containing your rotation offset (Euler angles).
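For example, if the guy's hips are yawed 90 degrees relative to the MVN puppet's hips (as in your screenshots), a purely hypothetical offset could look like this:
// Hypothetical value: compensate a 90-degree yaw mismatch between the two hip frames.
Vector3 rotOffset = new Vector3(0f, 90f, 0f);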
I am attempting to apply a gradient effect to a Unity3D (5.2) GUI object, but it's as if one of the gradient color keys is being completely ignored. I have tried both instantiating a new gradient field and declaring a gradient field public and editing its keys in the editor, but the effect remains the same.
I'm beginning to think that I am not supposed to use Gradients in a BaseMeshEffect the way I am using them. If I only have 2 keys, the colors render properly. Where am I wrong?
Here is a code sample followed by a screen shot.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class GradientUI : BaseMeshEffect
{
    [SerializeField]
    public Gradient Grad;

    public override void ModifyMesh(VertexHelper vh)
    {
        if (!IsActive())
        {
            return;
        }
        List<UIVertex> vertexList = new List<UIVertex>();
        vh.GetUIVertexStream(vertexList);
        ModifyVertices(vertexList);
        vh.Clear();
        vh.AddUIVertexTriangleStream(vertexList);
    }

    void ModifyVertices(List<UIVertex> vertexList)
    {
        int count = vertexList.Count;
        float bottomY = vertexList[0].position.y;
        float topY = vertexList[0].position.y;
        // Find the vertical extents of the element.
        for (int i = 1; i < count; i++)
        {
            float y = vertexList[i].position.y;
            if (y > topY)
            {
                topY = y;
            }
            else if (y < bottomY)
            {
                bottomY = y;
            }
        }
        float uiElementHeight = topY - bottomY;
        // Color each vertex by how far up the element it sits.
        for (int i = 0; i < count; i++)
        {
            UIVertex uiVertex = vertexList[i];
            float percentage = (uiVertex.position.y - bottomY) / uiElementHeight;
            // Debug.Log(percentage);
            Color col = Grad.Evaluate(percentage);
            uiVertex.color = col;
            vertexList[i] = uiVertex;
            Debug.Log(uiVertex.position);
        }
    }
}
Screen shot
Your script is actually OK, no problem with it. The problem here is that UI elements simply don't have enough geometry for you to actually see the whole gradient.
Let me explain. In a nutshell, each UI element is actually a mesh made of several 3D triangles, each one rotated to face the camera with its front so it looks 2D. Your filter works by assigning a color value to each vertex of those triangles. The color values in the middle of a triangle are interpolated based on proximity to each of the colored vertices.
If you look at a UI element in wireframe, you will see that its geometry is very simple. This is, for example, how a sliced image looks:
As you can see, all of its vertices are concentrated at the corners, and there are no vertices in the middle. So, let's assume your gradient has 2 keys, WHITE=>RED. The upper vertices get WHITE or close to WHITE, and the bottom vertices get RED or close to RED. This works OK for 2 keys.
Now assume you have 3 keys, WHITE=>BLUE=>RED. The upper vertices are still WHITE or close to WHITE and the bottom vertices are still RED or close to RED; the BLUE value is supposed to sit somewhere in the middle, but there is no vertex in the middle, so it is not assigned to anything. So you still get a WHITE-to-RED gradient.
Now, what you can do:
1) You can add more geometry programmatically, for example by simply subdividing the whole mesh (a sketch of one subdivision pass follows after this list). This may help you: http://answers.unity3d.com/questions/259127/does-anyone-have-any-code-to-subdivide-a-mesh-and.html. Note that the more keys your gradient has, the more subdivisions are required.
2) Use a texture that looks like a gradient.
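For option 1, here is a minimal sketch of one 1-to-4 subdivision pass over the UIVertex triangle stream (my own illustration, not tested against your setup); you could run it on vertexList inside ModifyMesh before applying the gradient, and repeat it if the gradient has many keys:
// One pass of 1-to-4 triangle subdivision on a UIVertex triangle stream.
static List<UIVertex> Subdivide(List<UIVertex> tris)
{
    var result = new List<UIVertex>(tris.Count * 4);
    for (int i = 0; i < tris.Count; i += 3)
    {
        UIVertex a = tris[i], b = tris[i + 1], c = tris[i + 2];
        UIVertex ab = Midpoint(a, b), bc = Midpoint(b, c), ca = Midpoint(c, a);
        // Replace the triangle (a, b, c) with four smaller triangles.
        result.AddRange(new[] { a, ab, ca,  ab, b, bc,  ca, bc, c,  ab, bc, ca });
    }
    return result;
}

static UIVertex Midpoint(UIVertex v0, UIVertex v1)
{
    // Average position, UV, and color so the new vertex blends in smoothly.
    UIVertex m = v0;
    m.position = (v0.position + v1.position) * 0.5f;
    m.uv0 = (v0.uv0 + v1.uv0) * 0.5f;
    m.color = Color32.Lerp(v0.color, v1.color, 0.5f);
    return m;
}
For example, vertexList = Subdivide(Subdivide(vertexList)); would give you two passes.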
How can I dynamically create objects (cubes) from one wall to the other in code in Unity? Like this:
It's funny that the Unity guys have written an example that looks exactly like the thing you are trying to achieve :)
http://docs.unity3d.com/Manual/InstantiatingPrefabs.html
You can instantiate a prefab with two nested for loops.
public GameObject objToCreate;

void Start ()
{
    // Two nested for loops to create a two-dimensional array of objects.
    for(int i = 0; i < 5; i++){ // moves along Y
        for(int j = 0; j < 5; j++){ // moves along X
            Instantiate(objToCreate, transform.position, transform.rotation);
            transform.Translate(1f, 0, 0); // move on X
        }
        transform.Translate(0, 1f, 0);  // when the row is filled, translate on Y and start again
        transform.Translate(-5f, 0, 0); // reset our X position by the 5 units covered in the inner loop
        // For more space between objects, just change the values in the Translate calls.
    }
}
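As a design note, you could also compute the positions directly instead of moving the spawner's own transform; a rough sketch, assuming the same 5x5 grid with 1-unit spacing:
void Start ()
{
    Vector3 origin = transform.position;
    for (int y = 0; y < 5; y++)
    {
        for (int x = 0; x < 5; x++)
        {
            // Offset each cube from the spawner without touching its transform.
            Vector3 pos = origin + new Vector3(x * 1f, y * 1f, 0f);
            Instantiate(objToCreate, pos, transform.rotation);
        }
    }
}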