Dynamically create objects in Unity - unity3d

How can I dynamically create objects (cubes) in code in Unity, going from one wall to the other? Like this:

It's funny that the Unity folks have written an example that looks exactly like the thing you are trying to achieve :)
http://docs.unity3d.com/Manual/InstantiatingPrefabs.html

You can instantiate a prefab with two nested for loops.
public GameObject objToCreate;

void Start()
{
    // Two nested for loops to fill a two-dimensional grid.
    for (int i = 0; i < 5; i++)         // outer loop moves on Y
    {
        for (int j = 0; j < 5; j++)     // inner loop moves on X
        {
            Instantiate(objToCreate, transform.position, transform.rotation);
            transform.Translate(1f, 0, 0);  // step one unit along X
        }
        transform.Translate(0, 1f, 0);   // row filled: move up one unit on Y
        transform.Translate(-5f, 0, 0);  // reset X by the 5 units the inner loop advanced
        // For more space between objects, just change the values in the Translate calls.
    }
}
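A variation that avoids mutating the spawner's transform is to compute each spawn position directly from the loop indices. This is a minimal sketch under the same 5x5 assumptions; the spacing field is mine, not from the original answer:

public GameObject objToCreate;
public float spacing = 1f; // assumed distance between cubes

void Start()
{
    Vector3 origin = transform.position;
    for (int i = 0; i < 5; i++)         // rows (Y)
    {
        for (int j = 0; j < 5; j++)     // columns (X)
        {
            // Offset from the starting position instead of moving this transform.
            Vector3 pos = origin + new Vector3(j * spacing, i * spacing, 0f);
            Instantiate(objToCreate, pos, transform.rotation);
        }
    }
}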

Related

Unity: Make a tile take up more grid cells

I am trying to create a basic tower defense game where I have set up my cell size to be 0.5 X and 0.5 Y (so that the player can place their towers more freely, think WC3).
This causes problems later in the game when I want to check whether a grid cell is occupied, because some cells will seem to be taken but actually are not.
Here's an image to illustrate my problem:
The black square is rendered over 4 cells, but only 1 of the cells is occupied (the white square in the lower-left corner of the black square).
Has anyone else faced this particular problem and knows how to solve it, or are there any other solutions you would like to recommend? :)
Thanks in advance!
Though I'm unsure how the Unity Grid component works, I have used the following approach in one of my grid-based games. Note, however, that I implemented my own grid for this, which contains custom grid tiles; maybe the same logic can be applied to the Unity grid.
The grid itself was just a simple one, made like this:
for (int i = 0; i < terrainLength; i++)
{
    for (int j = 0; j < terrainWidth; j++)
    {
        // Spawn one tile per cell, spaced by the tile prefab's scale.
        GameObject tile = Instantiate(gridTilePrefab,
            new Vector3(posX + i * gridCube.transform.localScale.x,
                        posY + terrainHeight,
                        posZ + j * gridCube.transform.localScale.z),
            Quaternion.identity);
        tile.name = "grid[" + i + "," + j + "]";
        tile.transform.parent = gridParent.transform;
    }
}
These grid tiles have a boolean, isOccupied, that is set to true if an object is placed on the tile and false if not.
To check whether a tile was occupied, I simply cast a ray up from the center of the tile and check for any collision while in the builder phase (no need to do these checks during play!). The implementation was as simple as this:
class GridTile : MonoBehaviour
{
    public bool isOccupied { get; private set; }

    // This loops like an Update while we're in the building stage.
    public void BuildStageLoop()
    {
        // Cast a short ray up from the tile's center; any hit means
        // something is sitting on this tile.
        RaycastHit hit;
        isOccupied = Physics.Raycast(transform.position, Vector3.up, out hit, 2f);
    }
}
And on the object being placed I would just check that every tile underneath it has isOccupied set to false. To find the tiles underneath, I would do a BoxCast downwards with the width and length of the object you're trying to place, extending a bit below the object so it can collide with the grid tiles.
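A minimal sketch of that check, assuming the tiles carry the GridTile component shown above (the method name and the extents here are mine, not from the original answer):

// Hypothetical placement check: box-cast downward with the object's footprint
// and require that no tile underneath reports isOccupied.
bool CanPlaceAt(Vector3 center, Vector3 halfExtents)
{
    RaycastHit[] hits = Physics.BoxCastAll(center, halfExtents, Vector3.down,
                                           Quaternion.identity, 1f);
    foreach (RaycastHit hit in hits)
    {
        GridTile tile = hit.collider.GetComponent<GridTile>();
        if (tile != null && tile.isOccupied)
            return false; // at least one cell under the footprint is taken
    }
    return true;
}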

How can I see the inside of a cylinder in Unity?

I placed my camera inside a cylinder, but I am not able to see it. All I can see is the outside of it. What should I do?
I found out! You have to add a material to the cylinder, then select Shader/Legacy Shaders/Particles/Alpha Blended.
Instead of using a double-sided material (which will render the mesh twice) you can just flip your mesh, i.e. invert its normals.
Inverting is done by negating each normal and reversing the winding order in the triangles array, which contains the indexes of the vertices the mesh refers to when rendering triangles.
The simplest code that does it looks something like this:
[RequireComponent(typeof(MeshFilter))]
public class MeshInverter : MonoBehaviour
{
    void Start()
    {
        var meshFilter = GetComponent<MeshFilter>();
        var tris = meshFilter.sharedMesh.triangles;
        var normals = meshFilter.sharedMesh.normals;

        // Flip every normal so lighting works from the inside.
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];

        // Reverse the winding order of each triangle by swapping its
        // first two indexes, so back faces become front faces.
        for (int i = 0; i < tris.Length / 3; i++)
        {
            int temp = tris[i * 3 + 1];
            tris[i * 3 + 1] = tris[i * 3];
            tris[i * 3] = temp;
        }

        // Work on a copy so the shared mesh asset is left untouched.
        Mesh mesh = Instantiate(meshFilter.sharedMesh);
        mesh.triangles = tris;
        mesh.normals = normals;
        meshFilter.mesh = mesh;
    }
}
This will make the inside of the cylinder visible in Play mode.

Microsoft Kinect V2 + Unity 3D Depth = Warping

I've been working on a scene in Unity3D where I have the Kinect V2 depth information coming in at 512 x 424, and I'm converting that in real time to a mesh that is also 512 x 424, so there is a 1:1 ratio of pixel data (depth) to vertices (mesh).
My end goal is to recreate the 'Monitor 3D View' scene found in 'Microsoft Kinect Studio v2.0' using the depth data.
I've pretty much got it working in terms of the point cloud. However, there is a large amount of warping in my Unity scene. I thought it might have been down to my maths, etc.
However, I noticed that it's the same for the Kinect Unity demo supplied in their development kit.
I'm just wondering if I'm missing something obvious here? Each of my pixels (or vertices, in this case) is mapped out in a 1-by-1 fashion.
I'm not sure if it's because I need to process the data from the DepthFrame before rendering it to the scene, or if there's some additional step I've missed to get a true representation of my room, because it looks like there's a slight 'spherical' effect being added right now.
These two images are a top down shot of my room. The green line represents my walls.
The left image is the Kinect in a Unity scene, and the right is within Microsoft Kinect Studio. Ignoring the colour difference, you can see that the left (Unity) is warped, whereas the right is linear and perfect.
I know it's quite hard to make out, especially since you don't know the layout of the room I'm sat in :/ Here's a side view too. Can you see the warping on the left? Use the green lines as a reference - these are straight in the actual room, as shown correctly in the right image.
Check out my video to get a better idea:
https://www.youtube.com/watch?v=Zh2pAVQpkBM&feature=youtu.be
Code C#
Pretty simple, to be honest. I'm just grabbing the depth data straight from the Kinect SDK and placing it into a point-cloud mesh on the Z axis.
//called on application start
void Start()
{
    _Reader = _Sensor.DepthFrameSource.OpenReader();
    _Data = new ushort[_lengthInPixels];
    _Sensor.Open();
}

//called once per frame
void Update()
{
    if (_Reader != null)
    {
        var dep_frame = _Reader.AcquireLatestFrame();
        dep_frame.CopyFrameDataToArray(_Data);
        dep_frame.Dispose();
        dep_frame = null;
        UpdateScene();
    }
}
//update point cloud in scene
void UpdateScene()
{
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int index = (y * width) + x;
            float depthAdjust = 0.1f; //scale the raw depth values down
            Vector3 new_pos = new Vector3(points[index].x, points[index].y,
                                          _Data[index] * depthAdjust);
            points[index] = new_pos;
        }
    }
}
Kinect API can be found here:
https://msdn.microsoft.com/en-us/library/windowspreview.kinect.depthframe.aspx
Would appreciate any advice, thanks!
With thanks to Edward Zhang, I figured out what I was doing wrong.
It's down to me not projecting my depth points correctly: I needed to use the CoordinateMapper to map my DepthFrame into CameraSpace.
Currently, my code assumes an orthographic depth projection instead of the depth camera's perspective projection. I just needed to implement this:
https://msdn.microsoft.com/en-us/library/windowspreview.kinect.coordinatemapper.aspx
//holds the depth points mapped into 3D camera space
CameraSpacePoint[] _CameraSpace;

//called once per frame
void Update()
{
    if (_Reader != null)
    {
        var dep_frame = _Reader.AcquireLatestFrame();
        dep_frame.CopyFrameDataToArray(_Data);
        dep_frame.Dispose();
        dep_frame = null;
        //map the raw depth values into camera space (perspective projection)
        _CameraSpace = new CameraSpacePoint[_Data.Length];
        _Mapper.MapDepthFrameToCameraSpace(_Data, _CameraSpace);
        UpdateScene();
    }
}
//update point cloud in scene
void UpdateScene()
{
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int index = (y * width) + x;
            Vector3 new_pos = new Vector3(_CameraSpace[index].X,
                                          _CameraSpace[index].Y,
                                          _CameraSpace[index].Z);
            points[index] = new_pos;
        }
    }
}
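As a hedged follow-up (the method and the separate apply step are mine, not from the original post), pushing the remapped points back into the point-cloud mesh each frame would look something like this:

//hypothetical sketch: write the updated vertices back to the point-cloud mesh
void ApplyToMesh(Mesh mesh, Vector3[] points)
{
    mesh.vertices = points;
    mesh.RecalculateBounds(); //keep the renderer's culling bounds in sync
}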

Orientations Offset

I am using the MVN-Unity plug-in to access Xsens motion capture data and animate a Unity character in real time. The character in white, green, and yellow is a Unity skeleton that is animated based on the Xsens motion data.
I am now trying to animate a different (non-Unity) character using Xsens (the other character, the one that looks like a human), so, similar to what the plug-in does, the motion data (positions & orientations) are mapped to his joints/bones.
But as you can see below, something is wrong with orientations...
I think the reason might be that the rotations from MVN are not properly offset. As you can see in the next two pictures, the MVN hips have the x-axis (red) pointing to the puppet's backside, whereas for the guy's hips the x-axis points to his right.
It might also be that the plug-in is using global rotations somewhere it should use local rotations. That this must be the case can be demonstrated by rotating the guy around before starting the Unity app; i.e. select the guy's root game object, set its y-rotation to 0/90/180/270 before pressing play, and compare the results: every time the distortions are different.
I don't know how to properly fix this. The code snippet that updates the Unity model (mapped to the MVN puppet or the guy) is as follows. I took this from the plug-in scripts:
private void updateModel(Transform[] pose, Transform[] model)
{
    // Reset the target, then set it up based on the segments.
    Vector3 pelvisPos = new Vector3();
    Vector3 lastPos = target.position;
    target.position = Vector3.zero;

    // Map only 23 joints.
    for (int i = 0; i < 23; i++)
    {
        switch (i)
        {
            // Position only on the y axis, and leave x and z to the body.
            // Apply the 'global' position & orientation to the pelvis.
            case (int)XsAnimationSegment.Pelvis:
                pelvisPos = pose[i].position * scale;
                model[i].position = new Vector3(model[i].position.x, pelvisPos.y, model[i].position.z);
                model[i].rotation = pose[i].rotation * modelRotTP[i];
                break;

            // Update only the 'orientation' for the rest of the segments.
            default:
                if (model[i] != null)
                {
                    model[i].rotation = pose[i].rotation * modelRotTP[i];
                }
                break;
        }
    }

    // Apply root motion if the flag is enabled; i.e. true.
    if (applyRootMotion)
    {
        // Only update x and z, since the pelvis already modified y above.
        target.position = new Vector3(pelvisPos.x + pelvisPosTP.x, lastPos.y, pelvisPos.z + pelvisPosTP.z);
    }

    // Set the final rotation of the full body, but only position it to face similar to the pelvis.
    Quaternion q = Quaternion.Inverse(modelRotTP[(int)XsAnimationSegment.Pelvis]) * model[(int)XsAnimationSegment.Pelvis].rotation;
    target.rotation = new Quaternion(target.rotation.x, q.y, target.rotation.z, target.rotation.w);
}
I sort of understand what the code does, but I don't know how to fix this problem. Most probably it has to do with the axes being different? I would appreciate any help...
You can modify the XsLiveAnimator.cs script at line 496, replacing it with this:
model[segmentOrder[i]].transform.rotation = orientations[segmentOrder[i]];
model[segmentOrder[i]].transform.Rotate(rotOffset, Space.World);
Here, rotOffset is a Vector3 containing your rotation offset as Euler angles (degrees), applied in world space.
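If you need to derive per-joint offsets rather than hand-tune them, one possible approach (a sketch of mine, not part of the plug-in) is to record, with both rigs in a matching T-pose, the rotation that carries each incoming pose rotation onto the model's bind rotation. This mirrors how the modelRotTP array is consumed in updateModel above:

// Hypothetical calibration pass: afterwards, pose[i].rotation * modelRotTP[i]
// reproduces model[i].rotation exactly in the T-pose, so the per-joint
// offsets stay valid for every subsequent frame.
void CalibrateTPose(Transform[] pose, Transform[] model, Quaternion[] modelRotTP)
{
    for (int i = 0; i < 23; i++)
    {
        if (model[i] != null)
            modelRotTP[i] = Quaternion.Inverse(pose[i].rotation) * model[i].rotation;
    }
}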

Cocos2d / Box2d CCRibbon Collision Detection

I'm developing a game on iOS with cocos2d+box2d as the game engine. I'm trying to add a CCRibbon whose points get populated by touches (that part I know how to do), and to get that CCRibbon's shape hooked up to Box2D, so that when an object collides with it (due to gravity), it bounces off as it would off any normal body. Would anyone happen to know how to do this, or can you suggest alternatives?
Many thanks,
Alexandre Cassagne
Take each pair of consecutive points and create a thin static rectangular Box2D polygon from them, offsetting the points slightly so the segment has enough width to act as a shape.
for (int i = 0; i < ccribbon.points.length - 1; i++)
{
    int j = i + 1;
    int width = 2;
    Array ar = [];
    // Order the four corners so the quad doesn't self-intersect.
    ar[0] = new b2Vec2(ccribbon.points[i].x, ccribbon.points[i].y);
    ar[1] = new b2Vec2(ccribbon.points[j].x, ccribbon.points[j].y);
    ar[2] = new b2Vec2(ccribbon.points[j].x + width, ccribbon.points[j].y + width);
    ar[3] = new b2Vec2(ccribbon.points[i].x + width, ccribbon.points[i].y + width);
    // Create a new static body for this segment.
    b2Polygon b2p = new b2Polygon();
    b2p.setAsArray(ar);
    // Do the rest to add it to the world, etc.
}
Of course, don't copy that code exactly; it's just from what I remember, and I'm sure it's a combination of C# and ActionScript 3. It's not-so-pseudo code with lots of blanks you'll need to fill in, which is why the comments are there :P
That's basically how I would do it, though. My experience is only with Box2D for Flash.
Have you read this? http://www.raywenderlich.com/606/how-to-use-box2d-for-just-collision-detection-with-cocos2d-iphone