I'm using 2D.
I want to get all objects on Layer X within a specific radius.
transform.position -> (0.0, 0.0, 0.0)
viewRadius -> the distance the player can see. This is working fine.
targetMask -> Layer X. All GameObjects that I want are on this layer.
Collider2D[] targetInViewRadius = Physics2D.OverlapCircleAll(transform.position, viewRadius, targetMask);
If you just want the GameObjects, you can do:
foreach (var item in targetInViewRadius)
{
    // each entry is a Collider2D; its GameObject is available via item.gameObject
    var obj = item.gameObject;
}
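If it helps, here is a minimal self-contained sketch of the same idea that collects the results into a list. The class name, field values, and method name are assumptions, not part of the original question:

using System.Collections.Generic;
using UnityEngine;

public class TargetScanner : MonoBehaviour
{
    public float viewRadius = 5f;
    public LayerMask targetMask;   // assign "Layer X" in the Inspector

    // returns every GameObject on targetMask within viewRadius of this object
    public List<GameObject> GetTargetsInRadius()
    {
        var results = new List<GameObject>();
        Collider2D[] hits = Physics2D.OverlapCircleAll(transform.position, viewRadius, targetMask);
        foreach (var hit in hits)
        {
            results.Add(hit.gameObject);
        }
        return results;
    }
}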
I am attempting to instantiate differently shaped prefabs next to each other. I have a folder full of differently sized objects that let me string them together randomly to create a 2D endless runner. My issue is that the answers online about placing objects beside each other don't work when differently sized objects are used. Here is the code I am using to instantiate the sections. It works when it only randomly selects similarly sized objects; when the selected prefab is a different length, it leaves either a gap between, or overlaps, the previous object.
for (int i = 0; i < activeSectionsAtOnce; i++)
{
    int index = Random.Range(0, sectionPrefabs.Length);
    GameObject currentOb = sectionPrefabs[index];
    Vector2 position = new Vector2();
    if (activeSections.Count == 0)
    {
        // set a (0, 0) position for first object
        position = Vector2.zero;
    }
    else
    {
        // get most recently placed object from list
        GameObject lastOb = activeSections[activeSections.Count - 1];
        // my attempt at getting bounds for the last object and for the next prefab that will be used
        Bounds lasObbounds = new Bounds(lastOb.transform.position, Vector2.zero);
        Bounds currentBounds = new Bounds(currentOb.transform.position, Vector2.zero);
        // include child objects (all have sprite renderers)
        foreach (Renderer r in lastOb.GetComponentsInChildren<Renderer>())
        {
            lasObbounds.Encapsulate(r.bounds);
        }
        foreach (Renderer rend in currentOb.GetComponentsInChildren<Renderer>())
        {
            currentBounds.Encapsulate(rend.bounds);
        }
        // position object will be instantiated at
        position.x = lastOb.transform.position.x + (lasObbounds.size.x / 2) + (currentBounds.size.x / 2);
    }
    // instantiate at selected position
    activeSections.Add(
        Instantiate(currentOb, position, Quaternion.identity)
    );
}
This is in the Start method. I'm not sure if the problem is how I'm creating the bounds or how I'm mathematically calculating the x coordinate. This is the expected result that happens when I use same-sized prefabs. This is what happens with different prefabs.
The shorter protrusion from the first object should be cleanly between the other two without a gap.
Here's an image of one prefab
Thanks for any advice
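The original post doesn't say where each prefab's pivot sits, but the symptom (gaps or overlaps only with differently sized prefabs) is consistent with pivots that are not centered in the renderer bounds. Below is a hedged sketch of placing each section by its bounds edges instead of its pivot; the class, helper, and method names are assumptions, not part of the question:

using UnityEngine;

public class SectionPlacer : MonoBehaviour
{
    // hypothetical helper: combined renderer bounds of an instance and all of its children
    static Bounds GetWorldBounds(GameObject go)
    {
        var bounds = new Bounds(go.transform.position, Vector3.zero);
        foreach (Renderer r in go.GetComponentsInChildren<Renderer>())
        {
            bounds.Encapsulate(r.bounds);
        }
        return bounds;
    }

    // place a new section so its left bounds edge touches the last section's right bounds edge
    public GameObject PlaceNextSection(GameObject lastOb, GameObject sectionPrefab)
    {
        Bounds lastBounds = GetWorldBounds(lastOb);

        // instantiate first, then measure, so the bounds reflect the real instance
        GameObject placed = Instantiate(sectionPrefab, Vector2.zero, Quaternion.identity);
        Bounds placedBounds = GetWorldBounds(placed);

        // distance from the instance's pivot to the left edge of its own bounds
        float pivotToLeftEdge = placed.transform.position.x - placedBounds.min.x;

        float newX = lastBounds.max.x + pivotToLeftEdge;
        placed.transform.position = new Vector2(newX, lastOb.transform.position.y);
        return placed;
    }
}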
I'm working on a 2D game in which some objects have a triangular vision cone, implemented with a polygon collider.
Is it possible to paint this collider so that it is visible during gameplay?
Assuming you mean PolygonCollider2D and that it always has 3 points in the correct order, I guess you could do something like
using System.Linq;
...
[RequireComponent(typeof(MeshRenderer), typeof(MeshFilter), typeof(PolygonCollider2D))]
public class MeshCreator : MonoBehaviour
{
    void Awake()
    {
        var points = GetComponent<PolygonCollider2D>().points;
        var meshFilter = GetComponent<MeshFilter>();
        var mesh = new Mesh();

        // Just a shorthand for something like
        //var list = new List<Vector3>();
        //foreach(var p in points)
        //{
        //    list.Add(p);
        //}
        //mesh.vertices = list.ToArray();
        mesh.vertices = points.Select(p => (Vector3)p).ToArray();

        // Here create two triangles (each 3 vertices)
        // just because I don't know if you have the points in the correct order
        mesh.triangles = new int[] { 0, 1, 2, 0, 2, 1 };

        meshFilter.mesh = mesh;
    }
}
and put this component next to the PolygonCollider2D. It will then replace this object's mesh with the triangle.
You can already set the material beforehand in the MeshRenderer.
For getting this overlap area, the simplest solution would be semi-transparent materials. Otherwise you might need a special shader.
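As a hedged follow-up (not part of the original answer), a semi-transparent material could also be created and assigned at the end of the Awake method above. The built-in "Sprites/Default" shader supports tint alpha; the color values here are assumptions:

// sketch: assign a translucent material so overlapping vision cones stay visible
var meshRenderer = GetComponent<MeshRenderer>();
var material = new Material(Shader.Find("Sprites/Default"));
material.color = new Color(1f, 1f, 0f, 0.3f);   // translucent yellow; pick whatever fits
meshRenderer.material = material;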
I'm currently trying to implement a location-based AR app for Android using Unity, C#, and ARCore. I have managed to implement an algorithm which gets latitude/longitude and translates it to Unity meters. However, this is not enough to place objects at a GPS location, since the orientation of the world will be off if you do not align the Unity scene with the real-world orientation.
Anyway, the problem with the compass is that I cannot find a way to align an object with true north.
I have tried aligning an object with true north by using
transform.rotation = Quaternion.Euler(0, -Input.compass.trueHeading, 0);
However, this causes the object to constantly change rotation based on my phone's heading, so in a way true north is always changing.
What I am currently trying is:
var xrot = Mathf.Atan2(Input.acceleration.z, Input.acceleration.y);
var yzmag = Mathf.Sqrt(Mathf.Pow(Input.acceleration.y, 2) + Mathf.Pow(Input.acceleration.z, 2));
var zrot = Mathf.Atan2(Input.acceleration.x, yzmag);
xangle = xrot * (180 / Mathf.PI) + 90;
zangle = -zrot * (180 / Mathf.PI);
TheAllParent.transform.eulerAngles = new Vector3(xangle, 0, zangle - Input.compass.trueHeading);
This seems to work better: the object points in a single direction with minimal shaking/jittering despite erratic compass readings. However, the problem is that it points in a different direction every time the app starts, so true north is never in one place according to the code. The compass readings themselves are fine, as I have checked them against other GPS apps that include a compass.
I have also tried transform.rotation = Quaternion.Euler(0, -Input.compass.magneticHeading, 0); but this gives the same always-changing result as the first one. Any help would be appreciated.
This is how I solved this issue in my project.
I hope it helps you.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class newway : MonoBehaviour
{
    GameObject cam;

    void Start()
    {
        cam = Camera.main.gameObject;
        // start location services and enable the compass so heading values are valid
        Input.location.Start();
        Input.compass.enabled = true;
    }

    void Update()
    {
        // the camera's current yaw around the world up axis
        Quaternion cameraRotation = Quaternion.Euler(0, cam.transform.rotation.eulerAngles.y, 0);
        // the compass heading, negated so it can cancel out the device's yaw
        Quaternion compass = Quaternion.Euler(0, -Input.compass.magneticHeading, 0);
        // combine both so this object keeps pointing toward (magnetic) north
        Quaternion north = Quaternion.Euler(0, cameraRotation.eulerAngles.y + compass.eulerAngles.y, 0);
        transform.rotation = north;
    }
}
Possibly your GameObject is rotating around its own axes, but you need it to pivot around the camera, or around the center.
If that's the case, try creating another empty GameObject and set its position to (0, 0, 0), or wherever you wish the center to be. Then add the original GameObject as a child of this new one.
Apply your transform.rotation script to this new GameObject.
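A minimal sketch of that re-parenting idea, assuming a reference called myObject and a pivot placed at the origin (both names are assumptions, not from the original answer):

// create an empty pivot at the desired center and parent the object to it,
// then rotate the pivot instead of the object itself
GameObject pivot = new GameObject("CompassPivot");
pivot.transform.position = Vector3.zero;                 // or the camera's position
myObject.transform.SetParent(pivot.transform, true);     // true = keep the object's world position
pivot.transform.rotation = Quaternion.Euler(0, -Input.compass.trueHeading, 0);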
I have just started getting used to using Unity's new tilemap tool (UnityEngine.Tilemaps).
One issue I am having is that I don't know how to get the x, y coordinates of a placed tile via script. I am trying to move a ScriptableObject on the tilemap to a new location that the player clicks, but I don't know how to get the coordinates of the clicked tile. There doesn't seem to be any position property on the Tile class (the Tile knows nothing about its location), so the Tilemap must have the answer. I was not able to find anything in the Unity documentation about getting the Vector3 coordinates of a selected tile in the Tilemap.
If you have access to the Tile instance, you can use its transform (or the raycast hit from the click) to get its world position and then get the tile coordinates via the WorldToCell method of your Grid component (see the documentation).
EDIT:
Unity does not seem to instantiate the tiles; instead it uses a single tile object to manage all tiles of that type. I wasn't aware of that.
To get the correct position you have to calculate it yourself. Here is an example of how to get the tile position at the mouse cursor, assuming the grid lies on the xy-plane at z = 0:
// get the grid via GetComponent or by saving it as a public field
Grid grid;

// use a public camera field if you are not using the main camera
Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);

// get the collision point of the ray with the z = 0 plane:
// origin.z + t * direction.z = 0  =>  t = -origin.z / direction.z
Vector3 worldPoint = ray.GetPoint(-ray.origin.z / ray.direction.z);

Vector3Int position = grid.WorldToCell(worldPoint);
I couldn't find a way to get the Grid's position from a mouse click, so instead I took the mouse click as a world-space Vector3 and then converted that to cell coordinates via the WorldToCell method of the Grid component, as per Shirotha's suggestion. This allows me to move the selected GameObject to a new position.
public class ClickableTile : MonoBehaviour
{
    public NormalTile normalTile;
    public Player selectedUnit;

    private void OnMouseUp()
    {
        // left click - get info from selected tile
        if (Input.GetMouseButtonUp(0))
        {
            // get mouse click's position in 2d plane
            Vector3 pz = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            pz.z = 0;

            // convert mouse click's position to Grid position
            GridLayout gridLayout = transform.parent.GetComponentInParent<GridLayout>();
            Vector3Int cellPosition = gridLayout.WorldToCell(pz);

            // set selectedUnit to clicked location on grid
            selectedUnit.setLocation(cellPosition);
            Debug.Log(cellPosition);
        }
    }
}
As an aside, I now know how to get the grid location, but not how to query it. I need to get the GridSelection static object from Grid and then get its position.
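As a hedged aside (not from the original thread): one way to query what is stored at a cell is Tilemap.GetTile with the cell position returned by WorldToCell. The class and field names below are assumptions:

using UnityEngine;
using UnityEngine.Tilemaps;

public class TileQuery : MonoBehaviour
{
    public Tilemap tilemap;   // assign the Tilemap in the Inspector (assumed field)

    // returns the tile asset stored at the given cell, or null if the cell is empty
    public TileBase GetTileAt(Vector3Int cellPosition)
    {
        return tilemap.GetTile(cellPosition);
    }
}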
I have to use the "GUI.DrawTexture(new Rect..." function to draw a rectangle on my screen (actually, it's a gradient GUI bar I got from https://www.assetstore.unity3d.com/en/#!/content/19972, but deep inside it's just a rectangle). I want to draw it right beside a GameObject I have on my screen.
The problem is: it seems like a rectangle's coordinates are not the same as a GameObject's coordinates (I have searched online and I guess that's true). I have tried the function below to convert a GameObject's position to rectangle coordinates, but still no luck:
public Vector2 WorldToGuiPoint(Vector2 position)
{
    var guiPosition = Camera.main.WorldToScreenPoint(position);
    guiPosition.y = Screen.currentResolution.height - guiPosition.y;
    return guiPosition;
}
My best attempt was when I did this:
Vector3 gameObjectsPosition = Camera.main.WorldToScreenPoint(gameObject.transform.position);
And then when I wanted to draw the rectangle I did:
GUI.DrawTexture(new Rect(gameObjectsPosition.x, Screen.height - gameObjectsPosition.y, Background.width * ScaleSize, Background.height * ScaleSize), Background);
But still, the Rect isn't at the exact GameObject's position. Its x is right but its y isn't, and what I found online said I only had to do "Screen.height - gameObjectsPosition.y" when drawing the Rect. That didn't work for me; the y is still wrong.
What should I do to create a rectangle that sits right beside a given GameObject on screen (for example, if the GameObject's position is x = -401 and y = -80, I want the rectangle at y = -80 and x = -300)?
You need to use Screen.height instead of Screen.currentResolution.height:
using UnityEngine;
using System.Collections;

public class GuiPosition : MonoBehaviour
{
    public Vector2 WorldToGuiPoint(Vector3 GOposition)
    {
        var guiPosition = Camera.main.WorldToScreenPoint(GOposition);
        // Y axis coordinate on screen is reversed relative to world Y coordinate
        guiPosition.y = Screen.height - guiPosition.y;
        return guiPosition;
    }

    void OnGUI()
    {
        var guiPosition = WorldToGuiPoint(gameObject.transform.position);
        var rect = new Rect(guiPosition, new Vector2(200, 70));
        GUI.Label(rect, "TEST");
    }
}
I've just tested attaching that MonoBehaviour to a Cube GameObject and it works
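Since the original question was about GUI.DrawTexture rather than GUI.Label, the same converted point can feed a DrawTexture call as well. This is only a sketch: the Background texture, the ScaleSize factor, and the 20-pixel offset are assumptions, and the OnGUI body below would replace the one above:

public Texture Background;          // assumed: the gradient bar texture from the question
public float ScaleSize = 1f;        // assumed scale factor from the question

void OnGUI()
{
    var guiPosition = WorldToGuiPoint(gameObject.transform.position);
    // draw the texture a fixed number of pixels to the right of the object
    var rect = new Rect(guiPosition.x + 20f, guiPosition.y,
                        Background.width * ScaleSize, Background.height * ScaleSize);
    GUI.DrawTexture(rect, Background);
}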