MeshRenderer has wrong bounds when rotated - unity3d

When I try to get the bounds of my models (created in Blender) and show them in the Inspector:
As you can see, the bounds are correct when the objects are not rotated. But when they are (left-most object), the bounds start getting totally wrong.
Here is a script that shows / gets the bounds:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class GetBounds : MonoBehaviour
{
public MeshRenderer mesh_renderer = null;
public bool show_bounds = false;
private void OnDrawGizmos()
{
if (!show_bounds) return;
Gizmos.DrawWireCube(mesh_renderer.bounds.center, mesh_renderer.bounds.size);
Gizmos.DrawWireSphere(mesh_renderer.bounds.center, 0.3f);
}
}
How can I fix this?

In this thread I came across an image which explains it pretty well:
Unity does not recalculate the Mesh.bounds all the time, only when you add a mesh for the first time or "manually" invoke Mesh.RecalculateBounds.
It then uses this local-space Mesh.bounds to calculate the translated, scaled and rotated Renderer.bounds in global space. This way it only ever has to iterate the fixed 8 corner vertices of the local bounding box.
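For illustration, here is a minimal sketch (an assumed reconstruction, not Unity's actual implementation) of how such a world-space AABB can be derived from only the 8 corners of the local Mesh.bounds; transforming the rotated corners is exactly what inflates the box:
Bounds GetWorldBounds(MeshFilter filter)
{
    var local = filter.sharedMesh.bounds;
    var t = filter.transform;
    var worldBounds = new Bounds(t.TransformPoint(local.center), Vector3.zero);
    // visit all 8 corners: every +/- combination of the extents
    for (var x = -1; x <= 1; x += 2)
    for (var y = -1; y <= 1; y += 2)
    for (var z = -1; z <= 1; z += 2)
    {
        var corner = local.center + Vector3.Scale(local.extents, new Vector3(x, y, z));
        worldBounds.Encapsulate(t.TransformPoint(corner));
    }
    return worldBounds;
}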
There was also a solution provided if you want to get the exact bounds calculated directly from the vertices. I adapted it and cleaned it up a bit:
public class GetBounds : MonoBehaviour
{
public MeshRenderer mesh_renderer;
public bool show_bounds;
public MeshFilter meshFilter;
public Mesh mesh;
private void OnDrawGizmos()
{
if (!mesh_renderer) return;
if (!show_bounds) return;
if (!meshFilter) meshFilter = mesh_renderer.GetComponent<MeshFilter>();
if (!meshFilter) return;
if (!mesh) mesh = meshFilter.sharedMesh; // sharedMesh avoids instantiating a mesh copy in edit mode
if (!mesh) return;
var vertices = mesh.vertices;
if (vertices.Length <= 0) return;
// TransformPoint converts a local mesh vertex, taking the transform's
// position, scale and rotation into account, into a global (world-space) position
var min = transform.TransformPoint(vertices[0]);
var max = min;
// Iterate through all vertices except the first one
for (var i = 1; i < vertices.Length; i++)
{
var v = transform.TransformPoint(vertices[i]);
// Vector3.Max and Vector3.Min already compare X, Y and Z component-wise,
// so no extra per-component loop is needed
max = Vector3.Max(v, max);
min = Vector3.Min(v, min);
}
var bounds = new Bounds();
bounds.SetMinMax(min, max);
// Just to compare it to the original bounds
Gizmos.DrawWireCube(mesh_renderer.bounds.center, mesh_renderer.bounds.size);
Gizmos.DrawWireSphere(mesh_renderer.bounds.center, 0.3f);
Gizmos.color = Color.green;
Gizmos.DrawWireCube(bounds.center, bounds.size);
Gizmos.DrawWireSphere(bounds.center, 0.3f);
}
}
Result:
In WHITE: The MeshRenderer.bounds
In GREEN: The "correct" calculated vertex bounds

Related

Unity - How to set the color of an individual face when clicking a mesh?

Yesterday others on Stack Overflow helped me determine how to recolor a mesh triangle red by clicking on it. It works great; the only problem is that the 3 vertices that get recolored are shared between triangles, which results in coloration that looks rather smeared. I'm really hoping there's a way to color only a single face (or normal if you will).
I've attached the following script to my mesh that uses a raycast to determine the surface coordinate and translate a green cube there. The gif below will better illustrate this problem.
Once again, any help or insight into this would be greatly appreciated. Thanks!
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class MyRayDraw : MonoBehaviour
{
public GameObject cube;
private MeshRenderer meshRenderer;
Mesh mesh;
Vector3[] vertices;
Color[] colorArray;
private void Start()
{
mesh = transform.GetComponent<MeshFilter>().mesh;
vertices = mesh.vertices;
colorArray = new Color[vertices.Length];
for (int k = 0; k < vertices.Length; k++)
{
colorArray[k] = Color.white;
}
mesh.colors = colorArray;
}
void Update()
{
if (Input.GetMouseButtonDown(0))
{
Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
if (Physics.Raycast(ray, out RaycastHit hit))
{
Snap(hit.point); // Moves the green cube
int[] triangles = mesh.triangles;
var vertIndex1 = triangles[hit.triangleIndex * 3 + 0];
var vertIndex2 = triangles[hit.triangleIndex * 3 + 1];
var vertIndex3 = triangles[hit.triangleIndex * 3 + 2];
colorArray[vertIndex1] = Color.red;
colorArray[vertIndex2] = Color.red;
colorArray[vertIndex3] = Color.red;
mesh.colors = colorArray;
}
else
{
Debug.Log("no hit");
}
}
}
}
As you say, the issue is that the vertices are shared between triangles, while coloring is always vertex-based.
The idea for a solution is:
for each vertex of the hit triangle check if it is used by other triangles
if so copy its position to create a new separated vertex
update the triangle to use the newly created vertex indices
(optionally) use RecalculateNormals to make the triangles face outwards without having to care about the order of the provided vertices
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
public class MyRayDraw : MonoBehaviour
{
public GameObject cube;
// Better to reference those already in the Inspector
[SerializeField] private MeshFilter meshFilter;
[SerializeField] private MeshRenderer meshRenderer;
[SerializeField] private MeshCollider meshCollider;
private Mesh _mesh;
private void Awake()
{
if (!meshFilter) meshFilter = GetComponent<MeshFilter>();
if (!meshRenderer) meshRenderer = GetComponent<MeshRenderer>();
if (!meshCollider) meshCollider = GetComponent<MeshCollider>();
_mesh = meshFilter.mesh;
// create a new colors array and initialize all vertex colors to white
var colors = new Color[_mesh.vertices.Length];
for (var k = 0; k < colors.Length; k++)
{
colors[k] = Color.white;
}
_mesh.colors = colors;
}
private void Update()
{
if (!Input.GetMouseButtonDown(0)) return;
var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
if (Physics.Raycast(ray, out var hit))
{
Debug.Log(hit.triangleIndex);
//cube.transform.position = hit.point;
// Get current vertices, triangles and colors
var vertices = _mesh.vertices;
var triangles = _mesh.triangles;
var colors = _mesh.colors;
// Get the vert indices for this triangle
var vert1Index = triangles[hit.triangleIndex * 3 + 0];
var vert2Index = triangles[hit.triangleIndex * 3 + 1];
var vert3Index = triangles[hit.triangleIndex * 3 + 2];
// Get the positions for the vertices
var vert1Pos = vertices[vert1Index];
var vert2Pos = vertices[vert2Index];
var vert3Pos = vertices[vert3Index];
// Now for all three vertices we first check whether any other triangle is using them
// by simply counting how often each index appears in the triangles list
var vert1Occurrences = 0;
var vert2Occurrences = 0;
var vert3Occurrences = 0;
foreach (var index in triangles)
{
if (index == vert1Index) vert1Occurrences++;
else if (index == vert2Index) vert2Occurrences++;
else if (index == vert3Index) vert3Occurrences++;
}
// Create copied Lists so we can dynamically add entries
var newVertices = vertices.ToList();
var newColors = colors.ToList();
// Now if a vertex is shared we need to add a new individual vertex
// and also an according entry for the color array
// and update the vertex index
// otherwise we will simply use the vertex we already have
if (vert1Occurrences > 1)
{
newVertices.Add(vert1Pos);
newColors.Add(new Color());
vert1Index = newVertices.Count - 1;
}
if (vert2Occurrences > 1)
{
newVertices.Add(vert2Pos);
newColors.Add(new Color());
vert2Index = newVertices.Count - 1;
}
if (vert3Occurrences > 1)
{
newVertices.Add(vert3Pos);
newColors.Add(new Color());
vert3Index = newVertices.Count - 1;
}
// Update the indices of the hit triangle to use the (possibly) new
// vertices instead
triangles[hit.triangleIndex * 3 + 0] = vert1Index;
triangles[hit.triangleIndex * 3 + 1] = vert2Index;
triangles[hit.triangleIndex * 3 + 2] = vert3Index;
// color these vertices
newColors[vert1Index] = Color.red;
newColors[vert2Index] = Color.red;
newColors[vert3Index] = Color.red;
// write everything back
_mesh.vertices = newVertices.ToArray();
_mesh.triangles = triangles;
_mesh.colors = newColors.ToArray();
_mesh.RecalculateNormals();
}
else
{
Debug.Log("no hit");
}
}
}
Note, however, that this works for simple coloring but might not for complex textures with UV mapping: you would also have to update the mesh.uv if you use UV-mapped textures.
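For example, a hypothetical extension of the snippet above (reusing its variable names) could keep the texture intact by also copying the duplicated vertex's UV entry:
var newUVs = _mesh.uv.ToList();
if (vert1Occurrences > 1)
{
    newVertices.Add(vert1Pos);
    newColors.Add(new Color());
    newUVs.Add(newUVs[vert1Index]); // copy the shared vertex's UV coordinate
    vert1Index = newVertices.Count - 1;
}
// ... same for vert2Index and vert3Index, then write it back:
_mesh.uv = newUVs.ToArray();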

how to limit and clamp distance between two points in a Line renderer unity2d

I am making a game which lets you click on a ball and drag to draw a LineRenderer with two points, pointing it in a specific direction; on release I add force to the ball.
For now, I just want to know how I can limit the distance between those two points, i.e. give it a radius.
You can simply clamp it using Mathf.Min.
Since you unfortunately didn't provide any example code, here is some example code I made up, using a simple plane with a MeshCollider, a child object with the LineRenderer, and a camera set to Orthographic. You would probably have to adapt it somewhat.
public class Example : MonoBehaviour
{
// adjust in the inspector
public float maxRadius = 2;
private Vector3 startPosition;
[SerializeField] private LineRenderer line;
[SerializeField] private Collider collider;
[SerializeField] private Camera camera;
private void Awake()
{
// only fetch the references if they weren't assigned in the Inspector,
// and fetch the line before using it
if (!line) line = GetComponentInChildren<LineRenderer>();
if (!collider) collider = GetComponent<Collider>();
if (!camera) camera = Camera.main;
line.positionCount = 0;
}
// wherever you dragging starts
private void OnMouseDown()
{
line.positionCount = 2;
startPosition = collider.ClosestPoint(camera.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, transform.position.z)));
var positions = new[] { startPosition, startPosition };
line.SetPositions(positions);
}
// while dragging
private void OnMouseDrag()
{
var currentPosition = collider.ClosestPoint(camera.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, transform.position.z)));
// get vector between positions
var difference = currentPosition - startPosition;
// normalize to only get a direction with magnitude = 1
var direction = difference.normalized;
// here you "clamp" use the smaller of either
// the max radius or the magnitude of the difference vector
var distance = Mathf.Min(maxRadius, difference.magnitude);
// and finally apply the end position
var endPosition = startPosition + direction * distance;
line.SetPosition(1, endPosition);
}
}
This is how it could look:
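As a side note, Unity's built-in Vector3.ClampMagnitude performs the same min-of-magnitude clamp in a single call, so the end of OnMouseDrag above could be shortened to:
// equivalent to the normalize + Mathf.Min combination above
var endPosition = startPosition + Vector3.ClampMagnitude(difference, maxRadius);
line.SetPosition(1, endPosition);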
I've written the following pseudo code, which may help you:
float range;
bool drag = true;
GameObject ball;
void OnMouseDrag()
{
    if (drag)
    {
        // Put your dragging code here
    }
    // stop dragging once the ball is out of range (assuming the distance
    // is measured from this object), resume when it is back inside
    drag = Vector3.Distance(ball.transform.position, transform.position) <= range;
}

How to resize all particles from a particle system?

I'm trying to dynamically resize particles using a slider, as well as change their colour.
Particles are used to display datapoints in a 3D scatterplot. I'm using this code: https://github.com/PrinzEugn/Scatterplot_Standalone
private ParticleSystem.Particle[] particlePoints;
void Update () {
pointScale = sizeSlider.value;
for (int i = 0; i < pointList.Count; i++) {
Quaternion quaternion = Camera.current.transform.rotation;
Vector3 angles = quaternion.eulerAngles;
// Set point color
particlePoints[i].startColor = new Color(angles.x, angles.y, angles.z, 1.0f);
particlePoints[i].transform.localScale = new Vector3(pointScale, pointScale, pointScale);
}
}
The issue is that there's no transform property for Particles, and changing the "startColor" doesn't change anything.
The API states that "The current size of the particle is calculated procedurally based on this value and the active size modules."
What does that mean, and how can I change the size of the particles?
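(For reference: it means the rendered size is derived from the particle's startSize combined with any active size modules, and per-particle changes only take effect once they are written back via SetParticles. A minimal sketch of that read-modify-write pattern, assuming a ParticleSystem reference ps and the slider from above:)
var particles = new ParticleSystem.Particle[ps.particleCount];
var count = ps.GetParticles(particles);
for (var i = 0; i < count; i++)
{
    particles[i].startSize = sizeSlider.value; // base size per particle
}
ps.SetParticles(particles, count); // write the modified copies back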
Thanks to previous answers I managed to get this working:
In the PlacePrefabPoints method I add every instantiated prefab to a List, and I add a listener to the slider, which looks like this:
void changedPointSize(){
pointScale = sizeSlider.value;
for (int i = 0; i < objects.Count; i++) {
objects[i].transform.localScale = new Vector3(pointScale, pointScale, pointScale);
}
}
Thanks all!
I just had a look at PointRenderer.cs; CreateParticles and PlacePrefabPoints give a good hint about what has to be changed.
So I guess you would simply change the size values
for (var i = 0; i < particlePoints.Length; i++)
{
    Quaternion quaternion = Camera.current.transform.rotation;
    Vector3 angles = quaternion.eulerAngles;
    // Set point color and size; a for loop is used here because
    // ParticleSystem.Particle is a struct, so a foreach variable would be
    // a read-only copy
    particlePoints[i].startColor = new Color(angles.x, angles.y, angles.z, 1.0f);
    particlePoints[i].startSize = sizeSlider.value;
}
and then re-call
GetComponent<ParticleSystem>().SetParticles(particlePoints, particlePoints.Length);
It is questionable, though, whether you would really do this in Update. I would rather do it in sizeSlider.onValueChanged in order to only do it when necessary (you could even require a certain threshold of change before updating the view). For the color there might be no other option than doing it in Update, but at least there I would use a threshold:
private ParticleSystem ps;
// I assume you have that referenced in the Inspector
public Slider sizeSlider;
// the particles copied out of the system (filled elsewhere, e.g. in CreateParticles)
private ParticleSystem.Particle[] particlePoints;
// flag to control whether the system should be updated
private bool updateSystem;
private void Awake()
{
ps = GetComponent<ParticleSystem>();
}
private void OnEnable()
{
// add a listener to onValueChanged
// it is safe to remove it first (even if it was not added yet);
// this makes sure it is not added twice
sizeSlider.onValueChanged.RemoveListener(OnSliderChanged);
sizeSlider.onValueChanged.AddListener(OnSliderChanged);
}
private void OnDisable()
{
// cleanup listener
sizeSlider.onValueChanged.RemoveListener(OnSliderChanged);
}
private void OnSliderChanged(float value)
{
// onValueChanged passes the new slider value as a float parameter
for (var i = 0; i < particlePoints.Length; i++)
{
particlePoints[i].startSize = value;
}
// do the same also for the instantiated prefabs
foreach (Transform child in PointHolder.transform)
{
child.localScale = Vector3.one * value;
}
updateSystem = true;
}
private Quaternion lastCameraRot;
public float CameraUpdateThreshold;
private void Update()
{
if(Quaternion.Angle(Camera.current.transform.rotation, lastCameraRot) > CameraUpdateThreshold)
{
for (var i = 0; i < particlePoints.Length; i++)
{
Quaternion quaternion = Camera.current.transform.rotation;
Vector3 angles = quaternion.eulerAngles;
// Set point color (again via a for loop, since Particle is a struct)
particlePoints[i].startColor = new Color(angles.x, angles.y, angles.z, 1.0f);
}
lastCameraRot = Camera.current.transform.rotation;
updateSystem = true;
}
if(!updateSystem) return;
updateSystem = false;
ps.SetParticles(particlePoints, particlePoints.Length);
}

Generate mesh from one-color texture

I made a script that lets the user draw, and that generates a sprite of the drawing. So I get a sprite with a white background and my drawing (which is in a different color).
My question: how can I remove the white background at runtime (with C# code)?
My problem: I want to generate a mesh from the drawing, but with the white background I only get 4 vertices (the four corners of the sprite), while I want all the vertices of the actual shape I drew on the sprite (so many more than 4 vertices).
My current idea is to convert the drawing to a transparent background and then use Unity's sprite packer to generate a mesh from that.
My project: it's a game where users can create their own game circuit: the user draws a black and white sprite -> I convert it to a mesh with a collider and generate the new game circuit.
I already thought of clearing all the white pixels, but I don't think I will get many vertices with that technique.
Thanks for help,
Axel
using System.IO;
using UnityEngine.UI;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEditor;
using UnityEngine.Networking;
public class scri : MonoBehaviour
{
// For saving the mesh------------------------
public KeyCode saveKey = KeyCode.F12;
public string saveName = "SavedMesh";
// Concerning mesher--------------------------
public GameObject mesher; //require
public List<Vector3> vertices;
public List<int> triangles;
public Vector3 point0;
public Vector3 point1;
public Vector3 point2;
public Vector3 point3;
public int loop;
public float size;
public Mesh meshFilterMesh;
public Mesh meshColliderMesh;
// Sprite work
public Color[] pixels;
public Texture2D newTexture;
public Texture2D oldTexture; //require
private Sprite mySprite;
private SpriteRenderer spriteRenderer;
public int pathCount;
public GameObject displayerComponent; //require
public PolygonCollider2D polygonColliderAdded; //require
void Start()
{
// Mesher
vertices = new List<Vector3> ();
triangles = new List<int> ();
meshFilterMesh= mesher.GetComponent<MeshFilter>().mesh;
meshColliderMesh= mesher.GetComponent<MeshCollider>().sharedMesh;
size = 10; // length of the mesh in the Z direction
loop=0;
// Sprite
pixels = oldTexture.GetPixels();
newTexture =new Texture2D(oldTexture.width,oldTexture.height,TextureFormat.ARGB32, false);
spriteRenderer = gameObject.AddComponent<SpriteRenderer>();
ConvertSpriteAndCreateCollider (pixels);
BrowseColliderToCreateMesh (polygonColliderAdded);
}
void Update()
{
// Save if F12 press
if (Input.GetKeyDown(saveKey)){SaveAsset();}
}
public void ConvertSpriteAndCreateCollider (Color[] pixels) {
for (int i = 0 ; i < pixels.Length ; i++ )
{
// clear all black pixels (black is the circuit, white is the walls)
if ((pixels[i].r==0 && pixels[i].g==0 && pixels[i].b==0 && pixels[i].a==1)) {
pixels[i] = Color.clear;
}
}
// Set a new texture with this pixel list
newTexture.SetPixels(pixels);
newTexture.Apply();
// Create a sprite from this texture
mySprite = Sprite.Create(newTexture, new Rect(0, 0, newTexture.width, newTexture.height), new Vector2(10.0f,10.0f), 10.0f, 0, SpriteMeshType.Tight,new Vector4(0,0,0,0),false);
// Add it to our displayerComponent
displayerComponent.GetComponent<SpriteRenderer>().sprite=mySprite;
// Add the polygon collider to our displayer Component and get his path count
polygonColliderAdded = displayerComponent.AddComponent<PolygonCollider2D>();
}
// Method to browse the collider and launch makemesh
public void BrowseColliderToCreateMesh (PolygonCollider2D polygonColliderAdded){
//browse all path from collider
pathCount=polygonColliderAdded.pathCount;
for (int i = 0; i < pathCount; i++)
{
Vector2[] path = polygonColliderAdded.GetPath(i);
// browse all path point
for (int j = 1; j < path.Length; j++)
{
if (j != (path.Length - 1)) // if we aren't at the last point
{
point0 = new Vector3(path[j-1].x ,path[j-1].y ,0);
point1 = new Vector3(path[j-1].x ,path[j-1].y ,size);
point2 = new Vector3(path[j].x ,path[j].y ,size);
point3 = new Vector3(path[j].x ,path[j].y ,0);
MakeMesh(point0,point1,point2,point3);
}
else if(j == (path.Length - 1))// if we are at the last point, we need to close the loop with the first point
{
point0 = new Vector3(path[j-1].x ,path[j-1].y ,0);
point1 = new Vector3(path[j-1].x ,path[j-1].y ,size);
point2 = new Vector3(path[j].x ,path[j].y ,size);
point3 = new Vector3(path[j].x ,path[j].y ,0);
MakeMesh(point0,point1,point2,point3);
point0 = new Vector3(path[j].x ,path[j].y ,0);
point1 = new Vector3(path[j].x ,path[j].y ,size);
point2 = new Vector3(path[0].x ,path[0].y ,size); // First point
point3 = new Vector3(path[0].x ,path[0].y ,0); // First point
MakeMesh(point0,point1,point2,point3);
}
}
}
}
//Method to generate 2 triangles mesh from the 4 points 0 1 2 3 and add it to the collider
public void MakeMesh (Vector3 point0,Vector3 point1,Vector3 point2, Vector3 point3){
// Vertice add
vertices.Add(point0);
vertices.Add(point1);
vertices.Add(point2);
vertices.Add(point3);
//Triangle order
triangles.Add(0+loop*4);
triangles.Add(2+loop*4);
triangles.Add(1+loop*4);
triangles.Add(0+loop*4);
triangles.Add(3+loop*4);
triangles.Add(2+loop*4);
loop = loop + 1;
// create mesh
meshFilterMesh.vertices=vertices.ToArray();
meshFilterMesh.triangles=triangles.ToArray();
// add this mesh to the MeshCollider
mesher.GetComponent<MeshCollider>().sharedMesh=meshFilterMesh;
}
// Save if F12 press
public void SaveAsset()
{
var mf = mesher.GetComponent<MeshFilter>();
if (mf)
{
var savePath = "Assets/" + saveName + ".asset";
Debug.Log("Saved Mesh to:" + savePath);
AssetDatabase.CreateAsset(mf.mesh, savePath);
}
}
}
One approach is to generate the mesh directly on your own terms. The pro of this is that you have very fine control over exactly what you want the pixel boundaries to look like, and you have better information to do your own triangulation of the mesh. The downside is that you have to do all of this yourself.
One way of implementing this is to use the Marching Squares algorithm to generate isobands from the pixel data (you can use the blue/green/alpha channel to get the isovalue, depending on whether the background is white or transparent), and then generate a piece of the mesh from each of the 2x2 pixel groups that contains a part of the isoband.
To get the pixel data from the image you can use Texture2D.GetPixels. Then you can use the marching squares algorithm on that information to determine how to represent every 2x2 cluster of pixels in the mesh. Then you would use that information to find the vertices of each triangle that represents that quad of pixels.
Once you convert each quad of pixels into triangles, arrange the vertices of those triangles into an array (make sure you order the vertices of each triangle in a clockwise direction from the visible side) and use Mesh.SetVertices to create a mesh with those vertices.
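To make that concrete, here is a minimal sketch of the per-cell classification step (assumptions: pixels comes from Texture2D.GetPixels(), the drawing is dark on a light background, the red channel serves as the isovalue, and 0.5f is an arbitrary threshold; all names are illustrative):
// Classify one 2x2 cell of pixels into one of the 16 marching squares cases.
// width is the texture width; (x, y) is the bottom-left pixel of the cell.
int CellCase(Color[] pixels, int width, int x, int y, float iso = 0.5f)
{
    var caseIndex = 0;
    if (pixels[y * width + x].r < iso) caseIndex |= 1;           // bottom-left inside
    if (pixels[y * width + x + 1].r < iso) caseIndex |= 2;       // bottom-right inside
    if (pixels[(y + 1) * width + x + 1].r < iso) caseIndex |= 4; // top-right inside
    if (pixels[(y + 1) * width + x].r < iso) caseIndex |= 8;     // top-left inside
    // 0 = cell fully outside the shape, 15 = fully inside; the other
    // 14 cases determine which vertices and triangles to emit for this cell
    return caseIndex;
}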
Another approach is to set the alpha of any non-red pixel to zero, and let Unity's sprite packer generate the mesh for you.
Here is one way to do that:
If it is an asset and you want to modify it, set the texture asset to have Read/Write enabled checked. If the texture is created at runtime (and is therefore not an asset) this step can be skipped.
Get the pixel data with Texture2D.GetPixels. This will get you an array of pixels in the form of Color[] pixels:
public Texture2D tex;
...
Color[] pixels = tex.GetPixels();
Iterate through each index and replace every pixel that is not pure red (such as the white pixels) with a clear pixel:
for (int i = 0 ; i < pixels.Length ; i++ )
{
if (
pixels[i].r != 1f
|| pixels[i].g != 0f
|| pixels[i].b != 0f)
pixels[i] = Color.clear;
}
Set the texture pixel data with the modified pixel array:
tex.SetPixels(pixels);
tex.Apply();
The downside to this approach is that I do not know whether you can use the Unity sprite packer to pack textures created at runtime onto the sprite atlas. If it cannot, then a different tool would be needed for this approach to generate meshes from sprites at runtime.
OK, I've made something:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class scri : MonoBehaviour
{
public Texture2D tex;
public Texture2D newText;
public Sprite sprite;
public List<Color> colorList;
private Sprite mySprite;
private SpriteRenderer sr;
// Start is called before the first frame update
void Start()
{
sr = gameObject.AddComponent<SpriteRenderer>() as SpriteRenderer;
newText =new Texture2D(tex.width,tex.height,TextureFormat.ARGB32, false);
Color[] pixels = sprite.texture.GetPixels();
for (int i = 0 ; i < pixels.Length ; i++ )
{
Debug.Log(pixels[i]);
if (pixels[i].r==1) {
pixels[i] = Color.clear;
}
}
newText.SetPixels(pixels);
newText.Apply();
mySprite = Sprite.Create(newText, new Rect(0.0f, 0.0f, newText.width, newText.height), new Vector2(0.5f, 0.5f), 100.0f);
sr.sprite = mySprite;
}
// Update is called once per frame
// void Update()
// {
// Debug.Log(sprite.triangles.Length);
// Debug.Log(sprite.vertices.Length);
// }
}
Useful links:
https://forum.unity.com/threads/setting-pixel-to-transparent-turns-out-black.172375/
https://docs.unity3d.com/ScriptReference/Sprite.Create.html
https://forum.unity.com/threads/is-it-possible-to-convert-a-texture2d-from-one-format-to-another-in-standalone-run-time.327141/
https://forum.unity.com/threads/texture-setpixels.431177/
But I don't know why, if the PNG has a white background to begin with, it doesn't work well:
With an SVG it's OK from the start, even without my code.
But in the Sprite Editor I could generate a custom physics shape:

Finding gameObject via direction of input

I have a number of gameObjects added to a List within the area as shown. What I need is a directional input to choose a target from the origin point; I have already got the origin point working.
My first attempt was to do this via raycast, but with that approach the directional input had to directly hit the target object with the ray. That is no problem if the input is done as in case #1.
However, what I really need is that an input direction like case #2 or #3 still gets a target. My second attempt was to do this with a SphereCast, but it still required a target within the sphere's proximity, and when multiple targets were hit by the SphereCast the selection still needed to resolve to a single, more accurately chosen target.
Since I have the transform.position of all the possible targets as well as the origin point, I wondered whether there would be a more elegant way of resolving this by comparing the Vector3s of these coordinates (origin and targets in the general direction).
Here's my latest approach:
//
// m_targetList is the list containing all target GameObjects; it lives in
// another script, m_collector, and collecting them is all m_collector does.
//
using System.Collections.Generic;
using UnityEngine;
public class TargetSwitcher : MonoBehaviour
{
private TargetCollector m_collector;
private Transform m_origin;
public bool m_targetChanged = false;
public GameObject m_target;
public LayerMask m_targetMask;
private Dictionary<Vector3, float> m_targetInfo = new Dictionary<Vector3, float>();
private Transform m_tracker;
private BoxCollider m_bound;
private void Awake()
{
m_collector = GetComponent<TargetCollector>();
m_origin = GameObject.Find("TargetOrigin").transform;
m_tracker = GameObject.Find("TargetTracker").transform;
m_bound = GetComponent<BoxCollider>();
}
public void UpdateBearing(GameObject origin)
{
m_origin = origin.transform;
foreach (GameObject target in m_collector.m_targetList)
{
Vector3 dir = (target.transform.position - origin.transform.position).normalized;
float dist = Vector3.Distance(origin.transform.position, target.transform.position);
m_targetInfo.Add(dir, dist);
}
}
public void SwitchTarget()
{
if (!m_targetChanged)
{
Vector2 dir = new Vector2(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical")).normalized;
// Find closest direction value from Dictionary to dir of here
// Compare distance from origin if multiple targets and choose the nearest
}
}
public void ReturnToIdle()
{
m_origin.position = m_target.transform.position;
m_targetChanged = false;
m_targetInfo.Clear();
}
public struct TargetInfo
{
public Vector3 bearing;
public float distance;
public TargetInfo(Vector3 bearing, float distance)
{
this.bearing = bearing;
this.distance = distance;
}
}
}
Generally, I'm trying to compare the normalized vector of directional input to the normalized vector from the origin to each target before SwitchTarget(). The input method here is Gamepad axis x and y as Horizontal and Vertical.
Reposting this question since the previously provided answer was very far from the question and it was marked as a duplicate (the given answer was about finding a gameObject by distance only; this question is about direction, with distance only as a secondary comparison when multiple items are found in that direction).
Edit
After some trials with the dot product I'm now fairly sure this is where I want to head. There is still a lot of inconsistency I need to sort out, though.
Here's my most recent attempt:
private void Update()
{
UpdateBearing(gameObject); // assuming this script sits on the origin object
Vector3 input = new Vector3(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical"), 0);
if (input != Vector3.zero)
{
SwitchTarget();
}
}
public void UpdateBearing(GameObject origin)
{
m_origin.position = origin.transform.position;
foreach (GameObject target in m_collector.m_targetList)
{
Vector3 dir = (target.transform.position - origin.transform.position).normalized;
if (!m_targetInfo.ContainsKey(target))
{
m_targetInfo.Add(target, dir);
}
}
}
public void SwitchTarget()
{
GameObject oldTarget = m_collector.m_target;
if (!m_targetChanged)
{
Vector3 dir = new Vector3(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical"), 0).normalized;
Debug.DrawRay(m_origin.position, dir * 100, Color.yellow, 0.5f);
foreach (KeyValuePair<GameObject, Vector3> possibleTarget in m_targetInfo)
{
float dot = Vector3.Dot(dir, possibleTarget.Value);
if (dot > 0.5f) // Compare DP difference of added dot and input dot
{
GameObject newTarget = possibleTarget.Key;
if (oldTarget != newTarget)
{
Debug.Log(possibleTarget.Value + " // " + dot);
m_target = newTarget;
m_collector.m_target = newTarget;
m_targetChanged = true;
}
}
}
}
}
With this, I'm kind of getting gameObject selection without raycasting or missing any targets. However, I'm sure I need a better comparison than if (dot > 0.5f). Also, my rough assumption is that if I don't update the value of the dictionary m_targetInfo for each key, I'd have more inconsistency if those targets ever move. Anyway, I'm still confused about how to properly utilize this to achieve my end goal.
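For illustration, one way to make the comparison in the snippet above more robust (a sketch reusing its variable names, not part of the original attempt) is to keep the candidate with the highest dot product instead of the first one over a fixed threshold:
GameObject bestTarget = null;
var bestDot = -1f;
foreach (var possibleTarget in m_targetInfo)
{
    var dot = Vector3.Dot(dir, possibleTarget.Value);
    // keep the target whose direction is most aligned with the input
    if (dot > bestDot)
    {
        bestDot = dot;
        bestTarget = possibleTarget.Key;
    }
}
if (bestTarget && bestTarget != oldTarget)
{
    m_target = bestTarget;
    m_collector.m_target = bestTarget;
    m_targetChanged = true;
}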
Since you have all the desired game objects in the area, you can loop over them and check the angle between your look direction and the direction to each object. If it is lower than some value (you can make it super low so it's precise, or a little bit higher to allow for some margin of error), put the object in a list; if there's more than one object in it, take the closest one.
The code for getting the closest object within the angle would look something like this:
GameObject CheckObjects()
{
    List<GameObject> inAngle = new List<GameObject>();
    for (int i = 0; i < YourObjectsList.Count; i++)
    {
        GameObject tested = YourObjectsList[i];
        // direction from the origin towards the tested object
        Vector3 dir = tested.transform.position - origin.transform.position;
        // I'm assuming here that you're rotating your origin with the
        // directional input; if not, then instead of origin.transform.forward
        // use your directional Vector3
        float angle = Vector3.Angle(origin.transform.forward, dir);
        if (angle <= desiredAngle)
        {
            inAngle.Add(tested);
        }
    }
    GameObject closest = null;
    for (int j = 0; j < inAngle.Count; j++)
    {
        GameObject tested = inAngle[j];
        if (!closest)
        {
            closest = tested;
        }
        else
        {
            Vector3 dir1 = tested.transform.position - origin.transform.position;
            Vector3 dir2 = closest.transform.position - origin.transform.position;
            // keep whichever candidate is closer to the origin
            if (dir2.sqrMagnitude > dir1.sqrMagnitude)
            {
                closest = tested;
            }
        }
    }
    return closest;
}