How to calculate number of sprites to spawn across the device's screen height? - unity3d

In my Unity2D project, I am trying to spawn my sprites on top of each other across the entire height of the device's screen. To give an idea, think of boxes stacked on top of each other, spanning the whole screen height. In my case, I'm spawning arrow sprites instead of boxes.
I already have the sprites spawning on top of each other successfully. My problem now is calculating how many sprites to spawn so that they cover the screen's full height.
I currently have this snippet of code:
public void SpawnInitialArrows()
{
    // get the size of our sprite first
    Vector3 arrowSizeInWorld = dummyArrow.GetComponent<Renderer>().bounds.size;
    // get screen.height in world coords
    float screenHeightInWorld = Camera.main.ScreenToWorldPoint(new Vector3(0, Screen.height, 0)).y;
    // get the bottom edge of the screen in world coords
    Vector3 bottomEdgeInWorld = Camera.main.ScreenToWorldPoint(new Vector3(0, 0, 0));
    // calculate how many arrows to spawn based on screen.height/arrow.size.y
    int numberOfArrowsToSpawn = (int)screenHeightInWorld / (int)arrowSizeInWorld.y;
    // create a vector3 to store the position of the previous arrow
    Vector3 lastArrowPos = Vector3.zero;

    for (int i = 0; i < numberOfArrowsToSpawn; ++i)
    {
        GameObject newArrow = this.SpawnArrow();
        // if this is the first arrow in the list, spawn at the bottom of the screen
        if (LevelManager.current.arrowList.Count == 0)
        {
            // we only handle the y position because we're stacking them on top of each other!
            newArrow.transform.position = new Vector3(newArrow.transform.position.x,
                                                      bottomEdgeInWorld.y + arrowSizeInWorld.y / 2,
                                                      newArrow.transform.position.z);
        }
        else
        {
            // else, spawn on top of the previous arrow
            newArrow.transform.position = new Vector3(newArrow.transform.position.x,
                                                      lastArrowPos.y + arrowSizeInWorld.y,
                                                      newArrow.transform.position.z);
        }
        // save the position of this arrow so that we know where to spawn the next arrow!
        lastArrowPos = new Vector3(newArrow.transform.position.x,
                                   newArrow.transform.position.y,
                                   newArrow.transform.position.z);
        LevelManager.current.arrowList.Add(newArrow);
    }
}
The problem with my current code is that it doesn't spawn the correct number of sprites to cover the entire height of the device's screen. It only spawns my arrow sprites up to approximately the middle of the screen. What I want is for them to reach all the way to the top edge of the screen.
Does anyone know where the calculation went wrong, and how to make the current code cleaner?

If the sprites are rendered by a camera in perspective mode and they appear at varying sizes on screen (sprites farther away from the camera are smaller than sprites closer to it), then a different way to calculate the numberOfArrowsToSpawn value is needed.
You could add sprites with a while loop instead of a for loop: just continue creating sprites until the calculated world position for the next sprite is no longer visible to the camera (a rough sketch of such a loop follows the link below). You can check whether a point will be visible to the camera using the technique Jessy provides in this link:
http://forum.unity3d.com/threads/point-in-camera-view.72523/
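Roughly, that loop could look like the sketch below. It is only a sketch: it assumes the same dummyArrow, SpawnArrow() and LevelManager.current.arrowList used in the question, and uses WorldToViewportPoint for the visibility check rather than the exact technique from the linked thread.
public void SpawnArrowsUntilOffscreen()
{
    // dummyArrow, SpawnArrow() and LevelManager come from the question above.
    float arrowHeight = dummyArrow.GetComponent<Renderer>().bounds.size.y;
    Vector3 bottomEdgeInWorld = Camera.main.ScreenToWorldPoint(new Vector3(0f, 0f, 0f));
    float nextY = bottomEdgeInWorld.y + arrowHeight / 2f;

    // Keep spawning while the next arrow's centre is still inside the viewport (viewport y runs 0..1).
    while (Camera.main.WorldToViewportPoint(new Vector3(0f, nextY, 0f)).y <= 1f)
    {
        GameObject newArrow = this.SpawnArrow();
        newArrow.transform.position = new Vector3(newArrow.transform.position.x,
                                                  nextY,
                                                  newArrow.transform.position.z);
        LevelManager.current.arrowList.Add(newArrow);
        nextY += arrowHeight;
    }
}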

I think your screenHeightInWorld is really a screenTopInWorld; that point can be anywhere in space.
What you need is the screen height in world coordinates, i.e. the distance from the bottom edge to the top edge.
With an orthographic projection, Camera.orthographicSize is half of the vertical size of the camera frustum, so the full screen height in world units is:
float screenHeightInWorld = Camera.main.orthographicSize * 2.0f;
I did not read the rest, but it is probably fine; it is up to you how you implement this.
I'd simply create an arrow method, something like bool SpawnArrowAboveIfFits(), which can call itself on each new instance (sketched below).
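As a rough sketch of that idea, assuming the same SpawnArrow(), arrow size and LevelManager.current.arrowList from the question: the method spawns an arrow at the given y and then tries to place another one above it.
bool SpawnArrowAboveIfFits(float y, float arrowHeight)
{
    // Top edge of the screen in world units (assumes an orthographic camera).
    float screenTopInWorld = Camera.main.ScreenToWorldPoint(new Vector3(0f, Screen.height, 0f)).y;
    if (y + arrowHeight / 2f > screenTopInWorld)
        return false; // this arrow would poke out above the screen, so stop

    GameObject newArrow = this.SpawnArrow();
    newArrow.transform.position = new Vector3(newArrow.transform.position.x, y,
                                              newArrow.transform.position.z);
    LevelManager.current.arrowList.Add(newArrow);

    // Try to place the next arrow directly above this one.
    SpawnArrowAboveIfFits(y + arrowHeight, arrowHeight);
    return true;
}
Called once with the bottom-most slot, e.g. SpawnArrowAboveIfFits(bottomEdgeInWorld.y + arrowSizeInWorld.y / 2f, arrowSizeInWorld.y), it fills the column until the next arrow would no longer fit.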

Related

Moving camera to proper position in Zoom function in Unity

Hi, I have a question that I'm hoping someone can help me work through. I've asked elsewhere to no avail, but it seems like a standard problem, so I'm not sure why I haven't been getting answers.
It's basically setting up a zoom function that mirrors Google Maps zoom, i.e. the camera zooms in/out onto where your mouse is. I know this probably gets asked a lot, but I think Unity's new Input System changed things up a bit since the 4-6 year old questions that I've found in my own research.
In any case, I've set up a parent GameObject that holds all the 2D sprites in my scene, and an orthographic camera. I can set the orthographic size through code to change the zoom, but it's moving the camera to the proper place that I am having trouble with.
This was my 1st attempt:
public void Zoom(float direction, Vector2 mousePosition) {
    // zoom calcs
    float rate = 1 + direction * Time.deltaTime;
    float targetOrtho = Mathf.MoveTowards(mainCam.orthographicSize, mainCam.orthographicSize / rate, 0.1f);
    // move calcs
    mousePosition = mainCam.ScreenToWorldPoint(mousePosition);
    Vector2 deltaPosition = previousPosition - mousePosition;
    // move and zoom
    transform.position += new Vector3(deltaPosition.x, deltaPosition.y, 0);
    // zoomLevels are a generic struct that holds the max/min values.
    SetZoomLevel(Mathf.Clamp(targetOrtho, zoomLevels.min, zoomLevels.max));
    previousPosition = mousePosition;
}
This function gets called through my input controller, activated through Unity's Input System events. When the mouse wheel scrolls, the Zoom function is given a normalized value as direction (1 or -1) and the current mousePosition. When it's finished its calculation, the mousePosition is stored in previousPosition.
The code actually works -- except it is extremely jittery. This, of course, happens because there is no Time.deltaTime applied to the camera movement, nor is this in LateUpdate, both of which help to smooth the movements. Except, in the former case, multiplying new Vector3(deltaPosition.x, deltaPosition.y, 0) by Time.deltaTime seems to make the zoom occur at the camera's centre rather than the mouse position. When I put the zoom into LateUpdate, it creates a cool but unwanted vibration effect when the camera moves.
So, after doing some thinking and reading, I thought it may be best to calculate the difference between the mouse position and the camera's centre point, then multiply it by a scale factor, which is the camera's orthographic size * 2 (maybe...??). Hence my updated code here:
public void Zoom(float direction, Vector2 mousePosition)
{
    // zoom
    float rate = 1 + direction * Time.unscaledDeltaTime * zoomSpeed;
    float orthoTarget = Mathf.MoveTowards(mainCam.orthographicSize, mainCam.orthographicSize * rate, maxZoomDelta);
    SetZoomLevel(Mathf.Clamp(orthoTarget, zoomLevels.min, zoomLevels.max));

    // movement
    if (mainCam.orthographicSize < zoomLevels.max && mainCam.orthographicSize > zoomLevels.min)
    {
        mousePosition = mainCam.ScreenToWorldPoint(mousePosition);
        Vector2 offset = (mousePosition - new Vector2(transform.position.x, transform.position.y)) / (mainCam.orthographicSize * 2);
        // panPositions are the same generic struct holding min/max values
        offset.x = Mathf.Clamp(offset.x, panPositions.min.x, panPositions.max.x);
        offset.y = Mathf.Clamp(offset.y, panPositions.min.y, panPositions.max.y);
        transform.position += new Vector3(offset.x, offset.y, 0) * Time.deltaTime;
    }
}
This seems a little closer to what I'm trying to achieve, but the camera still zooms in near its center point and zooms out on some other point... I'm a bit lost as to what I am missing here.
Is anyone able to help guide my thinking about what I need to do to create a smooth zoom in/out on the point where the mouse currently is? Much appreciated & thanks for reading through this.
Ok, I figured it out, for anyone who ever comes across the same problem. It is a standard problem that is easily solved once you know the math.
Basically, it's a matter of scaling and translating the camera. You can do one or the other first - it does not matter; the outcome is the same. Imagine your screen looks like this:
The green box is your camera viewport, the arrow is your cursor. When you zoom in, the orthographic size gets smaller and the view shrinks around its anchor point (usually P1(0,0)). This is the scaling aspect of the problem, and the following image explains it well:
So, now we want to move the camera position to the new position:
So how do we do this? It's just a matter of getting the distance from the old camera position (P1(0,0)) to the new camera position (P2(x,y)). Basically, we only want this:
My solution to find the length of the arrow in the picture above was to subtract the length from the cursor to the new camera position (newLength) from the length from the cursor to the old camera position (oldLength).
But how do you find newLength? Well, since we know the length will be scaled according to the size of the camera viewport, newLength will be either oldLength / scaleFactor or oldLength * scaleFactor, depending on whether you want to zoom in or out, respectively. The scale factor can be whatever you want (zoom in/out by 2, 4, 1.4... whatever).
From there, it's just a matter of subtracting newLength from oldLength and adding that difference to the current camera position. The pseudocode is below:
(Note that 'oldLength' is called 'length' and 'newLength' is called 'scaledLength' in the code below.)
// make sure you're working in world space
mousePosition = camera.ScreenToWorldPoint(mousePosition);
length = mousePosition - currentCameraPosition;
scaledLength = length / scaleFactor; // to zoom in, otherwise it's length * scaleFactor
deltaLength = length - scaledLength;
// change position
cameraPosition = currentCameraPosition + deltaLength;
// do zoom
camera.orthographicSize /= scaleFactor; // to zoom in, otherwise orthographicSize *= scaleFactor
Works perfectly for me. Thanks to those who helped me in a discord coding community!
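For reference, here is the same pseudocode as a single Unity method. It is only a sketch: mainCam is assumed to be the orthographic camera from the question, and the clamping of the zoom level is left out.
void ZoomAt(Vector2 mouseScreenPosition, bool zoomIn, float scaleFactor = 1.2f)
{
    float factor = zoomIn ? scaleFactor : 1f / scaleFactor;

    // Cursor position in world space before the zoom.
    Vector3 mouseWorld = mainCam.ScreenToWorldPoint(mouseScreenPosition);

    // Same quantities as in the pseudocode above.
    Vector3 length = mouseWorld - mainCam.transform.position; // oldLength
    Vector3 scaledLength = length / factor;                   // newLength
    Vector3 deltaLength = length - scaledLength;

    // Move toward (or away from) the cursor, then apply the zoom.
    mainCam.transform.position += new Vector3(deltaLength.x, deltaLength.y, 0f);
    mainCam.orthographicSize /= factor;
}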

Positioning UI or Camera relatively to each other (and screen border)

I'm struggling with this sort of screen disposition.
I want to position my camera so that the world is positioned like in the image, with the origin at the bottom left. It's easy to set the orthographicSize of the camera, as I know how many units I want vertically. It is also easy to calculate the Y position of the camera, as I just want it to be centered vertically. But I cannot find how to compute the X position of the camera to put the origin of the world in this position, no matter what the aspect ratio of the screen is.
It brings up two questions:
How can I calculate the X position of the camera so that the origin of the world is always at the same distance from the screen's left and bottom borders?
Instead of positioning the camera relative to the UI, should I use RenderMode WorldSpace for the UI canvas? And if so, how could I manage responsiveness?
I don't understand the second question, but regarding positioning the camera on the X axis so that the lower left corner is always at world 0, you could do the following:
var lowerLeftScreen = new Vector3(0, 0, 10);
var pos = transform.position;
var lowerLeftScreenPoint = Camera.main.ScreenToWorldPoint(lowerLeftScreen).x;

if (lowerLeftScreenPoint > 0)
{
    pos.x -= lowerLeftScreenPoint;
}
else
{
    pos.x += Mathf.Abs(lowerLeftScreenPoint);
}

transform.position = pos;
Debug.Log(Camera.main.ScreenToWorldPoint(lowerLeftScreen));
Not the nicest code, but it gets the job done.
Also, the Z component in the vector does not really matter for our orthographic camera.
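As an alternative sketch (not part of the answer above): an orthographic camera's horizontal half-extent is orthographicSize * aspect, so you can also compute the X position directly instead of correcting it afterwards. desiredLeftMargin is an assumed value for the world-space distance you want between the left border and the origin; with 0 the origin lands exactly on the left edge.
float desiredLeftMargin = 0f; // assumed margin between the left screen border and the world origin
float halfWidth = Camera.main.orthographicSize * Camera.main.aspect;

var camPos = Camera.main.transform.position;
camPos.x = halfWidth - desiredLeftMargin; // the left edge sits at camPos.x - halfWidth
Camera.main.transform.position = camPos;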

Getting positions of a line renderer on moving and rotating a line

I have a line with a LineRenderer attached to it. The user can move the line and rotate it. How do I go about getting the new positions of the line renderer after it has been moved or rotated? The coordinates of the vertices of the line renderer do not change; only the position and rotation of the line object as a whole change.
The positions in the bottom part of the image do not change when moving or rotating it. These positions are returned by the GetPositions() method, which is not useful in my case.
The LineRenderer in Unity takes a list of points (stored as Vector3s) and draws a line through them. It does this in one of two ways.
Local Space (the default): all points are positioned relative to the transform, so if your GameObject moves or rotates, the line also moves and rotates.
World Space (you would need to check the Use World Space checkbox): the line is rendered at a fixed position in the world that exactly matches the positions in the list. If the GameObject moves or rotates, the line is unchanged.
So what you really want to know is
"How do I get the world space position of a local space point in my line?"
This common use case is addressed by methods on a GameObject's Transform, such as:
Transform.TransformPoint
It takes a local space point (which is how the data is stored in the line renderer by default) and transforms it to world space.
An Example:
using UnityEngine;
using System.Collections;

public class LineRendererToWorldSpace : MonoBehaviour
{
    private LineRenderer lr;

    void Start()
    {
        lr = GetComponent<LineRenderer>();
        // Set some positions in the line renderer which are interpreted as local space
        // These are what you would see in the inspector in Unity's UI
        Vector3[] positions = new Vector3[3];
        positions[0] = new Vector3(-2.0f, -2.0f, 0.0f);
        positions[1] = new Vector3(0.0f, 2.0f, 0.0f);
        positions[2] = new Vector3(2.0f, -2.0f, 0.0f);
        lr.positionCount = positions.Length;
        lr.SetPositions(positions);
    }

    Vector3[] GetLinePointsInWorldSpace()
    {
        // Allocate the array first; GetPositions fills an existing array
        Vector3[] positions = new Vector3[lr.positionCount];
        // Get the positions which are shown in the inspector
        var numberOfPositions = lr.GetPositions(positions);
        // Iterate through all points, and transform them to world space
        for (var i = 0; i < numberOfPositions; i += 1)
        {
            positions[i] = transform.TransformPoint(positions[i]);
        }
        // The points returned are in world space
        return positions;
    }
}
This code is just for demonstration purposes, as I am not exactly sure of the use case.
Also, my links are to 2018.2, which is a very recent version of Unity; however, the logic and methods used should be quite similar going back.

Unity: Instantiate GameObject at top of screen plus its own height

I have this code that instantiates GameObjects at the top of the screen and then they fall down.
float RandX = GetRandomXPos();
float RandY = screenSize.y;
Vector3 ballPos = new Vector3(RandX,RandY,0);
GameObject clone = Instantiate(BallPrefab, ballPos, transform.rotation) as GameObject;
This works fine, but it spawns them right at the top of the screen so they just blink into existence. I want to spawn them at the top of the screen plus the height of the prefab, so that each one appears out of view and then falls down into view.
What is the best way to get the height that I need to offset by?
To get the size of the prefab, you need the information from the Renderer that the prefab has.
To do this you have to get the component in its hierarchy; you can do that by using GetComponentInChildren<Renderer>().
Once you have the Renderer, you can access the bounds size with renderer.bounds.size.
This is a Vector3 which holds the dimensions of the object, height being the Y component.
You may have more than one Renderer in a prefab, in which case you will need to get them all with GetComponentsInChildren<Renderer>() and combine all their bounds using Bounds.Encapsulate (see the sketch after the snippet below).
var renderer = target.GetComponentInChildren<Renderer>();
var height = renderer.bounds.size.y;
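For the multi-renderer case, here is a rough sketch (not from the answer above) of combining the bounds; target is assumed to be an instance already placed in the scene:
float GetTotalHeight(GameObject target)
{
    Renderer[] renderers = target.GetComponentsInChildren<Renderer>();
    if (renderers.Length == 0)
        return 0f;

    // Start from the first renderer's bounds and grow them to contain the rest.
    Bounds combined = renderers[0].bounds;
    for (int i = 1; i < renderers.Length; i++)
        combined.Encapsulate(renderers[i].bounds);

    return combined.size.y;
}
With that height, the spawn position from the question becomes something like new Vector3(RandX, screenSize.y + height / 2f, 0), assuming screenSize.y is the top edge in world units and the prefab's pivot is at its centre.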

Unity - get position of UI Slider Handle

I am working on a Unity 4.7 project and need to create shooting at a target. I simulated the gunpoint using a horizontal and a vertical slider moving over time. When I click the button, I need to memorize the x and y coordinates of the handles and instantiate a bullet hole at that point, but I don't know how to get the coordinates of the sliders' handles. It is possible to get their values, but those don't seem to correspond to coordinates. If the horizontal slider changes its value by 1, would its handle change its x position by 1?
Use this then:
public static Vector3 GetScreenPositionFromWorldPosition(Vector3 targetPosition)
{
    Vector3 screenPos = Camera.main.WorldToScreenPoint(targetPosition);
    return screenPos;
}
Keep references to the handles of the horizontal and vertical sliders, and use them like:
Vector3 pos = GetScreenPositionFromWorldPosition(horizontalHandle.transform.position);
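To then place the bullet hole where the "crosshair" formed by the two handles is, something along these lines could work. It is only a sketch: horizontalHandle, verticalHandle, bulletHolePrefab and the z distance of 10 are all assumptions, and it presumes the handles have world-space positions as in the snippet above.
Vector3 hScreen = Camera.main.WorldToScreenPoint(horizontalHandle.transform.position);
Vector3 vScreen = Camera.main.WorldToScreenPoint(verticalHandle.transform.position);

// x comes from the horizontal slider's handle, y from the vertical one.
Vector3 hitScreen = new Vector3(hScreen.x, vScreen.y, 10f); // 10 = assumed distance from the camera to the target
Vector3 hitWorld = Camera.main.ScreenToWorldPoint(hitScreen);

Instantiate(bulletHolePrefab, hitWorld, Quaternion.identity);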