What do I want to achieve?
I'd like to achieve an effect in Unity3D where I superpose a few cameras on top of each other, with each camera drawing to a specific area of the screen. If possible, I'd like these areas to change dynamically.
I am using Unity (latest version) and URP.
How I see it technically:
For implementation and performance reasons, writing to the stencil buffer seems to be the way to go. That way, I can render only the part of the screen I want for each camera. It is also quite easy once the stencil is written, because the Forward Renderer settings in URP offer such capabilities out of the box.
What I can't figure out:
The problem is, I don't know how to efficiently write to the whole stencil buffer (each frame). The best way would be to use a compute shader (or maybe a simple script) that directly writes the values after some calculations. Is there a way to do that? If yes, how?
Another alternative may be to use a transparent quad in front of each camera and write to the stencil buffers that way. But 1) there seems to be an SV_StencilRef semantic for fragment shaders, but it isn't supported by Unity yet? 2) I would still lose some performance nevertheless.
Thanks for any help / ideas about how to tackle this problem.
Edit (clarification): I'd like to be able to render free shapes, not only rects, which prevents the use of the standard Viewport Rect.
After some searching, I found the Voronoi split screen to be quite similar (from a technical standpoint) to what I'd like to achieve (see here).
If I understand correctly, you only need to play with each camera's Viewport Rect (https://docs.unity3d.com/ScriptReference/Camera-rect.html) to determine which camera should render which part of the screen.
Response to comment: no, it's not stretched. Here is an example with four cameras:
Create a scene with four cameras, add this script to one of them, and add the cameras to the array on the script. I added the _movingObject just to see something moving, but it's not necessary.
using UnityEngine;

public class CameraHandler : MonoBehaviour
{
    [SerializeField] private Transform _movingObject;
    [SerializeField] private float _posMod = 10.0f;
    [SerializeField] private float _cameraPosMod = 0.1f;
    [SerializeField] private Camera[] _cameras;

    private void Update()
    {
        float t = Time.time;
        float x = Mathf.Sin(t);
        float y = Mathf.Cos(t);

        if (_movingObject) _movingObject.position = new(x * _posMod, 1.0f, y * _posMod);

        Vector2 center = new(0.5f + x * _cameraPosMod, 0.5f + y * _cameraPosMod);

        // bottom left camera
        _cameras[0].rect = new(0.0f, 0.0f, center.x, center.y);
        // bottom right camera
        _cameras[1].rect = new(center.x, 0.0f, 1.0f - center.x, center.y);
        // upper left camera
        _cameras[2].rect = new(0.0f, center.y, center.x, 1.0f - center.y);
        // upper right camera
        _cameras[3].rect = new(center.x, center.y, 1.0f - center.x, 1.0f - center.y);
    }
}
Not exactly an answer to your question about the stencil buffer, but I had a (hopefully) similar use case recently.
The main issue: in the URP camera stack,
if your camera is set to Base, it will overdraw the entire screen;
you cannot adjust the Viewport on any Overlay camera.
You can actually try to set the viewport via code -> the result: your camera renders only the correct part of the scene ... but it gets stretched to the entire screen ^^
What I did in the end was:
leave all content and cameras at the origin position,
apply appropriate masks to filter the content per camera,
make your camera Overlay (as usual),
and go through a custom Camera.projectionMatrix:
m_Camera.projectionMatrix = Matrix4x4.Translate(projectionOffset) * Matrix4x4.Perspective(m_Camera.fieldOfView, m_Camera.aspect, m_Camera.nearClipPlane, m_Camera.farClipPlane);
where the projectionOffset is an offset in viewport space (normalized 0 - 1) from the bottom left corner.
For example, in my case I wanted a minimap 400 x 400 pixels from the top-right corner, so I did:
var topRightOffsetPixels = new Vector2(400, 400);
var topRightOffsetViewport = Vector2.one - new Vector2(topRightOffsetPixels.x * 2 / Screen.width, topRightOffsetPixels.y * 2 / Screen.height);
m_Camera.projectionMatrix = Matrix4x4.Translate(topRightOffsetViewport) * Matrix4x4.Perspective(m_Camera.fieldOfView, m_Camera.aspect, m_Camera.nearClipPlane, m_Camera.farClipPlane);
See also Matrix4x4.Perspective
Related
Hi, I have a question that I'm hoping someone can help me work through. I've asked elsewhere to no avail, but it seems like a standard problem, so I'm not sure why I haven't been getting answers.
It's basically setting up a zoom function that mirrors Google Maps zoom: the camera zooms in/out onto where your mouse is. I know this probably gets asked a lot, but I think Unity's new Input System changed things up a bit since the 4-6 year old questions that I found in my own research.
In any case, I've set up a parent GameObject that holds all the 2D sprites in my scene, plus an orthographic camera. I can set the orthographic size through code to change the zoom, but it's moving the camera to the proper place that I'm having trouble with.
This was my 1st attempt:
public void Zoom(float direction, Vector2 mousePosition) {
    // zoom calcs
    float rate = 1 + direction * Time.deltaTime;
    float targetOrtho = Mathf.MoveTowards(mainCam.orthographicSize, mainCam.orthographicSize / rate, 0.1f);

    // move calcs
    mousePosition = mainCam.ScreenToWorldPoint(mousePosition);
    Vector2 deltaPosition = previousPosition - mousePosition;

    // move and zoom
    transform.position += new Vector3(deltaPosition.x, deltaPosition.y, 0);
    // zoomLevels is a generic struct that holds the max/min values.
    SetZoomLevel(Mathf.Clamp(targetOrtho, zoomLevels.min, zoomLevels.max));

    previousPosition = mousePosition;
}
This function gets called through my input controller, activated through Unity's Input System events. When the mouse wheel scrolls, the Zoom function is given a normalized value as direction (1 or -1) and the current mousePosition. When it's finished its calculation, the mousePosition is stored in previousPosition.
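Roughly, the wiring looks like this (a simplified sketch; the InputController and CameraZoom names and the zoomAction binding are illustrative, not my exact setup):

using UnityEngine;
using UnityEngine.InputSystem;

public class InputController : MonoBehaviour
{
    // illustrative names, not the actual project setup
    [SerializeField] private InputAction zoomAction; // bound to <Mouse>/scroll
    [SerializeField] private CameraZoom cameraZoom;  // hypothetical component exposing Zoom()

    private void OnEnable()
    {
        zoomAction.Enable();
        zoomAction.performed += OnZoom;
    }

    private void OnDisable()
    {
        zoomAction.performed -= OnZoom;
        zoomAction.Disable();
    }

    private void OnZoom(InputAction.CallbackContext ctx)
    {
        // normalize the scroll delta to 1 or -1 and pass the current mouse position
        float direction = Mathf.Sign(ctx.ReadValue<Vector2>().y);
        cameraZoom.Zoom(direction, Mouse.current.position.ReadValue());
    }
}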
The code actually works -- except it is extremely jittery. This, of course, happens because there is no Time.deltaTime applied to the camera movement, nor is this in LateUpdate, both of which help smooth movement. Except, in the former case, multiplying Time.deltaTime into new Vector3(deltaPosition.x, deltaPosition.y, 0) seems to cause the zoom to occur at the camera's centre rather than the mouse position. When I put the zoom into LateUpdate, it creates a cool but unwanted vibration effect when the camera moves.
So, after doing some thinking and reading, I thought it may be best to calculate the difference between the mouse position and the camera's center point, then multiply it by a scale factor, which is the camera's orthographic size * 2 (maybe...??). Hence my updated code here:
public void Zoom(float direction, Vector2 mousePosition)
{
    // zoom
    float rate = 1 + direction * Time.unscaledDeltaTime * zoomSpeed;
    float orthoTarget = Mathf.MoveTowards(mainCam.orthographicSize, mainCam.orthographicSize * rate, maxZoomDelta);
    SetZoomLevel(Mathf.Clamp(orthoTarget, zoomLevels.min, zoomLevels.max));

    // movement
    if (mainCam.orthographicSize < zoomLevels.max && mainCam.orthographicSize > zoomLevels.min)
    {
        mousePosition = mainCam.ScreenToWorldPoint(mousePosition);
        Vector2 offset = (mousePosition - new Vector2(transform.position.x, transform.position.y)) / (mainCam.orthographicSize * 2);
        // panPositions are the same generic struct holding min/max values
        offset.x = Mathf.Clamp(offset.x, panPositions.min.x, panPositions.max.x);
        offset.y = Mathf.Clamp(offset.y, panPositions.min.y, panPositions.max.y);
        transform.position += new Vector3(offset.x, offset.y, 0) * Time.deltaTime;
    }
}
This seems a little closer to what I'm trying to achieve, but the camera still zooms in near its center point and zooms out around some other point... I'm a bit lost as to what I'm missing here.
Is anyone able to help guide my thinking about what I need to do to create a smooth zoom in/out on the point where the mouse currently is? Much appreciated & thanks for reading through this.
OK, I figured it out, for anyone who ever comes across the same problem. It is a standard problem that is easily solved once you know the math.
Basically, it's a matter of scaling and translating the camera. You can do one or the other first; it does not matter, the outcome is the same. Imagine your screen looks like this:
The green box is your camera viewport, and the arrow is your cursor. When you zoom in, the orthographic size gets smaller and the viewport shrinks around its anchor point (usually P1(0,0)). This is the scaling aspect of the problem, and the following image explains it well:
So now we want to move the camera position to the new position:
So how do we do this? It's just a matter of getting the distance from the old camera position (P1(0,0)) to the new camera position (P2(x,y)). Basically, we only want this:
My solution to find the length of the arrow in the picture above was to subtract the length from the cursor position to the new camera position (newLength) from the length of the cursor position to the old camera position (oldLength).
But how do you find newLength? Well, since the length will be scaled according to the size of the camera viewport, newLength will be either oldLength / scaleFactor or oldLength * scaleFactor, depending on whether you want to zoom in or out, respectively. The scale factor can be whatever you want (zoom in/out by 2, 4, 1.4... whatever).
From there, it's just a matter of subtracting newLength from oldLength and adding that difference to the current camera position. The pseudocode is below.
(Note that I renamed 'oldLength' to 'length' and 'newLength' to 'scaledLength'.)
// make sure you're working in world space
mousePosition = camera.ScreenToWorldPoint(mousePosition);

length = mousePosition - currentCameraPosition;
scaledLength = length / scaleFactor; // to zoom in; otherwise it's length * scaleFactor
deltaLength = length - scaledLength;

// change position
cameraPosition = currentCameraPosition + deltaLength;

// do zoom
camera.orthographicSize /= scaleFactor; // to zoom in; otherwise orthographicSize *= scaleFactor
Works perfectly for me. Thanks to those who helped me in a Discord coding community!
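For anyone who wants the pseudocode above as actual Unity code, here's a minimal sketch (untested as written; it assumes an orthographic camera and a scaleFactor greater than 1):

// minimal sketch of the math above; assumes an orthographic camera and scaleFactor > 1
void ZoomAt(Camera cam, Vector2 screenMousePosition, float scaleFactor, bool zoomIn)
{
    float factor = zoomIn ? scaleFactor : 1f / scaleFactor;

    // make sure you're working in world space
    Vector3 mouseWorld = cam.ScreenToWorldPoint(screenMousePosition);
    Vector3 length = mouseWorld - cam.transform.position;
    length.z = 0f; // keep the camera at its current depth

    Vector3 scaledLength = length / factor;
    Vector3 deltaLength = length - scaledLength;

    // change position, then do zoom (the order doesn't matter)
    cam.transform.position += deltaLength;
    cam.orthographicSize /= factor;
}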
I have a UI Image in a canvas with Screen Space - Camera render mode. What I'd like to do is move my LineRenderer to the Image's vertical position by looping through all the LineRenderer positions and changing their y value. My problem is that I can't get a position for the Image that the LineRenderer can understand. I've tried using ViewportToWorldPoint and ScreenToWorldPoint, but it's not the same position.
Vector3 val = Camera.main.ViewportToWorldPoint(new Vector3(image.transform.position.x, image.transform.position.y, Camera.main.nearClipPlane));

for (int i = 0; i < newListOfPoints.Count; i++)
{
    line.SetPosition(i, new Vector3(newListOfPoints[i].x, val.y, newListOfPoints[i].z));
}
Screenshot of the result using Vector3 val = Camera.main.ScreenToWorldPoint(new Vector3(image.transform.localPosition.x, image.transform.localPosition.y, -10)):
The green LineRenderer is the result of changing the y position; it should be at the bottom of the square image.
Wow, this was annoying and complicated.
Here's the code I ended up with. The code from your question is the bottom half of the Update() function. The only thing I changed is what gets passed into the ScreenToWorldPoint() method. That value is calculated in the upper half of the Update() function.
The RectTransformToScreenSpace() function was adapted from this Unity Answers post [1] about getting the screen-space coordinates of a RectTransform (which is exactly what we want in order to convert from screen-space coordinates back into world space!). The only difference is that I was getting inverted Y values, so I changed Screen.height - transform.position.y to just transform.position.y, which did the trick perfectly.
After that, it was just a matter of grabbing that rectangle's lower-left corner, making it a Vector3 instead of a Vector2, and passing it back into ScreenToWorldPoint(). The only trick there was that, because of the perspective camera, I needed to know how far away the line was from the camera originally in order to maintain that same distance (otherwise the line moves up and down the screen faster than the image). For an orthographic camera, this value can be anything.
void Update()
{
    // the new bits:
    float dist = (Camera.main.transform.position - newListOfPoints[0]).magnitude;
    Rect r = RectTransformToScreenSpace((RectTransform)image.transform);
    Vector3 v3 = new Vector3(r.xMin, r.yMin, dist);

    // more or less original code:
    Vector3 val = Camera.main.ScreenToWorldPoint(v3);
    for (int i = 0; i < newListOfPoints.Count; i++)
    {
        line.SetPosition(i, new Vector3(newListOfPoints[i].x, val.y, newListOfPoints[i].z));
    }
}
// helper function:
public static Rect RectTransformToScreenSpace(RectTransform transform)
{
    Vector2 size = Vector2.Scale(transform.rect.size, transform.lossyScale);
    Rect rect = new Rect(transform.position.x, transform.position.y, size.x, size.y);
    rect.x -= transform.pivot.x * size.x;
    rect.y -= (1.0f - transform.pivot.y) * size.y;
    return rect;
}
[1] And finding that post from a generalized search on "how do I get the screen coordinates of a UI object" was not easy. A bunch of other posts came up with some code, but none of it did what I wanted (including converting screen-space coordinates back into world-space coordinates of the UI object, which was stupid easy and not reversible, thanks RectTransformUtility!)
I discovered something interesting about Unity Sprites and textures (Texture2D). I created a 50x50 .png and rendered it in Unity by attaching it to a GameObject and using a SpriteRenderer.
What I realized is that whenever I call a Unity-related property (sprite.texture.width, sprite.rect.width, sprite.textureRect.width, etc.), it always returns 50. However, the real on-screen size of the image turns out to be 24x24 or 12x12, depending on my screen resolution.
Of course, this is no big surprise, since the projection, etc. is applied before Unity renders things on the screen; however, the interesting part is that I couldn't find any method or easy way to get the size of the Sprite after the projection is applied.
I can still make my own projection to come up with the related size; however, I would like to know whether there is an easier way to get this information.
Thank you!
The way @Draco18s mentioned seems to be the only way to solve this problem.
So I created a prefab GameObject containing a RectTransform and a SpriteRenderer, and got the width and height as below:
GameObject twoSide = Instantiate(Resources.Load(mFilePath + "Locater")) as GameObject;
twoSide.GetComponent<RectTransform>().position = Camera.main.ScreenToWorldPoint(new Vector3(0, 0, 1));
// the RectTransform should have the same pivot location as the Sprite
twoSide.GetComponent<RectTransform>().pivot = new Vector2(0f, 0f);

float width = twoSide.GetComponent<RectTransform>().offsetMax.x - twoSide.GetComponent<RectTransform>().offsetMin.x;
float height = twoSide.GetComponent<RectTransform>().offsetMax.y - twoSide.GetComponent<RectTransform>().offsetMin.y;
In my Unity 2D project, I am trying to spawn sprites on top of each other across the entire height of the device's screen. To give an idea, think of boxes stacked on top of each other across the entire screen height. In my case, I'm spawning arrow sprites instead of boxes.
I already have the sprites spawning on top of each other successfully. My problem now is how to calculate how many sprites to spawn to make sure they spread across the whole screen height.
I currently have this snippet of code:
public void SpawnInitialArrows()
{
    // get the size of our sprite first
    Vector3 arrowSizeInWorld = dummyArrow.GetComponent<Renderer>().bounds.size;

    // get screen.height in world coords
    float screenHeightInWorld = Camera.main.ScreenToWorldPoint(new Vector3(0, Screen.height, 0)).y;

    // get the bottom edge of the screen in world coords
    Vector3 bottomEdgeInWorld = Camera.main.ScreenToWorldPoint(new Vector3(0, 0, 0));

    // calculate how many arrows to spawn based on screen.height/arrow.size.y
    int numberOfArrowsToSpawn = (int)screenHeightInWorld / (int)arrowSizeInWorld.y;

    // create a vector3 to store the position of the previous arrow
    Vector3 lastArrowPos = Vector3.zero;

    for (int i = 0; i < numberOfArrowsToSpawn; ++i)
    {
        GameObject newArrow = this.SpawnArrow();

        // if this is the first arrow in the list, spawn at the bottom of the screen
        if (LevelManager.current.arrowList.Count == 0)
        {
            // we only handle the y position because we're stacking them on top of each other!
            newArrow.transform.position = new Vector3(newArrow.transform.position.x,
                bottomEdgeInWorld.y + arrowSizeInWorld.y / 2,
                newArrow.transform.position.z);
        }
        else
        {
            // else, spawn on top of the previous arrow
            newArrow.transform.position = new Vector3(newArrow.transform.position.x,
                lastArrowPos.y + arrowSizeInWorld.y,
                newArrow.transform.position.z);
        }

        // save the position of this arrow so that we know where to spawn the next arrow!
        lastArrowPos = newArrow.transform.position;

        LevelManager.current.arrowList.Add(newArrow);
    }
}
The problem with my current code is that it doesn't spawn the correct number of sprites to cover the entire height of the device's screen; it only spawns arrow sprites up to approximately the middle of the screen. What I want is for it to spawn them up to the top edge of the screen.
Does anyone know where the calculation went wrong, and how to make the current code cleaner?
If the sprites are rendered by a camera in perspective mode and appear with varying sizes on screen (sprites farther away from the camera are smaller than sprites that are closer), then a new way to calculate the numberOfArrowsToSpawn value is needed.
You could try adding sprites with a while loop instead of a for loop: just continue creating sprites until the calculated world position for the next sprite would no longer be visible to the camera. Check whether a point will be visible to the camera by using the technique Jessy provides in this link:
http://forum.unity3d.com/threads/point-in-camera-view.72523/
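A rough sketch of that idea (illustrative only; SpawnArrow(), dummyArrow, and LevelManager come from the question's code, and a simple WorldToViewportPoint check stands in for the linked technique):

// keep spawning until the next arrow's position is above the visible viewport
Vector3 arrowSize = dummyArrow.GetComponent<Renderer>().bounds.size;
Vector3 nextPos = Camera.main.ScreenToWorldPoint(Vector3.zero); // bottom edge of the screen
nextPos.y += arrowSize.y / 2f; // center of the first arrow

while (Camera.main.WorldToViewportPoint(nextPos).y <= 1f) // viewport y in 0..1 means on screen
{
    GameObject arrow = SpawnArrow();
    arrow.transform.position = new Vector3(arrow.transform.position.x, nextPos.y, arrow.transform.position.z);
    LevelManager.current.arrowList.Add(arrow);

    nextPos.y += arrowSize.y; // step up by one arrow height
}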
I think your screenHeightInWorld is really a screenTopInWorld; a point can be anywhere in space.
You need the relative screen height in world coordinates.
With an orthographic projection, orthographicSize is half the vertical frustum size, so the full screen height in world units is:
float screenHeightInWorld = Camera.main.orthographicSize * 2.0f;
I did not read the rest, but it is probably fine; it's up to you how you implement this.
I'd simply create an arrow-spawning method, something like bool SpawnArrowAboveIfFits(), which can call itself iteratively on the new instances; see the sketch below.
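Something like this (a sketch under the same assumptions as the question's code; the parameter names are mine):

// sketch: spawn an arrow centered at yPos, then recurse upward until the
// next arrow would no longer fit below the top edge of the screen
bool SpawnArrowAboveIfFits(float yPos, float arrowHeight, float screenTopInWorld)
{
    if (yPos - arrowHeight / 2f > screenTopInWorld) return false; // the arrow wouldn't fit

    GameObject arrow = SpawnArrow();
    arrow.transform.position = new Vector3(arrow.transform.position.x, yPos, arrow.transform.position.z);
    LevelManager.current.arrowList.Add(arrow);

    SpawnArrowAboveIfFits(yPos + arrowHeight, arrowHeight, screenTopInWorld);
    return true; // this arrow was spawned
}

// first call, starting at the bottom edge of the screen:
// SpawnArrowAboveIfFits(bottomEdgeInWorld.y + arrowSizeInWorld.y / 2f, arrowSizeInWorld.y, screenTopInWorld);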
I would like a RectTransform (panel) in Unity 4.6 to follow a world object. I got this working, but the movement is not as smooth as I'd like: it seems a bit jagged, and it lags behind when I start moving the camera.
Vector2 followObjectScreenPos = Camera.main.WorldToScreenPoint(planet.transform.position);
rectTransform.anchoredPosition = new Vector2(followObjectScreenPos.x - Screen.width / 2, followObjectScreenPos.y - Screen.height / 2);
Tips and tricks are greatly appreciated. :-)
There are a bunch of options:
1) You can add a GUI canvas to the worldObject and render your panel with this canvas (just add it as a child), but that may not be exactly what you need.
2) To eliminate jagged movement, you should tween in one way or another. DOTween is my personal preference; something along the following lines would give you the required result:
Tweener tweener = transform.DOMove(Target.position, 1).SetSpeedBased();
tweener.OnUpdate(() => tweener.ChangeEndValue(Target.position, true));
3) If you don't want to include dependencies in your code, you can perform linear interpolation between the current and desired position (or, in your case, anchoredPosition) in the Update function.
I'd suggest using a tweener so as not to clutter your Update function, and tweeners generally have loads of potential uses in all kinds of games.
Below is a code sample for linear interpolation in case you don't want to use a tweener library:
float smoothFactor = 1.0f; // used to sharpen or dull the effect of the lerp
var newPosition = new Vector3(x, y, z);
var t = gameObject.transform;
t.position = Vector3.Lerp(t.position, newPosition, Time.deltaTime * smoothFactor);
Place it in the Update function and it will make the gameObject follow the given newPosition.