I have a very simple script that updates my orthographic camera for a given resolution so that the view scales to be pixel perfect.
Here is some relevant code:
OrthographicSetting get_override(int size)
{
    return Overrides.FirstOrDefault(x => x.OrthographicSize == size);
}

void update_ortho()
{
    m_last_size = Screen.height;

    // Reference orthographic size, in world units, for the design resolution.
    float ref_size = (OrthographicSize / PixelsPerUnit) * 0.5f;

    // Use the per-resolution override if one exists, otherwise the default PPU.
    OrthographicSetting setting = get_override(m_last_size);
    float ppu = setting != null ? setting.PixelsPerUnit : PixelsPerUnit;

    // Orthographic size for the current screen height, divided by the nearest
    // integer zoom factor so one sprite pixel maps to a whole number of screen pixels.
    float ortho_size = (m_last_size / ppu) * 0.5f;
    float multiplier = Mathf.Max(1, Mathf.Round(ortho_size / ref_size));
    ortho_size /= multiplier;

    GetComponent<Camera>().orthographicSize = ortho_size;
    Debug.Log(m_last_size + " " + ortho_size + " " + multiplier + " " + ppu);
}
[System.Serializable]
public class OrthographicSetting
{
    public int OrthographicSize;
    public float PixelsPerUnit;
}
With this, I can specify a set of overrides for every resolution.
My current setup uses 100 pixels per Unity unit. All of my sprites use point filtering with no compression, yet I still get strange results: 90% of the sprites render fine, but some seem to be rendering incorrectly.
Here's a screenshot to illustrate:
I may have solved the problem. I was using a sprite shader with pixel snap turned on; if I turn off pixel snap the problem more or less goes away. I still get the occasional problem with game objects that aren't at "nice" positions, though I don't know how to avoid that.
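One common way to avoid objects sitting at "non-nice" positions is to round their world position to the nearest 1/PixelsPerUnit. This is a minimal sketch of that idea, not part of the original script; the PixelsPerUnit value here is assumed to match the 100 PPU used above.

using UnityEngine;

// Hypothetical helper: snaps this object's position to the pixel grid every
// frame so it never ends up at a sub-pixel position.
public class SnapToPixelGrid : MonoBehaviour
{
    public float PixelsPerUnit = 100f; // should match the sprite import setting

    void LateUpdate()
    {
        Vector3 p = transform.position;
        p.x = Mathf.Round(p.x * PixelsPerUnit) / PixelsPerUnit;
        p.y = Mathf.Round(p.y * PixelsPerUnit) / PixelsPerUnit;
        transform.position = p;
    }
}

In practice you would usually apply this only to a child object holding the SpriteRenderer, so the snapping doesn't fight the object's actual movement.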
Related
In Unity I have a UI Panel which contains a player object (a UI Image).
I move the player object inside the panel with user input (keyboard or touch), but I can't keep the player object inside its parent panel.
Please check the image below; I want to keep the player inside the red panel.
Here is the code I tried:
public Camera MainCamera; //be sure to assign this in the inspector to your main camera

private Vector2 screenBounds;
private float objectWidth;
private float objectHeight;
private RectTransform pnlBackgroundTransform;

private void Start()
{
    pnlBackgroundTransform = GameObject.Find("PnlBackground").GetComponent<RectTransform>();
    screenBounds = MainCamera.ScreenToWorldPoint(new Vector3(pnlBackgroundTransform.rect.width, pnlBackgroundTransform.rect.height, MainCamera.transform.position.z));
    objectWidth = transform.GetComponent<SpriteRenderer>().bounds.extents.x; //extents = size of width / 2
    objectHeight = transform.GetComponent<SpriteRenderer>().bounds.extents.y; //extents = size of height / 2
}

void LateUpdate()
{
    Vector3 viewPos = transform.position;
    viewPos.x = Mathf.Clamp(viewPos.x, screenBounds.x * -1 + objectWidth, screenBounds.x - objectWidth);
    viewPos.y = Mathf.Clamp(viewPos.y, screenBounds.y * -1 + objectHeight, screenBounds.y - objectHeight);
    Debug.Log(screenBounds);
    Debug.Log(viewPos);
    transform.position = viewPos;
}
I'd say it's not very usual to implement the player as a UI element; instead you should implement it outside the UI/Canvas system.
The UI/Canvas system uses a set of placement and scaling rules to deal with responsive design. You have at least four values (excluding rotation) that place something on the screen: anchor, pivot, position and scale.
For example, if you want to create a square you can set its size either in absolute pixel values or in values relative to its parent. If you're using absolute values, the UI Scale Mode, defined on the Canvas object, will affect the visual result.
This means the UI/Canvas is for elements that should adapt to the screen, such as buttons, dialogs and labels, taking advantage of device parameters to improve the UX.
Outside the UI/Canvas system, things are directly based on linear algebra: you have a 3D vector space (a "World") where everything exists with an absolute size and position. Then, your Camera stretches and twists the whole world to match your current perspective. That means your object will always have the same world size, regardless of screen size.
Now, assuming you have a very specific reason to implement your game in the UI, there are a few ways you can do it. I'll assume you're using absolute values. Please note all the units used here are pixels, so the effect will differ between devices with different resolutions and is sensitive to the UI Scale Mode parameter. Also note that I've set both anchors, min and max, to (0,0), the bottom left corner (the default is the screen center, (0.5,0.5)), in order to avoid negative coordinates.
The following script is attached to the player's UI Image.
public class UIMovementController : MonoBehaviour
{
    public float speed = 5.0f;

    new private RectTransform transform;
    private Rect canvasRect;

    private void Start()
    {
        transform = GetComponent<RectTransform>();
        canvasRect = GetComponentInParent<Canvas>().pixelRect;
    }

    void Update()
    {
        // Keyboard Input (Arrows)
        Vector2 move = new Vector2(0, 0);
        if (Input.GetKey(KeyCode.UpArrow)) { move.y += speed; }
        if (Input.GetKey(KeyCode.DownArrow)) { move.y -= speed; }
        if (Input.GetKey(KeyCode.LeftArrow)) { move.x -= speed; }
        if (Input.GetKey(KeyCode.RightArrow)) { move.x += speed; }
        transform.anchoredPosition += move;

        // Position clamping
        Vector2 clamped = transform.anchoredPosition;
        clamped.x = Mathf.Clamp(clamped.x, transform.rect.width / 2, canvasRect.width - transform.rect.width / 2);
        clamped.y = Mathf.Clamp(clamped.y, transform.rect.height / 2, canvasRect.height - transform.rect.height / 2);
        transform.anchoredPosition = clamped;
    }
}
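The anchor setup described above can also be done from code rather than in the inspector. This is just a small sketch of mine showing that, using the standard RectTransform properties; setting the values in the inspector works equally well, and either way the clamping math above stays valid.

// Optional setup sketch (not part of the answer's script): anchor the player's
// RectTransform to the bottom left corner of its parent, as the clamping assumes.
void Awake()
{
    RectTransform rt = GetComponent<RectTransform>();
    rt.anchorMin = Vector2.zero;            // (0,0) = bottom left of the parent
    rt.anchorMax = Vector2.zero;
    rt.pivot = new Vector2(0.5f, 0.5f);     // keep the pivot at the image center
}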
I'm a newbie at Unity3D scripting and I'm trying to learn how it works internally for my simple projects and what the best practices are. Here I have a simple scene with a cube in it, and I'm trying to animate it: it moves along the x axis to a certain point, then reverses back to a negative value, and then loops.

The direction is controlled by a public boolean property of the class, directionnegative. By default it is false, which means the cube moves in the positive direction (sorry for the confusion); when it is true, the cube should move in the negative direction. However, I have noticed that when I change this boolean value in the Update method of the script, it reverts to its original value (I set it to true, but it goes back to the default, false). My object then gets stuck flipping between true and false and doesn't really move in either direction. However, if I declare the property as a static field of the class, it does not reset and works as intended (the loop runs fine). I do not know why it resets, and I'm completely confused.
using UnityEngine;

public class CubeAnim : MonoBehaviour
{
    public bool directionnegative;

    private float deltaTime; // smoothed frame time, only used for the FPS readout (declaration was missing from the snippet)

    // Update is called once per frame
    void Update()
    {
        float nv = 25.0f * Time.deltaTime;
        float posx = transform.position.x;

        if (posx > 20.0f)
        {
            if (!directionnegative)
            {
                directionnegative = true;
            }
        }
        else if (posx < -20.0f)
        {
            if (directionnegative)
            {
                directionnegative = false;
            }
        }

        if (directionnegative)
        {
            nv = -(nv);
        }
        transform.Translate(nv, 0, 0);

        deltaTime += (Time.deltaTime - deltaTime) * 0.1f;
        float fps = 1.0f / deltaTime;
        string log = "posx: " + transform.position.x + "\ndir: " + directionnegative + " transx: " + nv + "\nfps: " + Mathf.Ceil(fps).ToString();
        Debug.Log(log);
    }
}
And if I declare directionnegative as a static bool, the script works fine and the cube animates properly: it goes in one direction, then reverses and goes back the other way:
public static bool directionnegative;
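For what it's worth, a public instance field exists once per component instance (and starts at whatever value the Inspector serialized), while a static field exists once for the whole class. The sketch below is not from the original project; it only illustrates that difference, which is one possible reason an instance field can appear to "reset" while a static one does not, for example if the script ends up attached to more than one GameObject.

using UnityEngine;

// Illustrative sketch only: an instance field exists once per attached
// component, a static field exists once for the whole class.
public class FlagExample : MonoBehaviour
{
    public bool instanceFlag;        // separate copy on every GameObject this is attached to
    public static bool staticFlag;   // single copy shared by all instances

    void Start()
    {
        // With this script attached to two GameObjects, flipping instanceFlag
        // here only changes this instance's copy; flipping staticFlag changes
        // the value that every instance sees.
        instanceFlag = !instanceFlag;
        staticFlag = !staticFlag;
        Debug.Log(name + ": instanceFlag=" + instanceFlag + ", staticFlag=" + staticFlag);
    }
}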
I wanted to ask if there is a resolution scale option in Unity like the one in Unreal Engine. I have looked around the internet but didn't find anything.
If you want to scale objects that are not on the canvas based on the resolution, then offhand I can't think of anything built in. However, it would be fairly easy to implement something that does this.
Create a script and attach it to every object that should scale based on the current resolution:
public class ScaleObjectFromRes : MonoBehaviour
{
    private Vector2 targetResolution = new Vector2(1920, 1080); //can be changed here or elsewhere
    private bool matchWidth = true; //true = match width, false = match height; used to maintain aspect ratio

    // Start is called before the first frame update
    void Start()
    {
        float difference = CalculateDifference();
        ScaleObj(difference);
    }

    void ScaleObj(float diff)
    {
        gameObject.transform.localScale += (gameObject.transform.localScale * (diff / 100));
    }

    private float CalculateDifference()
    {
        Vector2 actualResolution = new Vector2(Screen.width, Screen.height);
        Vector2 change = actualResolution - targetResolution;
        Vector2 percentChange = (change / targetResolution) * 100;

        //match width/height
        if (matchWidth)
        {
            return percentChange.x;
        }
        else
        {
            return percentChange.y;
        }
    }
}
This scales the object based on the percent difference between the target resolution and the actual resolution. We can choose to match that difference against either the width or the height so the object keeps a constant ratio. This assumes the object's scale vector's magnitude is 1. Hope this helps!
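As a quick worked example (the numbers are mine, just to illustrate the formula): with the default 1920x1080 target and an actual resolution of 2560x1440, change.x = 2560 - 1920 = 640, so percentChange.x = (640 / 1920) * 100 ≈ 33.3. ScaleObj then adds localScale * (diff / 100), i.e. each axis is multiplied by roughly 1.33. At a smaller resolution such as 1280x720, percentChange.x ≈ -33.3 and the object shrinks by the same proportion.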
I've run into a small issue developing a picture taker for a VR project. I need to take a screenshot of a specific zone, which is a rectangle with variable width and height. To do that, I have one transform anchored to the upper left corner of the bounding box that represents where the picture is going to be taken, and one anchored to the lower right corner.
Here's what it should look like. I've added little red circles to show the transforms' positions.
Here's what a screencap using the left eye looks like. It's the same result if I use "both eyes" as a target in the Camera settings.
Here's what a screencap using the right eye looks like. So not only is it too far left or right, it's also a tad too high.
Here's the code that creates the Rect, and here's the code that reads the pixels.
When the Main Camera targets the left eye, there's an offset to the left of almost half the Rect's width; when it targets the right eye, the same offset appears to the right; when it targets both, there's a smaller offset to the left. All of these also have a slight vertical offset upwards.
Any help is appreciated. I'll keep this thread updated if I find anything!
public void SubmitPicture()
{
    Vector2 upperLeftPosition = mainCamera.WorldToScreenPoint(upperLeftTransform.position);
    Vector2 lowerRightPosition = mainCamera.WorldToScreenPoint(lowerRightTransform.position);

    pictureBoxRect.x = upperLeftPosition.x;
    pictureBoxRect.y = mainCamera.scaledPixelHeight - upperLeftPosition.y;
    pictureBoxRect.width = lowerRightPosition.x - upperLeftPosition.x;
    pictureBoxRect.height = lowerRightPosition.y - upperLeftPosition.y;

    pictureSnapper.OnInput(AbsoluteRect(pictureBoxRect));
}

public void OnInput(Rect pictureBox)
{
    if ((int)pictureBox.width > 0 && (int)pictureBox.height > 0)
    {
        videoPlayer.Stop();

        Texture2D videoTexture = new Texture2D((int)pictureBox.width, (int)pictureBox.height);
        videoTexture.ReadPixels(pictureBox, 0, 0);
        videoTexture.Apply();
        byte[] imageData = videoTexture.GetRawTextureData();

        if (debug)
        {
            byte[] imagePng = videoTexture.EncodeToPNG();
            File.WriteAllBytes(Application.dataPath + "/" + savename + ".png", imagePng);
        }
    }
}

private Rect AbsoluteRect(Rect rect)
{
    if (rect.width < 0)
    {
        rect.x -= rect.width;
        rect.width = Mathf.Abs(rect.width);
    }
    if (rect.height < 0)
    {
        rect.y += rect.height / 2;
        rect.height = Mathf.Abs(rect.height);
    }
    return rect;
}
Updated to add the picture references.
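Not a definitive fix, but one thing worth double-checking while debugging: as far as I know, Texture2D.ReadPixels reads from the current render target using coordinates with the origin in the bottom left corner, whereas the code above flips upperLeftPosition.y into a top-left origin. A minimal sanity-check sketch, under the assumption of a bottom-left origin and a single non-stereo view, could build the rect like this:

// Sanity-check sketch (assumption: ReadPixels expects a bottom-left origin,
// and the camera renders a single, non-stereo view).
Vector2 upperLeft = mainCamera.WorldToScreenPoint(upperLeftTransform.position);
Vector2 lowerRight = mainCamera.WorldToScreenPoint(lowerRightTransform.position);

Rect readRect = new Rect(
    upperLeft.x,                    // left edge in screen pixels
    lowerRight.y,                   // bottom edge: screen space y already grows upwards
    lowerRight.x - upperLeft.x,     // width
    upperLeft.y - lowerRight.y);    // height

If the offsets only appear in VR, the stereo eye textures probably also need to be taken into account, since each eye renders to its own texture rather than to the screen rect that WorldToScreenPoint refers to.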
I'm using the latest GoogleVR SDK (1.10). I tested some example scenes from Unity, like Castle Defence, and I noticed the view began to drift when I put the phone on the table. Is there a way to prevent this drift programmatically?
I've seen some videos about recalibrating the gyroscope on Samsung phones, but I'd like some code to prevent this.
It's still an unsolved issue, as explained in this post:
GvrViewerMain rotates the camera yourself. Unity3D + Google VR
But depending on your needs, you may find the following workaround useful.
The idea is to find the delta rotation and ignore it if it's too small.
using UnityEngine;

public class GyroCorrector
{
    public enum Correction
    {
        NONE,
        BEST
    }

    private Correction m_correction;

    // Not declared in the original snippet: the starting rotation, stored on
    // the first call to Get() so that Reset() can return to it.
    private bool m_bInit;
    private Vector3 m_v3Init;

    public GyroCorrector(Correction a_correction)
    {
        m_correction = a_correction;
    }

    private void CorrectValueByThreshold(ref Vector3 a_vDeltaRotation, float a_fXThreshold = 1e-1f, float a_fYThreshold = 1e-2f, float a_fZThreshold = 0.0f)
    {
        // Debug.Log(a_quatDelta.x + " " + a_quatDelta.y + " " + a_quatDelta.z );
        a_vDeltaRotation.x = Mathf.Abs(a_vDeltaRotation.x) < a_fXThreshold ? 0.0f : a_vDeltaRotation.x + a_fXThreshold;
        a_vDeltaRotation.y = Mathf.Abs(a_vDeltaRotation.y) < a_fYThreshold ? 0.0f : a_vDeltaRotation.y + a_fYThreshold;
        a_vDeltaRotation.z = Mathf.Abs(a_vDeltaRotation.z) < a_fZThreshold ? 0.0f : 0.0f; //We just ignore the z rotation
    }

    public Vector3 Reset()
    {
        return m_v3Init;
    }

    public Vector3 Get(Vector3 a_v3Init)
    {
        if (!m_bInit)
        {
            m_bInit = true;
            m_v3Init = a_v3Init;
        }

        Vector3 v = Input.gyro.rotationRateUnbiased;
        if (m_correction == Correction.NONE)
            return a_v3Init + v;

        CorrectValueByThreshold(ref v);
        return a_v3Init - v;
    }
}
... And then use something like this in the "UpdateHead" method from "GvrHead":
GvrViewer.Instance.UpdateState();
if (trackRotation)
{
    var rot = Input.gyro.attitude; //GvrViewer.Instance.HeadPose.Orientation;
    if (Input.GetMouseButtonDown(0))
    {
        transform.eulerAngles = m_gyroCorrector.Reset();
    }
    else
    {
        transform.eulerAngles = m_gyroCorrector.Get(transform.eulerAngles); //where m_gyroCorrector is an instance of the previous class
    }
}
You may run into some problems, mainly but not exclusively:
There will be a latency problem when moving the head, since the rotation has to exceed a threshold before the movement is applied.
You are dealing with relative rotations that come with imprecision, so you are not guaranteed to end up at the same position when doing the opposite movement.
You are using the Euler representation instead of quaternions, and it seems to be less accurate.
You may also be interested in these links on the subject:
http://scholarworks.uvm.edu/cgi/viewcontent.cgi?article=1449&context=graddis
Gyroscope drift on mobile phones
and this piece of code:
https://github.com/asus4/UnityIMU
Hope it can help,