Canvas Screen Scaling with script Instantiated objects? - unity3d

Can anyone explain why my instantiated shop UI looks wonderful in the editor (left) but squished when running on the device (right), like so (please excuse the dodgy cam picture!):
The UI bars are prefabs, each containing its own Canvas and various UI elements. Each canvas is manually set to scale with screen size, with the reference resolution set to X: 320, Y: 480, and it's set to use the width as the basis for scaling. This has worked brilliantly in my previous games, but it doesn't hold in this one. I've even manually set each instantiated object's scaling properties in the script, but still nothing.
The only thing I'm doing differently is building the UI at runtime by instantiating the UI prefabs and filling them when the shop is created.
Has anyone seen this before? Know how to fix it?

When you instantiate a prefab in code and then add it to some parent, the Unity UI system tries to compensate for the difference between the size of the prefab when it was created and the size it should have after the Canvas Scaler update.
When I do something like this in my code, I use this extension:
public static void SetParentAndReset(this RectTransform rect, Transform parent) {
    rect.SetParent(parent);
    rect.localScale = Vector3.one;       // undo the scale compensation applied on reparenting
    rect.localPosition = Vector3.zero;   // likewise reset the compensated position
}
...
var newObj = Instantiate(prefab);
var rect = newObj.GetComponent<RectTransform>();
rect.SetParentAndReset(parent);
rect.localPosition = new Vector3(1f, 1f, 1f); // set the actual position
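As a side note (my addition, not something the answer above claims): Transform.SetParent has a two-argument overload that avoids the compensation entirely, making the manual reset unnecessary:
// passing worldPositionStays = false keeps the prefab's authored local position/rotation/scale
var newObj = Instantiate(prefab);
newObj.GetComponent<RectTransform>().SetParent(parent, false);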

Related

(Unity + 2D) Change UI Button position issue

The last few days I've been stuck with a headache of a problem in Unity.
Okay, I won't go into details with my game, but I've made a super-simple example which represents my issue.
I have a 2D scene with these components:
When the scene loads and I tap the button, this script executes:
Vector3 pos = transform.position;
pos.x -= 10;
transform.position = pos;
I have also tried this code:
transform.position = Camera.main.WorldToScreenPoint(new Vector3(0, 0, 0));
The problem is that when I click the button, the x position of the object is set to -1536, which is not what I expected. The picture shows the scene after the button has been clicked; notice the Rect Transform values:
So I did a little Googling and found out about ScreenToWorldPoint, WorldToScreenPoint, etc., but none of these conversions solves my problem.
I'm pretty sure I'm missing something here, which is probably right in front of me, but I simply can't figure out what.
I hope someone can point me in the right direction.
Best regards.
The issue is that you are using transform.position instead of rectTransform.anchoredPosition.
While it's true that UI elements are still GameObjects and do have the normal Transform component accessible in script, what you see in the editor is a representation of the RectTransform that is built on top. This is a special inspector window for UI elements, which use the anchored positioning system so you can specify how the edges line up with their parent elements.
When you set a GameObject's transform.position, you are providing a world space position specified in 3D scene units (meters by default). This is different from a local position relative to the canvas or parent UI element, specified in reference pixels (the reference pixel size is determined by the canvas "Reference Resolution" field).
A further issue with your use of Camera.WorldToScreenPoint is that that function returns a position specified in pixels, whereas, as mentioned before, transform.position expects scene units (i.e. meters by default) that are not relative to the parent UI element. The inspector, though, knows it's a UI element, so instead of showing you that value, it shows you the world position translated into the UI's local coordinates.
In other words, when you set the position to zero, you are getting the indices of whatever pixels happen to be over the scene's zero point in your main camera's view, converting those pixel numbers to meters, and moving the element there. The editor is showing you a position in reference pixels (which are not necessarily screen pixels; check your canvas settings) relative to the object's parent UI element. To verify, try tilting your camera a bit and note that the value displayed will be different.
So again you would need to use rectTransform.anchoredPosition, and you would further need to ensure that the canvas resolution is the same as your screen resolution (or do the math to account for the difference). The way the object is anchored will also matter for what the rectTransform values refer to.
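As an illustrative sketch of my own (the names rectTransform, screenPoint, and uiCamera are assumptions), RectTransformUtility can convert a screen-space point into the parent's local space before you assign it:
// convert a screen point into the parent's local space, then assign it as the anchored position;
// pass null for the camera if the Canvas is Screen Space - Overlay
Vector2 localPoint;
RectTransform parentRect = (RectTransform)rectTransform.parent;
if (RectTransformUtility.ScreenPointToLocalPointInRectangle(parentRect, screenPoint, uiCamera, out localPoint))
{
    rectTransform.anchoredPosition = localPoint;
}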
Try using transform.localPosition = new Vector3(0, 0, 0); since your button is a child of multiple game objects. You could also try transform.TransformPoint, which converts a local position to a world position.
The issue is that your button is inside another object. You want to change the local position: transform.localPosition -= new Vector3(10, 0, 0);
As @Joseph has clearly explained, you have to make changes to the RectTransform of your UI components instead of changing the Transform.
To achieve what you want, do it like this:
RectTransform rectTransform = this.GetComponent<RectTransform>();
Vector2 anchoredPos = rectTransform.anchoredPosition;
anchoredPos.x -= 10;
rectTransform.anchoredPosition = anchoredPos;
Just keep in mind that these 10 units are not 3D world-space units (like meters or centimeters).
Try these things, because I did not fully understand what you were trying to do.
Try using transform.deltaposition
Go to the Canvas and set it to scale with screen size. Then you can use transform.position = new Vector3(transform.position.x - 10, transform.position.y, transform.position.z);
And if this doesn't work:
transform.Translate(new Vector3(transform.deltaposition.x - 10, transform.deltaposition.y, transform.deltaposition.z));
I have a better idea. Instead of changing the positions of the buttons, why not change the values that the buttons represent?
Moving Buttons
Remember that the buttons are GameObjects, and every GameObject has a position vector in its transform. So if your button is named ButtonA, then in your code you want to get a reference to it:
GameObject buttonA = GameObject.Find("ButtonA");
//or you can assign the game object from the inspector
Once you have a reference to the button, you can proceed in moving it. So let's imagine that we want to move ButtonA 10 units left.
Vector3 pos = buttonA.transform.position;
pos.x -= 10f;
buttonA.transform.position = pos;

Unity3d - Need to hide a group of objects in the area

I've already tried depth mask shaders and examined some other ideas, but none of them seems to suit me.
I'm making an AR game and I have a scene with a house and trees. All these objects are animated and fall from the sky, not all at once but in sequence: for example, the house first, then the trees, then the fence, etc.
(Please look at my picture for details) http://f2.s.qip.ru/bVqSAgcy.png
If the user moves the camera too far, they will see all these objects stuck in the air, waiting for their turn to start falling, and that is not good. I want to hide this area from all sides (because in AR the camera can move around freely) and make each part visible only when it starts moving (falling down).
(One more screen) http://f3.s.qip.ru/bVqSAgcz.png
I thought about animation events, but there are too many objects (bricks, for example) and I can't handle all of them manually.
I look forward to your great advice ;)
P.S. Sorry for my bad English.
You can disable the mesh renderers of the objects that are going to fall and re-enable them when they are ready to fall.
See here for more details about mesh renderer.
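For example, a minimal sketch, assuming (my assumption, not stated in the answer) that all the falling objects sit under a single parent transform called compositionRoot:
// disable every MeshRenderer under the composition root before the sequence starts
foreach (MeshRenderer renderer in compositionRoot.GetComponentsInChildren<MeshRenderer>())
    renderer.enabled = false;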
Deactivate your object. You can use the camera's viewport coordinates to get a y position outside the viewport; they run from the bottom left of the screen (0, 0) to the top right (1, 1). Convert them to world-space coordinates with Camera.ViewportToWorldPoint:
Vector3 outsideCamera = Camera.main.ViewportToWorldPoint(new Vector3(0.5f, 1.2f, 10.0f));
Now you can use the intended x and z positions of your object. Activate it when you want to drop it.
myObject.transform.position = new Vector3(myObject.transform.position.x, outsideCamera.y, myObject.transform.position.z);
Another thing you could do in addition is scale the object from very small to its intended size while it is falling. This would prevent the object from being visible before falling when the user points the camera upwards.
1- Maybe you can use the Camera far clipping plane property.
Or you could even use two cameras: one displaying, say, the landscape (which will not render the house, trees, etc.) with a large far clipping plane, and a second one with Depth-only clear flags rendering only the items (this one can have a smaller far clipping plane, from what I understand).
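A rough sketch of the two-camera idea; the layer name "Items" and the itemCameraObject reference are assumptions for illustration:
// second camera: renders only the falling items, on top of the landscape camera's image
Camera itemCam = itemCameraObject.GetComponent<Camera>();
itemCam.clearFlags = CameraClearFlags.Depth;       // draw over the landscape camera's image
itemCam.cullingMask = LayerMask.GetMask("Items");  // only render objects on the "Items" layer
itemCam.farClipPlane = 20f;                        // smaller far plane than the landscape camera
itemCam.depth = 1;                                 // render after the landscape camera (depth 0)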
2- Another suggestion would be to add the scale to your animation (a rough sketch follows the list):
set the scale to 0 at the beginning of the animation
wait until the item needs to fall down
set the scale to 1 (with a transition if needed)
make the item fall down
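A rough sketch of those steps as a coroutine (my own illustration with a plain linear scale-up; the answer doesn't prescribe an implementation):
// requires: using System.Collections; using UnityEngine;
IEnumerator ScaleUpAndDrop(Transform item, float duration)
{
    item.localScale = Vector3.zero;                      // step 1: hide the item via zero scale
    // ... step 2: wait here until it is this item's turn to fall ...
    for (float t = 0f; t < duration; t += Time.deltaTime)
    {
        item.localScale = Vector3.one * (t / duration);  // step 3: grow back to full size
        yield return null;
    }
    item.localScale = Vector3.one;
    // step 4 (making the item fall) would start here
}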
EDIT: the workaround you found is just fine too! But tracking only the world position should be enough, I think (saving a tiny amount of memory).
Hope this helps,
Finally, the solution I chose: I've added this script to each object in the composition. It stores the object's position (in my case both world and local) at Start() and listens for changes in Update(). When the position changes, it stops monitoring and enables the MeshRenderer.
[RequireComponent(typeof(MeshRenderer))]
public class RenderScript : MonoBehaviour
{
    private MeshRenderer mr;
    private bool monitoring = true;
    private Vector3 posLocal;
    private Vector3 posWorld;

    // Use this for initialization
    void Start()
    {
        mr = GetComponent<MeshRenderer>();
        mr.enabled = false;
        posLocal = transform.localPosition;
        posWorld = transform.position;
    }

    // Update is called once per frame
    void Update()
    {
        if (monitoring)
        {
            if (transform.localPosition != posLocal || transform.position != posWorld)
            {
                monitoring = false;
                mr.enabled = true;
            }
        }
    }
}
Even my funny cheap Chinese smartphone stays alive after this, so I guess it's OK.

Strange effect when translating world space UI Canvas

My canvas gets strange effects when translated; it looks like two canvases on top of each other, with one lagging behind the first. Like this:
I attach the canvas to the controller like this (the Menu prefab has its canvas set to World Space):
currentMenuOwner = hand;
currentMenu = Instantiate (MenuPrefab);
currentMenu.transform.SetParent (currentMenuOwner.transform);
I then move it like this from Update():
currentMenu.transform.position = currentMenuOwner.transform.position;
currentMenu.transform.rotation = currentMenuOwner.transform.rotation;
currentMenu.transform.localPosition = new Vector3 (0, 0, 0.16f);
currentMenu.transform.localRotation = Quaternion.Euler (90, 0, 0);
Update: I added this code to the attach code above; it didn't help.
currentMenu.GetComponent<Canvas>().worldCamera = NVRPlayer.Instance.Head.GetComponentInChildren<Camera> ();
You can't move/translate a Canvas unless its Render Mode is World Space; you can only move the child UI components of the Canvas. If you want to translate your Canvas, change the Render Mode to World Space, then put the main camera in the Canvas's Event Camera slot.
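In code, that could look something like this (a sketch reusing the question's currentMenu; using Camera.main for the Event Camera slot is an assumption):
Canvas canvas = currentMenu.GetComponent<Canvas>();
canvas.renderMode = RenderMode.WorldSpace;  // required before the Canvas itself can be moved
canvas.worldCamera = Camera.main;           // fills the Canvas's Event Camera slot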

Sprite disappearing to the back of another sprite on moving to defined point via transform.position

I'm having difficulty with Unity 2D's sprite rendering.
Currently I have a sprite for a gameBoard, an empty GameObject holding the spawnPoint with a random sprite marking it, and a playerSprite to be instantiated as a prefab. If I just use the hierarchy in Unity, the playerSprite shows perfectly above the gameBoard, and "hard-coding" its position always keeps it above the gameBoard sprite, visible to the eye.
The problem comes when I want to instantiate the gameBoard and dynamically add the playerPrefabs to the game.
Here is the code snippet I am currently using:
gameBoard.SetActive(true); //gameBoard is defined as a public gameObject with its element defined in Unity as the gameBoard sprite.
Player.playerSprite = (GameObject)Instantiate(Resources.Load("playerSprite"));
Player.playerSprite.transform.localPosition = spawnPoint.transform.localPosition;
The result is that the spritePrefab spawns at the place I want perfectly, but behind the gameBoard sprite, making it hidden when the game runs.
The result is the same when using transform.position instead of transform.localPosition.
How should I code the transform part so that my playerSprite stays visible? Thanks.
It's most likely not an issue with the position, but rather the Sorting Order of your Sprite Renderers.
The default values for any SpriteRenderer are Sorting Layer = Default and Order in Layer = 0.
Sprite Renderers with a higher sorting order are rendered on top of those with a lower value.
Add the following lines to the end of your code, and try it out.
gameBoard.GetComponent<SpriteRenderer>().sortingOrder = 0;
Player.playerSprite.GetComponent<SpriteRenderer>().sortingOrder = 1; // higher value renders on top of the board
Obviously, you could do the same thing in the inspector as well.

Unity 4.6: Button instances added at runtime are not scaled by Reference Resolution?

(Using Unity 4.6.0b20)
I've hit a problem where the prefab button size works correctly when the button is added in the editor but seems to ignore the Reference Resolution when it is added by script.
This uses a Canvas with Reference Resolution 1280x720 and Match Width Or Height. The Canvas has a Panel with a Vertical Layout Group for the buttons. The Button has a Preferred Width/Height, and it is saved as a Prefab so new instances can be created from assets at runtime.
In the editor I can drag the prefab into the scene to add instances to the panel, which also has width 102, and they stack and scale nicely:
But when I instead add new instances of the prefab to the Panel via script, they appear with the wrong size. Looking at the dimensions, my guess is that the 102-pixel size is not being scaled by the Reference Resolution:
The script code creates the instance via GameObject.Instantiate() and adds it to the Panel by setting transform.parent:
GameObject uiInstance = (GameObject)GameObject.Instantiate(Resources.Load<GameObject>(assetPath));
uiInstance.transform.parent = unitButtonsPanel.transform;
I assume either there is more that needs to be done when adding the Button to the Panel than just setting the parent, or it's a bug in the beta...
Suggestions?
Use the method transform.SetParent, passing false as the second parameter:
uiInstance.transform.SetParent(unitButtonsPanel.transform, false);
This way you prevent Unity from trying to keep the object's world position and scale when reparenting.
You can find more information in the Unity manual by searching for "UICreateFromScripting".
I found the cause: when the prefab is added by script, the Scale values are set incorrectly; in this test, Scale was being set to 1.186284.
But if the script sets scale to 1.0 immediately after adding the button:
GameObject uiInstance = (GameObject)GameObject.Instantiate(Resources.Load<GameObject>(assetPath));
uiInstance.transform.parent = unitButtonsPanel.transform;
// HACK force scale to 1.0
uiInstance.transform.localScale = Vector3.one;
the new button instances are sized correctly.
IMHO this is a Unity bug: adding the prefab instance in the editor sets scale 1.0, and the prefab itself has scale 1.0, so I can't see a reason why adding it from script should act differently. But I'm kind of guessing, as there's not much documentation for the new UI yet :)
This is not a bug; it's an order-of-execution issue. When using Unity's new layout components, the values are driven by the layout components, which are calculated once at the end of the frame in which a layout rebuild call is made. If you're basing anything in a script on the new rectTransform.rect calculations, which are driven by the layout elements, you'll need to wait until this calculation has been performed, or request it yourself using:
LayoutRebuilder.MarkLayoutForRebuild (transform as RectTransform);
You'll still have to wait until the end of the frame for those calculations to take place before using anything from the RectTransform, like scale, position, etc. This had me stumped for a while when I first started dynamically creating UI elements and the scale was zero depending on when I instantiated them.
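As an illustration (my sketch, not from the original answer), you can request the rebuild and then wait for the end of the frame before reading the driven values:
// requires: using System.Collections; using UnityEngine; using UnityEngine.UI;
IEnumerator ReadRectAfterLayout(RectTransform rt)
{
    LayoutRebuilder.MarkLayoutForRebuild(rt);
    yield return new WaitForEndOfFrame();  // layout is calculated at the end of the frame
    Debug.Log(rt.rect.size);               // the driven values are now safe to read
}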
You can read about that rebuild call and the order of execution of the layout calc stuff here:
http://docs.unity3d.com/Manual/UIAutoLayout.html