Unity: Positioning an element on canvas

I need to move an image down the canvas so that its center ends up where its top edge is now. That offset is about 50 points, but if I simply decrease y by 50, the image lands in a different part of the screen on devices with different screen sizes. I assume that's because my main canvas is set to scale with the screen size. Do I really need to divide 50 by my design screen height and then multiply by Screen.height in code, or is there a more convenient way to move UI objects?
A second question, if I may: do you think it is even wise to build a game purely on a canvas? My game is a simple 2D game, only lightly animated, with many layout elements, so I decided to go that route, but I'm having a hard time grasping the UI positioning rules.

It sounds like you have an anchoring problem.
Unity UI depends heavily on anchoring; with the right anchors set up, this issue goes away.
For example, if an element is anchored to the center, changing its left and right values moves it relative to that center anchor.
For a clearer picture, you could post a screenshot of the behavior.
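As a rough sketch of how that offset can be done in canvas units rather than screen pixels (the component name and the half-height assumption are mine, not from the question or answer above):

```csharp
using UnityEngine;

// Minimal sketch: shift a UI image down by half of its own height,
// expressed in canvas units so the Canvas Scaler handles different
// screen sizes for us. Attach to the image and call Reposition().
public class CenterOnTopEdge : MonoBehaviour
{
    public void Reposition()
    {
        RectTransform rt = (RectTransform)transform;

        // rect.height is measured in canvas units, not screen pixels,
        // so this offset stays consistent under "Scale With Screen Size".
        float halfHeight = rt.rect.height * 0.5f;

        rt.anchoredPosition += new Vector2(0f, -halfHeight);
    }
}
```

Because anchoredPosition and rect.height are both in the canvas's own units, there is no need to convert through Screen.height by hand.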

Related

2D sprite problem when setting up an instant messaging UI

I'm new to Unity and to game development in general.
I would like to make a text-based game.
I'm looking to reproduce the behavior of an instant messenger like Messenger or WhatsApp.
I chose to use the Unity UI system for its pre-made components, like the Scroll Rect.
But this choice led me to the following problem:
I have "bubbles" of dialogs, which must be able to grow in width as well as in height with the size of the text. Fig.1
I immediately tried to use VectorGraphics to import .svg with the idea to move runtime the points of my curves of Beziers.
But I did not find how to access these points and edit them runtime.
I then found the "Sprite shapes" but they are not part of the "UI",
so if I went with such a solution, I would have to reimplement
scroll, buttons etc...
I thought of cutting my speech bubble in 7 parts Fig.2 and scaling it according to the text size. But I have the feeling that this is very heavy for not much.
Finally, I wonder whether a hybrid solution would be best: use the UI for scrolling, then take the transforms and inject them into Sprite Shapes (outside the Canvas).
If 1. is possible, I would be very grateful for an example. If not, 2., 3. and 4. all seem feasible, and I would like your opinion on which of the three is most relevant.
Thanks in advance.
There is a simpler and quite elegant solution to your problem that uses nothing but the sprite itself (or rather the design of the sprite).
Take a look at 9-slicing Sprites in the official Unity documentation.
With the Sprite Editor you can create borders around the "core" of your speech bubble. Since these speech bubbles are usually colored in a single color and contain nothing else, the ImageType: Sliced would be the perfect solution for what you have in mind. I've created a small Example Sprite to explain in more detail how to approach this:
The sprite itself is 512 pixels wide and 512 pixels high. Each of the cubes missing from the edges is 8x8 pixels, so the top, bottom, and left borders are 3x8=24 pixels deep. The right side has an extra 16 pixels of space to represent a small "tail" on the bubble (bottom right corner). So we have 4 borders: top=24, bottom=24, left=24 and right=40 pixels.
After importing such a sprite, we just have to set its MeshType to FullRect, click Apply and set the 4 borders using the Sprite Editor (don't forget to Apply them too). The last thing to do is to use the sprite in an Image Component on the Canvas and set the ImageType of this Component to Sliced.
Now you can scale/warp the Image as much as you like - the border will always keep its original size without deforming. And since your bubble has a solid "core", the Sliced option will stretch this core unnoticed.
Edit: When scaling the Image you must use its Width and Height instead of the (1,1,1)-based Scale, because the Scale might still distort your Image. Also, here is another screenshot showing the results in different sizes.
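If you also want the bubble to follow the text size at runtime, here is a rough sketch that drives the Width/Height rather than the Scale, as noted in the edit above (the field names and padding value are assumptions, not part of the answer):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: size a 9-sliced bubble Image to fit its Text.
// Assumes bubbleImage uses Image Type: Sliced, both fields are
// assigned in the Inspector, and the bubble's anchors sit at a
// single point (not stretched), so sizeDelta matches the
// inspector's Width/Height. "padding" is an arbitrary example.
public class BubbleResizer : MonoBehaviour
{
    public Image bubbleImage;
    public Text message;
    public Vector2 padding = new Vector2(40f, 24f);

    void LateUpdate()
    {
        // preferredWidth/preferredHeight report how much space the text wants.
        var size = new Vector2(message.preferredWidth, message.preferredHeight) + padding;

        // Change Width/Height (sizeDelta), not localScale, so the
        // sliced borders keep their original thickness.
        bubbleImage.rectTransform.sizeDelta = size;
    }
}
```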

How to resize camera to work with multiple resolutions on Unity?

I'm working on a fun project in Unity and I want to support all mobile resolutions in landscape mode. I designed everything to work at a 1920:1080 resolution.
Everything works in world space, including UI elements.
What's the correct way of supporting all resolutions (including unusual ones like a square 1:1)?
I don't want the scene to be cropped or filled with blue, all I want is my camera's viewport to be scaled to fit the device screen. I don't care if objects in the scene will get thin or fat.
To support different aspect ratios for your UI Elements, I recommend making use of the anchors that come with every RectTransform. These ensure that the position of an element is consistent across various aspect ratios. For example, anchoring an element to the left on the x axis puts its origin and pivot at the far left, so an x position of 10 will place the element 10 units away from the far left of its parent. Although you can set every anchor value yourself, Anchor Presets provide an easier way to anchor your UI Elements; provided an element has a RectTransform, the Anchor Presets are available for it. There is no need to change the width and height of your Canvas.
Aside from that, the Camera should automatically resize depending on the resolution of the Game view. You can test how your elements' anchoring behaves at different aspect ratios by setting the Game view's Aspect to Free Aspect, which changes the game's aspect ratio as you resize the Game view.
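If you prefer to set the same anchoring from a script, here is a minimal sketch (the 10-unit offset mirrors the example above; the script is assumed to be attached to the UI element itself):

```csharp
using UnityEngine;

// Minimal sketch: anchor a UI element to the left edge of its parent
// and place it 10 units from that edge, as described above.
public class AnchorLeftExample : MonoBehaviour
{
    void Start()
    {
        RectTransform rt = (RectTransform)transform;

        // Anchor to the left edge, vertically centered.
        rt.anchorMin = new Vector2(0f, 0.5f);
        rt.anchorMax = new Vector2(0f, 0.5f);
        rt.pivot = new Vector2(0f, 0.5f);

        // anchoredPosition is measured from the anchor, so x = 10
        // keeps the element 10 units from the parent's left edge
        // on every aspect ratio.
        rt.anchoredPosition = new Vector2(10f, 0f);
    }
}
```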

Unity UI screen size issue

I'm quite new to Unity, so I'm sorry if this is a basic question. I've been trying to set up the UI for a mobile game, but I'm not quite sure how to make the UI lock its position, no matter the screen size. I've tried using anchors (though I don't fully understand how to use them properly), I've tried using a Canvas Scaler, and I've looked at the Unity documentation, but I just can't seem to find an answer. The buttons are off screen or half off the screen when I build the game to my device or switch screen sizes in the Game view. Does anyone know how to fix this?
You can set the anchor point by selecting the UI object (such as a button) and choosing the appropriate anchor from the Anchor Presets in its Rect Transform. You can also hold Shift to set the pivot and/or Alt to move the object to that point at the same time. The object will then be anchored to that point and keep its position even if the resolution changes. You can set a precise position from the Inspector, too: simply adjust the Pos X and Pos Y values, which are still measured relative to the anchor point.
Note that you might have to play around with the Canvas object's UI Scale mode and its settings to get the right setup.
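As a sketch of the Canvas Scaler settings that usually pair with this (the 1920x1080 reference resolution is just an assumed example, and the same values can be set directly in the Inspector instead):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: configure the Canvas Scaler so the UI scales with
// the screen instead of staying at a fixed pixel size. The 1920x1080
// reference resolution is an assumed example value.
public class CanvasScalerSetup : MonoBehaviour
{
    void Awake()
    {
        var scaler = GetComponent<CanvasScaler>();

        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(1920f, 1080f);

        // 0 = match width, 1 = match height; 0.5 blends both.
        scaler.screenMatchMode = CanvasScaler.ScreenMatchMode.MatchWidthOrHeight;
        scaler.matchWidthOrHeight = 0.5f;
    }
}
```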

Adjusting the 3D game to the screen size

How can I make a 3D game adapt to the screen resolution?
I tried changing the camera's fieldOfView, but that adjustment doesn't work correctly.
If you mean UI elements: there are little triangles, usually in the middle of the canvas that the element is under. These are anchors, which tell the element to try to stay in the same place on the canvas regardless of the screen resolution. You can read more about it here: https://docs.unity3d.com/Manual/UIBasicLayout.html https://docs.unity3d.com/Manual/HOWTO-UIMultiResolution.html
If you mean your actual game view, you'd probably need to write a script that adjusts the camera's FOV at the start of the game based on the resolution, but I have no idea where to even begin on the formula you'd use.
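As a starting point for that formula (this is not from the answer above, just a common approach): keep the horizontal field of view constant and recompute the vertical FOV, which is what Camera.fieldOfView stores, from the current aspect ratio.

```csharp
using UnityEngine;

// Minimal sketch: keep the horizontal field of view constant across
// aspect ratios by recomputing the vertical FOV at startup.
// designAspect and designVerticalFov are assumed example values
// for a 16:9 layout.
public class ConstantHorizontalFov : MonoBehaviour
{
    public float designAspect = 16f / 9f;
    public float designVerticalFov = 60f;

    void Start()
    {
        var cam = GetComponent<Camera>();

        // Horizontal FOV implied by the design values:
        // tan(hFov/2) = tan(vFov/2) * aspect.
        float hFov = 2f * Mathf.Atan(Mathf.Tan(designVerticalFov * 0.5f * Mathf.Deg2Rad) * designAspect);

        // Vertical FOV that preserves that horizontal FOV at the current aspect.
        float vFov = 2f * Mathf.Atan(Mathf.Tan(hFov * 0.5f) / cam.aspect);

        cam.fieldOfView = vFov * Mathf.Rad2Deg;
    }
}
```

With this, wider screens see the same horizontal extent and simply gain or lose vertical view, so nothing is cropped; objects may still look thinner or fatter on unusual aspect ratios, which matches what the question allows.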

UnityVR - Having the both renders display the exact same image for each eye

When Unity builds a VR project, by default it is set to make the two views stereoscopic. It slightly offsets the camera position of one eye to give the user a sense of depth.
For example a square will appear slightly to the left on the right view compared to the left view.
I want to make the camera truly monoscopic by removing the offset that is created when I build the project. Each camera should render all objects in exactly the same position for both eyes.
One of the things I tried was creating two cameras and assigning them to the left and right eyes, then manually adjusting the position/rotation of one camera until it looked monoscopic.
That worked fine on my Pixel phone, but as soon as I put the project on my test phone, the difference in resolution broke the view I was going for: the blocks were not in the same position in the two renders.
If anyone has any solutions or ideas as to how I can go about this, I would greatly appreciate it.
Thank you!
You can still use 2 cameras, but instead of offsetting them, you can just make each camera's viewport half the width of the screen.
Make 2 cameras and set their positions to exactly the same values.
On the left-eye camera, set the viewport width to 0.5 and its x to 0.
On the right-eye camera, set the viewport width to 0.5 and its x to 0.5.
You should now have 2 cameras rendering exactly the same thing, side by side across the screen, with no sense of depth.
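A minimal sketch of that setup in code (the camera references are assumed to be assigned in the Inspector; this only splits the screen into two identical halves and does not touch any XR settings):

```csharp
using UnityEngine;

// Minimal sketch: render the same view side by side with no eye
// offset by giving two co-located cameras half-width viewports.
// leftCamera and rightCamera are assumed to be assigned in the
// Inspector and to share the same position and rotation.
public class SideBySideMono : MonoBehaviour
{
    public Camera leftCamera;
    public Camera rightCamera;

    void Start()
    {
        // Normalized viewport rects: x, y, width, height in the 0..1 range.
        leftCamera.rect = new Rect(0f, 0f, 0.5f, 1f);
        rightCamera.rect = new Rect(0.5f, 0f, 0.5f, 1f);
    }
}
```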