How to calculate sizeDelta in RectTransform? (Unity3D)

I'm writing a custom content fitter for my custom layout, so I need to control the RectTransform.sizeDelta property when the anchors aren't at the same point, but I can't figure out what this value represents.
I don't need the Unity3D API reference; I've read it and got nothing from it, because it only says:
The size of this RectTransform relative to the distances between the anchors. If the anchors are together, sizeDelta is the same as size. If the anchors are in each of the four corners of the parent, the sizeDelta is how much bigger or smaller the rectangle is compared to its parent.
Can anyone explain in plain language what this means, and how I can calculate it manually when the anchors aren't at the same point?

The definition is somewhat confusing, indeed.
sizeDelta, basically, returns the difference between the actual rectangle of the UI element and the rectangle defined by the anchors.
For example, given a rectangle of 300x200:
Anchors in the same place as the corners of the rectangle: sizeDelta is (0,0)
Left and right anchors separated by half the rectangle's width: sizeDelta is (150,0)
All four anchors in a point: sizeDelta is (300,200) (i.e.: same size as the rectangle)
As you can see, it doesn't matter at all where the center of the rectangle defined by the anchors is, the only thing that matters is the difference between the width and height of the element rectangle and the anchors rectangle.
In pseudo-code, it's like this:
sizeDelta.x = UIElementRectangle.width - AnchorsRectangle.width;
sizeDelta.y = UIElementRectangle.height - AnchorsRectangle.height;
So, if the UI rectangle's dimensions are bigger than the anchor rectangle's, sizeDelta is positive; if they're smaller, sizeDelta is negative.
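As a concrete illustration, here is a minimal C# sketch of that pseudo-code (my own names, not from the original answer), assuming the element's parent is also a RectTransform:
using UnityEngine;

public static class SizeDeltaExample
{
    // Recomputes sizeDelta by hand: element size minus the size of the
    // rectangle spanned by the anchors inside the parent.
    public static Vector2 ComputeSizeDelta(RectTransform child)
    {
        RectTransform parent = child.parent as RectTransform;

        // Width/height of the anchors' rectangle, in the parent's units.
        Vector2 anchorSize = Vector2.Scale(child.anchorMax - child.anchorMin,
                                           parent.rect.size);

        // Same as the pseudo-code above: UI rectangle minus anchors rectangle.
        return child.rect.size - anchorSize;
    }
}
For the 300x200 example above with all four anchors at one point, anchorSize is (0, 0), so this returns (300, 200).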

sizeDelta: If you searched for an explanation of what sizeDelta means, as in GetComponent<RectTransform>().sizeDelta.y, and ended up here, then clear your mind.
Visualize a small PANEL resting on top of a big CANVAS, its parent object.
In the PANEL's Rect Transform component, there are 2 rectangles defined:
(a) The rectangle defined by its anchors (those little triangles), normally related to the parent object's location and dimensions, in this case the CANVAS.
(b) The rectangle defined by its own size, the PANEL's own dimensions.
sizeDelta = (b) - (a)
That's it. Normally an interactive component like a Button is smaller than the object it rests on, such as a Panel, so sizeDelta usually ends up negative: Button size - anchor rectangle size (here, the Panel's size) = a negative value, normally.
You know the term Negative Space, used in general Design theory?
Think of it, as the space NOT used by a Button resting on a Panel.
Example:
How to find the height of a Panel that is a child of a Canvas rendered as a screen overlay, and therefore screen-sized. The anchors of the Panel are related to the Canvas dimensions, and the script sits on the Panel object:
panelHeight = Screen.height + this.GetComponent<RectTransform>().sizeDelta.y;
Remember, sizeDelta is normally negative here, so it effectively reads like this pseudo-code:
panelHeight = Screen.height - abs(sizeDelta.y)
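Here is a minimal, hedged sketch of that example as a full script; it assumes the Canvas is a screen-sized overlay with constant pixel size (so one canvas unit equals one screen pixel) and that the Panel's anchors are stretched to the Canvas corners, as described above:
using UnityEngine;

public class PanelHeightExample : MonoBehaviour // attach to the Panel
{
    void Start()
    {
        RectTransform rt = GetComponent<RectTransform>();

        // sizeDelta.y is typically negative here, so this effectively
        // subtracts the "negative space" from the screen height.
        float panelHeight = Screen.height + rt.sizeDelta.y;
        Debug.Log("Panel height: " + panelHeight);
    }
}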
Hope this helps you, drove me crazy for a while. Cheers!
References:
https://www.youtube.com/watch?v=VhGxKDIKRvc
https://www.youtube.com/watch?v=FeheZqu85WI

public Vector2 ActualSize(RectTransform trans, Canvas can)
{
    var v = new Vector3[4];
    trans.GetWorldCorners(v);

    // Method one: measure the world-space corners directly.
    //return new Vector2(v[3].x - v[0].x, v[1].y - v[0].y);

    // Method two: let Unity compute the pixel-adjusted rect for this canvas.
    return RectTransformUtility.PixelAdjustRect(trans, can).size;
}
This function works when called from Start().
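For completeness, a small usage sketch (my own wiring, not from the answer): a MonoBehaviour holding the method above and calling it from Start(), with the Canvas assigned in the Inspector:
using UnityEngine;

public class ActualSizeLogger : MonoBehaviour
{
    public Canvas canvas; // assign the parent Canvas in the Inspector

    void Start()
    {
        Debug.Log("Actual size: " + ActualSize(GetComponent<RectTransform>(), canvas));
    }

    public Vector2 ActualSize(RectTransform trans, Canvas can)
    {
        // Same as "method two" above: pixel-adjusted rect for this canvas.
        return RectTransformUtility.PixelAdjustRect(trans, can).size;
    }
}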

Related

(Unity + 2D) Change UI Button position issue

The last few days I've been stuck with a headache of a problem in Unity.
Okay, I won't go into details with my game, but I've made a super-simple example which represents my issue.
I have a 2D scene with these components:
When the scene loads and I tap the button, this script executes:
Vector3 pos = transform.position;
pos.x -= 10;
transform.position = pos;
I have also tried this code:
transform.position = Camera.main.WorldToScreenPoint(new Vector3(0, 0, 0));
The problem is that when I click the button, the x position of the object is set to -1536, which is not what I expected. The picture shows the scene after the button has been clicked; notice the Rect Transform values:
So I did a little Googling and found out about ScreenToWorldPoint, WorldToScreenPoint, etc., but none of these conversions solves my problem.
I'm pretty sure I'm missing something here, which is probably right in front of me, but I simply can't figure out what.
I hope someone can point me in the right direction.
Best regards.
The issue is that you are using transform.position instead of rectTransform.anchoredPosition.
While it's true that UI elements are still GameObjects and do have the normal Transform component accessible in script, what you see in the editor is a representation of the RectTransform that is built on top. This is a special inspector window for UI elements, which use the anchored positioning system so you can specify how the edges line up with their parent elements.
When you set a GameObject's transform.position, you are providing a world space position specified in 3D scene units (meters by default). This is different from a local position relative to the canvas or parent UI element, specified in reference pixels (the reference pixel size is determined by the canvas "Reference Resolution" field).
A potential issue with your use of Camera.WorldToScreenPoint is that that function returns a position specified in pixels. Whereas, as mentioned before, setting the transform.position is specified in scene units (i.e. meters by default) and not relative to the parent UI element. The inspector, though, knows it's a UI element so instead of showing you that value, it is showing you the world position translated to the UI's local coordinates.
In other words, when you set the position to zero, you are getting the indices of whatever pixels happen to be over the scene's zero point in your main camera's view, and converting those pixel numbers to meters, and moving the element there. The editor is showing you a position in reference pixels (which are not necessarily screen pixels, check your canvas setting) relative to the object's parent UI element. To verify, try tilting your camera a bit and note that the value displayed will be different.
So again you would need to use rectTransform.anchoredPosition, and you would further need to ensure that the canvas resolution is the same as your screen resolution (or do the math to account for the difference). The way the object is anchored will also matter for what the rectTransform values refer to.
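As a hedged sketch of what that looks like in practice (names like uiCamera and worldTarget are my own, and it assumes the element's anchors sit at its parent's center): convert the world position to screen pixels, then convert those pixels into the parent RectTransform's local space before assigning anchoredPosition:
using UnityEngine;

public class MoveUIToWorldPoint : MonoBehaviour
{
    public Camera uiCamera;     // leave null for a Screen Space - Overlay canvas
    public Vector3 worldTarget; // the world-space point the element should sit over

    void LateUpdate()
    {
        RectTransform rt = (RectTransform)transform;
        RectTransform parent = (RectTransform)rt.parent;

        // World units -> screen pixels.
        Vector2 screenPoint = Camera.main.WorldToScreenPoint(worldTarget);

        // Screen pixels -> the parent's local (canvas) units.
        Vector2 localPoint;
        if (RectTransformUtility.ScreenPointToLocalPointInRectangle(
                parent, screenPoint, uiCamera, out localPoint))
        {
            rt.anchoredPosition = localPoint; // valid with centered anchors
        }
    }
}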
Try using transform.localPosition = new Vector3(0, 0, 0); as your button is a child of multiple game objects. You could also try using transform.TransformPoint, which should just convert a local position to a world position.
The issue is that your button is inside of another object. You want to be changing the local position. transform.localPosition -= new Vector3(10, 0, 0)
As @Joseph has clearly explained, you have to make the changes on your UI components' RectTransform, instead of changing the Transform.
To achieve what you want, do it like this:
RectTransform rectTransform = this.GetComponent<RectTransform>();
Vector2 anchoredPos = rectTransform.anchoredPosition;
anchoredPos.x -= 10;
rectTransform.anchoredPosition = anchoredPos;
Just keep in mind that these 10 units are not 3D world-space units (like meters or centimeters).
Try these things, because I did not fully understand what you were trying to do.
Try using transform.deltaposition.
Go to the Canvas, set it to Scale With Screen Size, and then you can use transform.position = new Vector3(transform.position.x - 10, transform.position.y, transform.position.z)
And if that doesn't work:
transform.Translate(new Vector3(transform.deltaposition.x - 10, transform.deltaposition.y, transform.deltaposition.z));
I have a better idea. Instead of changing the positions of the buttons, why not change the values that the buttons represent?
Moving Buttons
Remember that the buttons are GameObjects, and every GameObject has a position vector in its Transform. So if your button is named ButtonA, then in your code you want to get a reference to that.
GameObject buttonA = GameObject.Find("ButtonA");
//or you can assign the game object from the inspector
Once you have a reference to the button, you can proceed in moving it. So let's imagine that we want to move ButtonA 10 units left.
Vector3 pos = buttonA.transform.position;
pos.x -= 10f;
buttonA.transform.position = pos;

Unity2D - uniform positioning of transforms with different sizes

I have a rectangle (sprite) and I need to place different game objects (sprites) inside that rectangle but so they are all "aligned" by their bottoms.
For the life of me, I cannot make it work in Unity.
Say that my box has a height of 5.
I want to place the different-sized objects so they are all "resting" at y = 2.5 inside the box.
Does anyone know how I can do that since transform.position measures from the center of the GameObject?
Thanks!
Don't use transform.position; use the RectTransform properties, as they take anchor points into account. In particular, you need to set the anchor position for the sprite in the prefab / Inspector and then use RectTransform.anchoredPosition to position it.
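A minimal sketch of that suggestion, assuming the objects are UI elements under a common parent RectTransform (the method name is mine): put the pivot and anchors on the bottom edge, so anchoredPosition.y refers to each element's bottom and the same y value lines them all up regardless of height:
using UnityEngine;

public static class BottomAlign
{
    public static void AlignToParentBottom(RectTransform item, float x)
    {
        item.pivot     = new Vector2(0.5f, 0f); // pivot on the bottom edge
        item.anchorMin = new Vector2(0.5f, 0f); // anchor to the parent's
        item.anchorMax = new Vector2(0.5f, 0f); // bottom edge
        item.anchoredPosition = new Vector2(x, 0f); // bottoms now line up
    }
}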

Swift SpriteKit: Flipping parent sprite randomizes child nodes horizontal alignment

I have a parent SKSpriteNode which contains a number of child SKShapeNodes, SKSpriteNodes and SKLabelNodes arranged in specific X,Y position relative to the parent.
When I flip the parent by running an SKAction on it:
SKAction.scaleXTo(-1, duration: duration)
It flips the parent including the children (which I want) but the horizontal alignment of all the children nodes is randomized.
I must mention that I have
parent.anchorPoint = CGPoint(x: 0,y: 0)
The reason this is happening is your anchor point. Scaling (and all transformations) happens around this point. To test, take a piece of paper, put a pencil on the top-left corner, then try moving the right side over to the left while leaving the pencil where it is. You will notice that the right side is now at -width of the paper.
What you need to do in this case, is add the width back in to keep the paper at the same location.
Same rule with the sprites. You have to add the width in when doing a negative scale, then remove the width when going back to a positive scale
(or just put your anchor in the center)
Geometrical transformations you apply to a node are automatically applied to its descendants.
This is why when you move/rotate/scale a node, the children (and the grandchildren...) are updated as well.
If you want to apply a transformation to your sprite without involving the other nodes, simply change the structure of your scene by moving the children of your sprite, as shown below.
As it is now
In this image the green circle is your sprite. In the left scenario, when you change the green node, its children are updated accordingly, and you don't want that.
As it should be
So move to the right scenario. Create a new SKNode (the orange one) and place it as shown below.
Now the green node has no children so you can apply geometrical transformation to it without any side effect.

Applying perspective that is always relative to the screen

I want to ask a question about the perspective that is achieved through CATransform3D.
I know that if you have a view that is 320x480 and then apply this:
CATransform3D perspective = CATransform3DIdentity;
CGFloat zDistance = 1000;
perspective.m34 = 1.0 / -zDistance;
view.layer.sublayerTransform = perspective;
you create a perspective that makes it look like the observer is looking straight at the center of the screen and therefore the same transformation looks different, depending on where the subview that is being transformed is located on the screen. For example, tilting a view looks like this when the view is in the middle of the screen:
And it looks like this if it's in the lower left corner:
Now, my problem is that making the perspective relative to the screen only works if the view I'm transforming is a subview of another view that is 320x480px big. But what if the view I want to transform is a subview of a view that is only 100x100px? Is there a way to make the perspective relative to the whole screen if the superview isn't the size of the screen?
Thanks in advance
According to Apple:
"The anchorPoint property is a CGPoint that specifies a location within the bounds of a layer that corresponds with the position coordinate. The anchor point specifies how the bounds are positioned relative to the position property, as well as serving as the point that transforms are applied around."
Your perspective should not be relative to the center of the screen, or even to the center of your layer, by default; is that where you have your anchor point? Aside from that, what you seem to be asking is how to make your perspective appear to be relative to a different point. The trick is that your perspective is created by multiplying by your perspective matrix. Setting m34 to a small number does nothing magical; you are simply multiplying by your projection matrix.
YourProjection = {1, 0, 0,              0,
                  0, 1, 0,              0,
                  0, 0, 1,              0,
                  0, 0, -1.0/zDistance, 1};
Remember that you can combine successive transforms by multiplying them together. Just transform your layer to wherever you want, then apply your Projection matrix, then transform it back, presto, perspective from a different origin.
float x = your coordinates relative to the screen center;
float y = same thing
TranslationMatrix = {1, 0, 0, 0,
                     0, 1, 0, 0,
                     0, 0, 1, 0,
                     x, y, 0, 1};
ReverseTranslationMatrix = {1, 0, 0, 0,
                            0, 1, 0, 0,
                            0, 0, 1, 0,
                            -x, -y, 0, 1};
//now just multiply them all together
Final = ReverseTranslation*YourProjection*Translation;
You will need to do the matrix math yourself; hopefully you already have a generic 4x4 column-major matrix class that can do multiplication for you, and if not, I suggest you make one. Also, if you are interested, you might consider reading this for an explanation of how the matrix you are currently using works, or this for a different take on projection matrices.

Problem with glTranslatef

I use the glTranslate command to shift the position of a sprite that I load from a texture in my iPhone OpenGL app. My problem is that after I apply glTranslatef, the image appears a little blurred. When I comment out that line of code, the image is crystal clear. How can I resolve this issue?
You're probably not hitting the screen pixel grid exactly, which causes texture filtering to blur the image. The issue is a bit subtle: instead of seeing the screen and the texture as arrays of points, see them as sheets of grid-ruled paper (the texture sheet can be stretched, sheared, and scaled). To make things look crisp, the grids must align perfectly.
The texture coordinates (0,0) and (1,1) don't hit the centers of the texels but the outer edges of the texture sheet, so you need a small offset and scale to address the texel centers. The same goes for placing the target quads on the screen: the vertex positions must be aligned with the edges of the screen, not the pixel centers. If your projection and modelview matrices are not set up so that one unit in modelview space is one pixel wide and the projection fills the whole screen (or window viewport), it's difficult to get this right.
One normally starts with
glViewport(0,0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// modelview XY range 0..width x 0..height now covers the whole viewport
// (0,0) don't address the lower left pixel but the lower left edge of this
// (width,height) similarily addresses the upper right corner
// drawing a 0..width x 0..height quad with texture coordinates 0..1 x 0..1
// will cover it perfectly
This will work as long as the quad has exactly the same dimensions as the texture it displays (i.e. its vertex positions match the texel grid), the texture coordinates span 0..1, and the vertex positions are integers.
Now the interesting part: what if they don't meet those conditions? Then aliasing occurs. In GL_NEAREST filtering mode things still look crisp, but some lines/rows are simply missing. In GL_LINEAR filtering mode neighbouring pixels are interpolated, with the interpolation factor determined by how far off the grid they are (in layman's terms; the actual implementation looks slightly different).
So how to solve your issue: draw sprites with a projection/modelview that matches the viewport, use only integer values for the vertex coordinates, and make the texture cover the whole quad. If you use only a part of the texture coordinate range, things get even more interesting, since you then address the texture grid, not the texel centers.
I would recommend looking at your modelview matrix declaration and making sure that glLoadIdentity() is called, so that the matrix stack is clean before applying the transform.