I have created a square that is 40x40, as shown above. I have a 4x40 strip that I'd like to use to animate (increase) the width of my square until it takes up the width of the whole screen within a second, regardless of the square's position. Quite similar to a progress bar loading on both sides.
UPDATE
I forgot to mention that the square is a physics body, hence the physics body must also increase as the sprite increases.
What you want is SKAction.scaleXTo to achieve what you are looking for:
SKAction.scaleXTo(sceneWidth / spriteWidth, duration: 1).
Now if you want the left and right side to not scale evenly, but instead reach both edges at the same time, what you can do is change the anchor point.
The math behind this assumes that your original anchor point is (0.5,0.5)
sprite.anchorPoint = CGPointMake(sprite.position.x / scene.size.width, sprite.anchorPoint.y)
E.g., the scene width is 100 and the sprite is at x = 75.
What this is basically saying is that your sprite sits at some percentage of the scene width, in the example's case 75%. By changing the anchor point to .75, the left side will fill faster than the right side as you expand the width, since the left side of the anchor point holds 75% of the width and the right side holds 25%.
Let's say we set the scale to 2; that means the left side of the anchor point will now be at 150%, while the right side will be at 50%.
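To make the math concrete, here is the example worked through as a small stand-alone sketch (plain C#, since it's only arithmetic; the SpriteKit calls themselves stay as above, and the values come from the example):

using System;

// Numbers from the example: scene width 100, sprite at x = 75, sprite width 40.
double sceneWidth = 100, spriteX = 75, spriteWidth = 40;

// Anchor = the sprite's fractional position in the scene: 75 / 100 = 0.75.
double anchorX = spriteX / sceneWidth;

// Scale factor that makes the sprite span the whole scene: 100 / 40 = 2.5.
double scale = sceneWidth / spriteWidth;

// After scaling, each side of the anchor grows in proportion to its share:
double leftSide = spriteWidth * anchorX * scale;        // 75 = distance to the left edge
double rightSide = spriteWidth * (1 - anchorX) * scale; // 25 = distance to the right edge

Console.WriteLine(leftSide + " + " + rightSide + " = " + (leftSide + rightSide)); // 75 + 25 = 100

Both sides reach their respective screen edges at the same final scale, which is exactly the progress-bar effect described in the question.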
In general, assuming the origin of your objects is in the top-left (or at least on the left, since we're only changing things on one axis): if you set start_x to the original x position of your square, start_width to its width, target_x to the x position of your strip, and target_width to its width, then:
x = start_x + (target_x - start_x) * a;
and
width = start_width + (target_width - start_width) * a;
And as a goes from 0.0 to 1.0, x and width will grow to match the strip.
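A minimal sketch of that interpolation (plain C#; the concrete values are just illustrative):

using System;

// Illustrative values: square at x = 100 with width 40; strip at x = 0 with width 320.
double startX = 100, startWidth = 40;
double targetX = 0, targetWidth = 320;

// As a runs from 0.0 to 1.0 (e.g. elapsedTime / 1 second),
// x and width grow linearly to match the strip.
for (double a = 0.0; a <= 1.0; a += 0.25)
{
    double x = startX + (targetX - startX) * a;
    double width = startWidth + (targetWidth - startWidth) * a;
    Console.WriteLine("a=" + a + "  x=" + x + "  width=" + width);
}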
Hope this helps.
The docs simply state:
RoundedRectangleBorder
A rectangular border with rounded corners.
Typically used with ShapeDecoration to draw a box with a rounded rectangle.
This shape can interpolate to and from CircleBorder.
BorderRadius.circular
Creates a border radius where all radii are [Radius.circular(radius)].
What does this mean?
If my button is 50 logical pixels (25 radius) and I set the radius to 20, should it then clip the corners outside of the 20 logical pixel radius?
If I set it to 30 the whole button would be within the circular radius, so nothing should be clipped.
This is not the case.
Everything >=30 seems to clip the corners to a 45 degree arc, resulting in a complete half circle on each short side of the button.
Can anyone explain this value and how to use it?
The radius tells you how far from the corner the arc is drawn. So if you want just slightly rounded corners you'd use a smaller value: a DecoratedBox that's 80pt x 30pt might take a 7pt circular radius, i.e. a circle whose radius extends 7pt along both the top and the adjacent side. If you apply a circular radius larger than half the shortest available side, that's where you run into the half circle: for this box, a radius of 15pt (half of the 30pt side) or larger would create that effect.
If you're looking for a box that fractionally decides its rounded corners for you, you could easily create a class that wraps a DecoratedBox inside a LayoutBuilder to figure out the shortest side and determine a fractional radius based on that length, along the lines of the sketch below.
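A hypothetical helper for that arithmetic could look like this (a plain C# sketch; EffectiveRadius is not a Flutter API, just an illustration of the half-the-shortest-side cap described above):

using System;

// Hypothetical helper: the visible radius maxes out at half the shortest side,
// which is where the half-circle effect described above begins.
static double EffectiveRadius(double width, double height, double requested) =>
    Math.Min(requested, Math.Min(width, height) / 2);

Console.WriteLine(EffectiveRadius(80, 30, 7));  // 7  -> slightly rounded corners
Console.WriteLine(EffectiveRadius(80, 30, 20)); // 15 -> half circles on the 30pt sides
Console.WriteLine(EffectiveRadius(50, 50, 30)); // 25 -> the 50px button becomes a circle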
I'm writing a custom content fitter that is required for my custom layout, so I need to control the RectTransform.sizeDelta property when the anchors aren't the same, but I can't figure out what this value represents.
I don't need the Unity3D API reference; I've read it and got nothing from it, because it says only:
The size of this RectTransform relative to the distances between the anchors. If the anchors are together, sizeDelta is the same as size. If the anchors are in each of the four corners of the parent, the sizeDelta is how much bigger or smaller the rectangle is compared to its parent.
Can anyone explain in plain language what it means? And how can I calculate it manually when the anchors aren't the same?
The definition is somewhat confusing, indeed.
sizeDelta, basically, returns the difference between the actual rectangle of the UI element and the rectangle defined by the anchors.
For example, given a rectangle of 300x200:
Anchors in the same place as the corners of the rectangle: sizeDelta is (0,0)
Left or right anchors at half width of the rectangle: sizeDelta is (150,0)
All four anchors in a point: sizeDelta is (300,200) (i.e.: same size as the rectangle)
As you can see, it doesn't matter at all where the center of the rectangle defined by the anchors is; the only thing that matters is the difference between the width and height of the element's rectangle and those of the anchors' rectangle.
In pseudo-code, it's like this:
sizeDelta.x = UIElementRectangle.width - AnchorsRectangle.width;
sizeDelta.y = UIElementRectangle.height - AnchorsRectangle.height;
So, if the UI rectangle has a dimension bigger than the anchors' one, sizeDelta is positive; if it's smaller, sizeDelta is negative.
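You can verify that relationship in Unity itself with a small sketch (a MonoBehaviour assumed to sit on the UI element; names are illustrative):

using UnityEngine;

public class SizeDeltaCheck : MonoBehaviour
{
    void Start()
    {
        RectTransform rt = GetComponent<RectTransform>();
        RectTransform parent = rt.parent as RectTransform;

        // Size of the rectangle spanned by the anchors, in the parent's space.
        Vector2 anchorsRect = Vector2.Scale(rt.anchorMax - rt.anchorMin, parent.rect.size);

        // sizeDelta = element rect - anchors rect (matches the pseudo-code above).
        Vector2 computed = rt.rect.size - anchorsRect;
        Debug.Log("computed: " + computed + "  reported: " + rt.sizeDelta);
    }
}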
sizeDelta: if you did a search and ended up here looking for an explanation of what sizeDelta means, as in GetComponent<RectTransform>().sizeDelta.y, then clear your mind.
Visualize a small PANEL resting on top of a big CANVAS, its parent object.
In the PANEL's Rect Transform component, there are 2 rectangles defined:
(a) The rectangle defined by its Anchors (those triangles), normally related to the parent object's location and dimensions, in this case the CANVAS.
(b) The rectangle defined by its own size, the PANEL's own dimension.
sizeDelta = (b) - (a)
That's it. An interactive component like a Button is normally smaller than the object it rests on, like a Panel; because of that, sizeDelta is normally a negative value. Button size - Panel size = a negative value, normally.
You know the term Negative Space, used in general Design theory?
Think of it as the space NOT used by a Button resting on a Panel.
Example:
How to find the height of a Panel that is a child of a Canvas set to camera overlay, and thus screen sized. The Anchors of the Panel are related to the Canvas dimensions. The script is on the Panel object:
panelHeight = Screen.height + this.GetComponent<RectTransform>().sizeDelta.y;
Remember, sizeDelta is normally negative so it reads more like this pseudo code:
panelHeight = Screen.height - this.sizeDelta.y
Hope this helps you, drove me crazy for a while. Cheers!
References:
https://www.youtube.com/watch?v=VhGxKDIKRvc
https://www.youtube.com/watch?v=FeheZqu85WI
public Vector2 ActualSize(RectTransform trans, Canvas canvas)
{
    // The element's four corners in world space.
    var v = new Vector3[4];
    trans.GetWorldCorners(v);

    // Method one: measure width and height from the world-space corners.
    //return new Vector2(v[3].x - v[0].x, v[1].y - v[0].y);

    // Method two: let Unity compute the pixel-adjusted rect for this canvas.
    return RectTransformUtility.PixelAdjustRect(trans, canvas).size;
}
This function works when called from Start().
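For example (assuming ActualSize lives in a MonoBehaviour attached to the UI element):

void Start()
{
    RectTransform rt = GetComponent<RectTransform>();
    Canvas canvas = GetComponentInParent<Canvas>();
    Debug.Log(ActualSize(rt, canvas)); // the element's on-screen size in pixels
}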
I'm reading the book "Learn Unity for 2D game development", and I don't know how to do this:
"The camera has been aligned in the world so that the screen space origin is at the bottom-left corner; meaning positive X spans across the screen width, and positive Y across the screen height from bottom to top."
I'm new to Unity and the book doesn't explain how to do it.
By the way, I'm using Unity 4.3.3f1 on a Windows 7.
How can I align the camera to make screen space origin at the bottom-left corner?
In a 2D game, you have an X-axis and Y-axis. When increasing an object's X-value, you could say the object is going right. When increasing the Y-value, you could say the object is going up.
In a 3D game, there is an additional axis, the Z-axis. This makes it possible to gain 'depth' in games.
Example: [picture of the X and Y axes]
If you want to create a 2D game in a 3D environment, you'll have to 'remove' one of the axes. The most common choice is to remove the Z-axis to keep the naming in line (X and Y remain, like in a 2D game).
To achieve 'removing' an axis in a 3D environment, your view has to look straight along it. In this case, your X and Y rotation have to be 0 so the camera faces straight down the Z-axis.
Example: consider the picture above to have a Z-axis as well. But because you are looking from Z = 0 towards the origin, the Z line doesn't go right, left, up, or down; the axis appears as little more than a single pixel.
When you do this using the camera, in such a way that the world origin is in front of you, higher X values are to your right, and higher Y values are above you, you've achieved this. This also means that the screen's X = 0 is at the far left, and the screen's Y = 0 is at the very bottom. So the screen space origin is at the bottom-left corner; positive X spans across the screen width, and positive Y spans the screen height from bottom to top.
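You can see this from a script: Unity's screen coordinates put (0, 0) at the bottom-left, so converting the two opposite screen corners shows X spanning the width and Y the height (a small sketch, assuming the main camera):

using UnityEngine;

public class ScreenOriginCheck : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;
        // Screen (0, 0) is the bottom-left corner; X grows right, Y grows up.
        Vector3 bottomLeft = cam.ScreenToWorldPoint(new Vector3(0, 0, cam.nearClipPlane));
        Vector3 topRight = cam.ScreenToWorldPoint(new Vector3(Screen.width, Screen.height, cam.nearClipPlane));
        Debug.Log("bottom-left: " + bottomLeft + "  top-right: " + topRight);
    }
}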
By saying the "camera has been aligned", the author doesn't mean that you manually align it in the scene; he's saying that the screen space origin is at the bottom-left corner by default.
Source: Unity Script Reference
I'm looking for a camera zooming effect like the one used in Tiny Wings, where the camera zooms out based on the characters height.
I want the character to start zooming after it reaches a set height and I want the zooming to be non-linear so that the character gradually gets closer to the camera bounds as it goes higher up the screen.
I'm currently using the following code to scale linearly
camera.scale = MIN(1, SCREEN_HEIGHT*0.7 / player_position_y);
This results in the player always being 30% away from the top of the screen. I'm trying to find an elegant solution where the player goes from 30% from the edge of the screen to 10% from the edge, depending on how high in the game world the character goes.
Just for completeness, I'm posting the solution I came up with.
float scalar = 4; // Had to tweak this number to get the difference in scales to feel right
float distance = player_position_y - SCREEN_HEIGHT*0.7;
float percentage = distance/(SCREEN_HEIGHT*2 - SCREEN_HEIGHT*0.7); // SCREEN_HEIGHT*2 is the max height the character can reach
percentage = 1 - (percentage/scalar);
self.scale = MIN(1, SCREEN_HEIGHT*0.70 / (player_position_y * percentage));
Basically I get the distance between where the character starts scaling and the max height the character can reach as a percentage of the max height.
I divide that number by a scalar and invert it (1 minus the result). I then multiply this percentage by the player height used in the scale calculation. This results in the scale calculation using a position that sits lower than the character's actual position as the character gains height.
I have a view that is 320x460 (standard iPhone screen) that I want to draw as if it were in landscape mode, though the phone is not oriented that way. This seems like a simple task, which I tried to solve by creating a subview of size 460x320, and then rotating it 90 degrees through setting the view transformation. This worked, except that the rotated view was not centered correctly. To 'fix' this I added a translation transformation, which ended up looking like this:
CGAffineTransform rotate = CGAffineTransformMakeRotation(M_PI / 2.0);
[landscapeView setTransform: CGAffineTransformTranslate(rotate, 70.0, 70.0)];
I don't mind having some adjustment transformation, but I have no clue where the magic number 70 came from. I just played around with it until the edges matched up correctly. How can I get rid of it, either by eliminating the translation transformation or by deriving the number from some meaningful math related to the height and width?
Just a hunch, but I'm guessing prior to the transform that you're setting (or defaulting) the center of landscapeView to (160, 230). When it rotates, it keeps the upper left position fixed.
(320 px screen width - 460 px width) = -140 px. Divide that in half since it's centered, and you get -70 px. Same idea with the vertical.
70 is the difference between the width and height, divided by two. (460 - 320) / 2. The division by two is what centers the view.
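So the translation can be derived rather than hard-coded. The arithmetic as a tiny sketch (plain C#; in the Objective-C above, the same expression would replace the two literal 70s passed to CGAffineTransformTranslate):

using System;

double width = 320, height = 460; // the portrait frame

// After rotating, the view is off by half the difference between
// the two dimensions on each axis; this offset re-centers it.
double offset = (height - width) / 2; // 70
Console.WriteLine(offset);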