My Android game uses screen coordinates derived from real-space coordinates, and my conversion goes like this (it's all pseudocode, but I've highlighted it as code):
Real-space = position / 480; (so, for example, 240/480 would be halfway across the screen)
Velocity = 1 / Time; (Time in seconds)
Real-space = Real-space + (Velocity * deltaTime);
Screen coordinates = Real-space * screen width (or length)
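As a concrete sketch of that conversion in C# (the 480 reference width comes from the example above; all names here are illustrative, not the poster's actual code):
static class RealSpace
{
    // 240 / 480 => 0.5, i.e. halfway across the screen.
    public static float FromPixels(float position) => position / 480f;

    // velocity = 1 / timeToCrossInSeconds; advance by one frame's delta time.
    public static float Step(float realSpace, float velocity, float deltaTime)
        => realSpace + velocity * deltaTime;

    // Map the 0..1 real-space value back to device pixels.
    public static float ToScreen(float realSpace, float screenWidth)
        => realSpace * screenWidth;
}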
Now, the problem I have is that in my game I need to match the screen coordinates of 2 sprites so they move together (one on top of the other), so I'm simply using something like:
Sprite1_Screen_Coords = Sprite2_Screen_Coords - Sprite1_Height
(All heights are scaled, so they are relative to the screen size the app is currently being run on.)
This works OK, but I also need to match the 'real space' coordinates.
Obviously I can do something like:
Sprite1-Real-Co-ordinates = Sprite2-Real-Co-ordinates
which means they would be the same, but what value would I subtract so it 'sits' perfectly on top of the other sprite? How do I derive this missing value from the sprites I have? So, to summarise, I need something like:
Sprite1-Real-Co-ordinates = (Sprite2-Real-Co-ordinates - something representing the sprite's height)
Thanks all! :-)
The answer was that I simply divided the sprite's scaled height by the current screen height and used that value - seems to work on the 3 resolutions I tested it on.
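A minimal sketch of that fix, assuming sprite1ScaledHeight and screenHeight are both in pixels (the names are illustrative):
// Returns Sprite1's real-space Y so it sits exactly on top of Sprite2.
static float StackedRealY(float sprite2RealY, float sprite1ScaledHeight, float screenHeight)
{
    // Divide the scaled pixel height by the current screen height to get
    // its real-space equivalent, then subtract it from Sprite2's position.
    return sprite2RealY - (sprite1ScaledHeight / screenHeight);
}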
Say my camera is rotated 60 degrees around the X axis and looking down on a 9x9-block chess board. As we adjust the board size, I want to zoom out the camera. Say, for argument's sake, the camera's position is (4,20,-7), and like this the whole board is visible and takes up the full screen.
If I adjust my board size to, say, 11x11 blocks, I will now need to zoom out the camera. Say I want to maintain the same 60-degree angle and want the board to fill as much of the screen as it did before. What should the camera's new position be, and how do you calculate it?
The X part is easy, since you simply give the camera the same X position as the middle of the board. I'm not sure how to calculate the new Y and Z positions, though.
Any advice appreciated. Thanks.
Edit: And if I wanted to change the angle of the camera as well as zoom out, is that possible to calculate? This is less important since I'll probably stick with the same angle, but I'm interested to know the maths behind it anyway.
The Transform.Translate() method moves the transform according to its rotation, so you don't have to worry about which direction your camera is looking. Just
yourCamera.transform.Translate(Vector3.forward * moveAmount);
will move your camera forward, which means zooming in. If you want to zoom out, just flip the sign to minus.
Before I knew this, I used Mathf.Sin() and Mathf.Cos() to calculate the y and z world coordinates by hand, which sucks.
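For the original question of computing the position outright rather than translating, here's a minimal sketch. It assumes the camera keeps its 60-degree tilt and a fixed field of view, so the distance needed to frame the board scales linearly with board size; boardCentre, baseDistance, and baseBoardSize are all assumed names:
using UnityEngine;

// Sketch: reframe an n-x-n board at a fixed tilt by scaling the camera's
// distance from the board centre with the board size.
public class BoardCamera : MonoBehaviour
{
    public Transform boardCentre;     // middle of the board
    public float baseDistance = 20f;  // distance that framed the 9x9 board
    public float baseBoardSize = 9f;

    public void Reframe(float boardSize)
    {
        // With a fixed FOV and angle, the required distance grows linearly
        // with the size of what you want to keep on screen.
        float distance = baseDistance * (boardSize / baseBoardSize);

        // Back up along the camera's own view direction; the rotation
        // (and hence the 60-degree angle) is untouched.
        transform.position = boardCentre.position - transform.forward * distance;
    }
}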
I am building an AR application. I have some points which are real-world coordinates.
I can geolocate these points through Mapbox. My problem is that when I get far away from the points, they appear smaller. I want to see them at the same size regardless of the distance.
Here is an example of how the points are visualized:
So, if I am near the points, I see them at their normal size. But even if I am 400 km away from a point, I want to see it at the same size. Is that possible?
You can try to scale the labels by some value * the distance to the object.
If your viewpoint is at device and the label is at target, it would be:
float experimentalScale = 0.5f;
This is the amplifier of the distance: if you increase the value, the label gets bigger at greater distances. Try out what works best for you.
float scaleFactor = Vector3.Distance(device.transform.position, target.transform.position) * experimentalScale;
target.transform.localScale = new Vector3(scaleFactor, scaleFactor, scaleFactor);
This only works if your object's scale is 1. If it is something else, just multiply its original scale by scaleFactor.
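Putting those fragments together, a sketch of what the full script might look like, assuming it sits on the label itself and device points at the AR camera (both names are assumptions):
using UnityEngine;

// Sketch: keep a world-space label at a roughly constant on-screen size by
// scaling it with its distance to the viewer.
public class ConstantScreenSize : MonoBehaviour
{
    public Transform device;                // the viewer, e.g. the AR camera
    public float experimentalScale = 0.5f;  // tune until the size feels right

    void Update()
    {
        float scaleFactor =
            Vector3.Distance(device.position, transform.position) * experimentalScale;
        transform.localScale = new Vector3(scaleFactor, scaleFactor, scaleFactor);
    }
}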
I'm writing a custom content fitter that is required for my custom layout, so I need to control the RectTransform.sizeDelta property when the anchors aren't together, but I can't find anything that explains what this value is.
I don't need the Unity3D API reference; I've read it and got nothing from it, because it only says:
The size of this RectTransform relative to the distances between the anchors. If the anchors are together, sizeDelta is the same as size. If the anchors are in each of the four corners of the parent, the sizeDelta is how much bigger or smaller the rectangle is compared to its parent.
Can anyone explain in plain language what this means? And how can I calculate it manually when the anchors aren't together?
The definition is somewhat confusing, indeed.
sizeDelta, basically, returns the difference between the actual rectangle of the UI element and the rectangle defined by the anchors.
For example, given a rectangle of 300x200:
Anchors in the same place as the corners of the rectangle: sizeDelta is (0,0)
Left or right anchors at half width of the rectangle: sizeDelta is (150,0)
All four anchors in a point: sizeDelta is (300,200) (i.e.: same size as the rectangle)
As you can see, it doesn't matter at all where the centre of the rectangle defined by the anchors is; the only thing that matters is the difference between the width and height of the element's rectangle and those of the anchors' rectangle.
In pseudo-code, it's like this:
sizeDelta.x = UIElementRectangle.width - AnchorsRectangle.width;
sizeDelta.y = UIElementRectangle.height - AnchorsRectangle.height;
So, if the UI rectangle has a dimension bigger than the anchors' one, sizeDelta is positive; if it's smaller, sizeDelta is negative.
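In Unity terms, a sketch of doing that computation by hand, run on the element itself (manualSizeDelta should come out equal to rt.sizeDelta):
RectTransform rt = GetComponent<RectTransform>();
Vector2 parentSize = ((RectTransform)rt.parent).rect.size;

// Size of the rectangle spanned by the anchors, in the parent's space.
Vector2 anchorRectSize = Vector2.Scale(rt.anchorMax - rt.anchorMin, parentSize);

// sizeDelta = element rect size - anchor rect size.
Vector2 manualSizeDelta = rt.rect.size - anchorRectSize;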
sizeDelta: If you searched and ended up here for an explanation of what sizeDelta means, as in GetComponent<RectTransform>().sizeDelta.y, then clear your mind.
Visualize a small PANEL resting on top of a big CANVAS, its parent object.
In the PANEL's Rect Transform component, there are 2 rectangles defined:
(a) The rectangle defined by its anchors (those triangles), normally related to the parent object's location and dimensions, in this case the CANVAS.
(b) The rectangle defined by its own size, the PANEL's own dimensions.
sizeDelta = (b) - (a)
That's it. An interactive component like a Button is normally smaller than the object it rests on, like a Panel, so sizeDelta is normally a negative value: Button size - Panel size = a negative value, usually.
You know the term negative space, used in general design theory?
Think of it as the space NOT used by a Button resting on a Panel.
Example:
How to find the height of a Panel that is a child of a Canvas that is a camera overlay, and thus screen-sized. The anchors of the Panel are relative to the Canvas dimensions. The script is on the Panel object:
panelHeight = Screen.height + this.GetComponent<RectTransform>().sizeDelta.y;
Remember, sizeDelta is normally negative here, so it reads more like this pseudo-code:
panelHeight = Screen.height - this.sizeDelta.y
Hope this helps you, drove me crazy for a while. Cheers!
References:
https://www.youtube.com/watch?v=VhGxKDIKRvc
https://www.youtube.com/watch?v=FeheZqu85WI
public Vector2 ActualSize(RectTransform trans, Canvas canvas)
{
    var corners = new Vector3[4];
    trans.GetWorldCorners(corners);

    // Method one: measure the world-space corners directly
    // (corners are ordered bottom-left, top-left, top-right, bottom-right).
    //return new Vector2(corners[3].x - corners[0].x, corners[1].y - corners[0].y);

    // Method two: let Unity compute the pixel-adjusted rect.
    return RectTransformUtility.PixelAdjustRect(trans, canvas).size;
}
This function works in Start().
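For example, a hypothetical call from Start(), assuming myCanvas is a reference to the parent Canvas:
void Start()
{
    // Log the element's actual on-screen size in pixels.
    Vector2 size = ActualSize(GetComponent<RectTransform>(), myCanvas);
    Debug.Log("Actual size: " + size.x + " x " + size.y);
}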
I'm looking for a camera zooming effect like the one used in Tiny Wings, where the camera zooms out based on the characters height.
I want the character to start zooming after it reaches a set height and I want the zooming to be non-linear so that the character gradually gets closer to the camera bounds as it goes higher up the screen.
I'm currently using the following code to scale linearly
camera.scale = MIN(1, SCREEN_HEIGHT*0.7 / player_position_y);
This results in the player always being 30% away from the top of the screen. I'm trying to find an elegant solution that has the player go from 30% away from the edge of the screen to 10% away, depending on how high in the game world the character goes.
Just for completion I'm posting the solution I came up with.
float scalar = 4; // Had to tweak this number to get the difference in scales to feel right
float distance = player_position_y - SCREEN_HEIGHT*0.7;
float percentage = distance/(SCREEN_HEIGHT*2 - SCREEN_HEIGHT*0.7);
percentage = 1 - (percentage/scalar);
self.scale = MIN(1, SCREEN_HEIGHT*0.7 / (player_position_y * percentage));
Basically, I get the distance between where the character starts scaling and the max height the character can reach, as a percentage of the max height.
I divide that percentage by a scalar and invert it (the 1 - x step), then multiply the player height used in the scale calculation by the result. This means the scale calculation uses a position that drifts lower than the character as the character gains height.
So, I have a sprite that I create on the screen every second. This sprite is a sequence of 20 images. I would like to know if this can hurt performance, and if so, how I can reduce the impact on performance. Thank you :) Sorry for my English, I'm French :/
I've worked with sprites before, and yes, the more you have on the screen, the lower your performance will be. The "sequence of 20 images" part is what worries me. Instead of using 20 separate images, look into something called a spritesheet.
A spritesheet is where all your images (an animation in your case, right?) are in one file, and you keep some parameters stored, like:
-How big is one frame?
-How many frames?
-X and Y positions.
Example: if I have a 5-frame animation and each image is 20x100 pixels, I would put them all in one image file side by side, making the image file 100x100. I would then draw each portion of this spritesheet on the screen in sequential order.
So my parameters would be:
-SizePerFrame = (20, 100)
-TotalSizeOfImage = (100, 100)
-FramesTotal = 5
-x y of First frame = (0,0)
So I would draw the portion from (0,0) to (20,100) for the first frame, (20,0) to (40,100) for the second, and so on.
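In code, picking out frame i of such a horizontal strip could look like this sketch (Unity-style Rect and Vector2 types; sizePerFrame matches the parameters above, and nothing here is tied to a specific framework):
// Source rectangle for frame i of a horizontal strip: frames sit side by
// side, so frame i starts at x = i * frame width.
Rect SourceRect(int frameIndex, Vector2 sizePerFrame)
{
    return new Rect(frameIndex * sizePerFrame.x, 0f,
                    sizePerFrame.x, sizePerFrame.y);
}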
Hope this makes sense