UNITY: How to get the width of the line in pixels?

I am working on a project where the player draws something using a LineRenderer component, and I then process the drawing using the positions of the line. Getting the positions is pretty simple, but I also need to know which line width the player chose in game. Here is an example:
As you can see, I've made a simple drawing; the width of the line is chosen by the player. For this drawing I used a width of 0.15 on the LineRenderer component:
What I'm having trouble with is how to get the width of the line in pixels. Let's say the white image is 900x900 pixels. In that case, how can I get the width of the line in pixels relative to the width of the RawImage used as a background to draw on top of? Thank you in advance.

That depends on the import settings of your sprite, particularly the Pixels Per Unit property. Let's say you import your 900x900 px image with 90 pixels per unit. Then it will be 10x10 units in Unity. Your line is 0.15 units wide (assuming the scale is Vector3.one), so 0.15 units * 90 pixels/unit = 13.5 pixels.
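A minimal sketch of that calculation as a Unity script (the pixelsPerUnit field is an assumption you must set yourself to match your sprite's import settings):
using UnityEngine;

public class LineWidthInPixels : MonoBehaviour
{
    // Assumption: must match the Pixels Per Unit of your background sprite.
    public float pixelsPerUnit = 90f;
    public LineRenderer line;

    // Line width in pixels = width in units * local scale * pixels per unit.
    public float GetWidthInPixels()
    {
        return line.widthMultiplier * line.transform.localScale.x * pixelsPerUnit;
    }
}
For the example above: 0.15 * 1 * 90 = 13.5 pixels.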

If your canvas is P pixels wide, its transform.localScale.x is X, and LineRenderer.widthMultiplier is W, then the line width in pixels should be (P / X) * W.
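The same formula as a small helper (a sketch; the names are mine, not Unity API):
using UnityEngine;

public static class LineWidthUtil
{
    // P: canvas width in pixels, X: the canvas's localScale.x,
    // W: the line's widthMultiplier. Returns (P / X) * W.
    public static float WidthInPixels(float canvasWidthPixels, float canvasScaleX, LineRenderer line)
    {
        return (canvasWidthPixels / canvasScaleX) * line.widthMultiplier;
    }
}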

Related

Resolution for 2D pixel art games

I'm having trouble setting the right resolution in Unity so that my pixel art assets don't show pixel distortion. When I create a tile grid, the assets look terrible in the preview tab.
I have a tilemap with a 64x32 resolution for each tile.
I'm using 64 pixels per unit.
The camera size is set to 5 at a 640x360 resolution (using the following formula: vertical resolution / PPU / 2).
What am I doing wrong, and what am I missing?
I don't know how the tiles are defined, but assuming they are rects with textures on top, you could check your texture filter setting and play with it a little, setting it for example to "anisotropic".
To solve this problem and get a "pixel perfect" view, you need to apply the following formula:
Camera size = height of the screen resolution / PPU (pixels per unit) / 2
This will do the job!
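A minimal sketch of that formula applied to an orthographic camera at startup (assuming pixelsPerUnit matches your sprites' import settings):
using UnityEngine;

public class PixelPerfectCamera : MonoBehaviour
{
    // Assumption: must match the Pixels Per Unit of your imported sprites.
    public float pixelsPerUnit = 64f;

    void Start()
    {
        // Camera size = vertical resolution / PPU / 2
        Camera cam = GetComponent<Camera>();
        cam.orthographic = true;
        cam.orthographicSize = Screen.height / pixelsPerUnit / 2f;
    }
}
At the 640x360 resolution from the question with 64 PPU, this gives 360 / 64 / 2 = 2.8125, not 5, which is likely the source of the distortion.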

How to increase (animate) the width of the square on both ends?

I have created a square that is 40x40, as shown above. I have a 4x40 strip that I'd like to use to animate (increase) the width of my square until it takes up the width of the whole screen within a second, regardless of the square's position. Quite similar to a progress bar loading on both sides.
UPDATE
I forgot to mention that the square is a physics body, hence the physics body must also increase as the sprite increases.
What you want to do is use SKAction.scaleXTo to achieve what you are looking for:
sprite.runAction(SKAction.scaleXTo(sceneWidth / spriteWidth, duration: 1))
Now, if you want the left and right sides to not scale evenly, but instead reach both edges at the same time, you can change the anchor point.
The math behind this assumes that your original anchor point is (0.5, 0.5):
sprite.anchorPoint = CGPointMake(sprite.position.x / scene.size.width, sprite.anchorPoint.y)
E.g., the scene width is 100 and the sprite is at x = 75.
What this is basically saying is that your sprite sits at some percentage of the scene's width; in the example, 75%. By changing the anchor point to 0.75, the left side will fill faster than the right side as the width expands, since the left side of the anchor point holds 75% of the width and the right side holds 25%.
Let's say we set the scale to 2: the left side of the anchor point will now span 150% of the original width, while the right side will span 50%.
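That arithmetic, worked through as a tiny standalone sketch (C# here purely for illustration; the values are the ones from the example above):
using System;

class AnchorPointMath
{
    static void Main()
    {
        float sceneWidth = 100f, spriteX = 75f;

        // Anchor point = sprite position as a fraction of the scene width.
        float anchorX = spriteX / sceneWidth;                              // 0.75
        Console.WriteLine("anchorX = " + anchorX);

        // At scale 2, each side of the anchor doubles its share of the width:
        float scale = 2f;
        Console.WriteLine("left:  " + anchorX * scale * 100 + "%");        // 150%
        Console.WriteLine("right: " + (1 - anchorX) * scale * 100 + "%");  // 50%
    }
}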
In general, assuming the origin of your objects is in the top-left (or at least the left, since we're only changing things on one axis): set start_x to the original x position of your square, start_width to its width, target_x to the x position of your strip, and target_width to its width. Then:
x = start_x + (target_x - start_x) * a;
and
width = start_width + (target_width - start_width) * a;
And as a goes from 0.0 to 1.0, x and width will grow to match the strip.
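A small self-contained sketch of that interpolation (in C# purely for illustration, since the formula is platform-agnostic; the start and target values below are made up):
using System;

class GrowToStrip
{
    // The formula from above: value = start + (target - start) * a.
    static float Lerp(float start, float target, float a)
    {
        return start + (target - start) * a;
    }

    static void Main()
    {
        float startX = 100f, startWidth = 40f;   // the 40x40 square
        float targetX = 0f, targetWidth = 320f;  // hypothetical full-screen strip

        // Sample the one-second animation at a few points of a.
        for (float a = 0f; a <= 1f; a += 0.25f)
        {
            float x = Lerp(startX, targetX, a);
            float width = Lerp(startWidth, targetWidth, a);
            Console.WriteLine("a=" + a + "  x=" + x + "  width=" + width);
        }
    }
}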
Hope this helps.

Align the camera to make screen space origin at the bottom-left corner

I'm reading the book "Learn Unity for 2D game development", and I don't know how to do this:
"The camera has been aligned in the world so that the screen space origin is at the bottom-left corner; meaning positive X spans across the screen width, and positive Y across the screen height from bottom to top."
I'm new to Unity and the book doesn't explain how to do it.
By the way, I'm using Unity 4.3.3f1 on Windows 7.
How can I align the camera to make screen space origin at the bottom-left corner?
In a 2D game, you have an X-axis and Y-axis. When increasing an object's X-value, you could say the object is going right. When increasing the Y-value, you could say the object is going up.
In a 3D game, there is another additional axis, the Z-axis. This makes it possible to gain 'depth' in games.
Example:
If you want to create a 2D game in a 3D environment, you'll have to 'remove' one of the axes. The most common choice is to remove the Z-axis, so the naming stays consistent (X and Y remain, as in a 2D game).
To 'remove' an axis in a 3D environment, your view has to look straight along it. In this case, your X and Y rotation have to be 0, while the Z rotation (roll) can be anything.
Example:
Consider the above picture to have a Z-axis as well. Because you are looking along the Z-axis towards the origin, that axis doesn't appear to go right, left, up, or down; it collapses to a single point on screen.
When you position the camera this way, so that the world origin is in front of you, higher X values are to your right, and higher Y values are above you, you've achieved this. It also means that X=0 on the screen is the far left and Y=0 is the very bottom. That is exactly the book's statement: the screen space origin is at the bottom-left corner, positive X spans the screen width, and positive Y spans the screen height from bottom to top.
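You can check this from a script; a quick sketch that prints where the screen-space corners land in the world (Camera.ScreenToWorldPoint and Screen.width/height are standard Unity API):
using UnityEngine;

public class ScreenOriginCheck : MonoBehaviour
{
    void Start()
    {
        // In Unity's screen space, (0,0) is the bottom-left pixel and
        // (Screen.width, Screen.height) is the top-right. The z value is
        // the distance from the camera at which to sample.
        Vector3 bottomLeft = Camera.main.ScreenToWorldPoint(new Vector3(0f, 0f, 10f));
        Vector3 topRight = Camera.main.ScreenToWorldPoint(new Vector3(Screen.width, Screen.height, 10f));
        Debug.Log("Bottom-left of the screen in world space: " + bottomLeft);
        Debug.Log("Top-right of the screen in world space: " + topRight);
    }
}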
By saying "camera has been aligned", he doesn't mean that you manually align it in the scene, he's saying how screen space origin is at the bottom-left corner by default.
Source: Unity Script Reference

How do I get rid of these magic numbers in my rotation affine transform?

I have a view that is 320x460 (standard iPhone screen) that I want to draw as if it were in landscape mode, though the phone is not oriented that way. This seemed like a simple task, which I tried to solve by creating a subview of size 460x320 and then rotating it 90 degrees by setting the view's transform. This worked, except that the rotated view was not centered correctly. To 'fix' this I added a translation transformation, which ended up looking like this:
CGAffineTransform rotate = CGAffineTransformMakeRotation(M_PI / 2.0);
[landscapeView setTransform: CGAffineTransformTranslate(rotate, 70.0, 70.0)];
I don't mind having some adjustment transformation, but I have no clue where the magic number 70 came from. I just played around with it until the edges matched up correctly. How can I get rid of it, either by eliminating the translation transformation or by deriving the number from some meaningful math related to the height and width?
Just a hunch, but I'm guessing that prior to the transform you're setting (or defaulting) the center of landscapeView to (160, 230). When it rotates, it keeps the upper-left position fixed.
(320 px screen width - 460 px view width) = -140 px. Divide that in half, since the view is centered, and you get -70 px. The same idea applies vertically.
70 is the difference between the height and the width, divided by two: (460 - 320) / 2. The division by two is what centers the view.

Problem with glTranslatef

I use the glTranslate command to shift the position of a sprite, which I load from a texture, in my iPhone OpenGL app. My problem is that after I apply glTranslatef, the image appears a little blurred. When I comment out that line of code, the image is crystal clear. How can I resolve this issue?
You're probably not hitting the screen pixel grid exactly, which causes texture filtering to blur the result. The issue is a bit subtle: instead of seeing the screen and the texture as arrays of points, see them as sheets of grid-ruled paper (the texture sheet can be stretched, sheared, and scaled). To make things look crisp, the grids must align perfectly. The texture coordinates (0,0) and (1,1) don't hit the centers of the corner texels but the outer edges of the texture sheet, so you need a small offset and scale to address the texel centers. The same goes for placing the target quads on the screen: the vertex positions must be aligned with the edges of the pixels, not the pixel centers. If your projection and modelview matrices are not set up so that one unit in modelview space is one pixel wide and the projection fills the whole screen (or window viewport), it's difficult to get this right.
One normally starts with
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// The modelview XY range 0..width x 0..height now covers the whole viewport.
// (0,0) doesn't address the lower-left pixel but the lower-left edge of it;
// (width,height) similarly addresses the upper-right corner.
// Drawing a 0..width x 0..height quad with texture coordinates 0..1 x 0..1
// will cover the viewport perfectly.
This will work as long as the quad has exactly the same dimensions as the texture (i.e., its vertex positions match the texture coordinates) and the vertex positions are integers.
Now the interesting part: what if those conditions aren't met? Then aliasing occurs. In GL_NEAREST filtering mode things still look crisp, but some lines/rows are simply missing. In GL_LINEAR filtering mode neighbouring pixels are interpolated, with the interpolation factor determined by how far off the grid they are (in layman's terms; the actual implementation looks slightly different).
So, to solve your issue: draw sprites with a projection/modelview that matches the viewport, use only integer values for the vertex coordinates, and make your texture cover the whole picture. If you're using only a part of the texture coordinate range, things get even more interesting, since it's easy to address the texture grid rather than the texel centers.
I would recommend looking at your modelview matrix setup and making sure that glLoadIdentity() is called, to ensure the matrix stack is clean before applying the transform.