Flutter: Path scaling results in shapes not being aligned, how do I align them?

I'm trying to draw a polygon using a CustomPainter, which works fine. I would then like to draw a second polygon identical to the first underneath it, but X times the size. Currently I am transforming the path like this:
polygon1 = new Path();
polygon1.addPolygon(polygonPoints, true);
double scale = 1.5;
Matrix4 matrix4 = Matrix4.identity()
..scale(scale,scale,0);
Path polygon2 = Path.from(polygon1)
..transform(matrix4.storage);
However, it seems that polygon2 is also translated, which is undesired. I would like it to sit perfectly behind polygon1.
How do I achieve this?
Pictures for reference:
Polygon 1 (green) and Polygon 2 (orange) far away from (0,0) and NOT aligned
Polygon 1 (green) and Polygon 2 (orange) at ~ (0,0) and aligned

I managed to centre the scaled polygon2 by normalizing polygon1 with respect to the point (0,0), then scaling the path as above, and finally shifting both paths using the Offset from (0,0). Furthermore, polygon2 needs to be shifted relative to polygon1; for this I used the bottomCenter of polygon1's bounding Rect.
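The fix above boils down to scaling about a shared anchor point: translate the path so the anchor sits at the origin, scale, then translate back. A minimal sketch of that math in plain Python (not the Flutter API; the triangle and anchor values are made up for illustration):

```python
def scale_about(points, scale, anchor):
    """Scale 2D points about an anchor point so the anchor stays fixed."""
    ax, ay = anchor
    return [((x - ax) * scale + ax, (y - ay) * scale + ay) for x, y in points]

# A made-up triangle; scale about its bottom-center so both shapes share it.
polygon = [(100.0, 100.0), (140.0, 100.0), (120.0, 140.0)]
scaled = scale_about(polygon, 1.5, (120.0, 140.0))
```

Because the anchor is subtracted before scaling and added back afterwards, the anchor point itself maps to itself, so the scaled copy stays aligned with the original instead of drifting away from the canvas origin.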

Related

Flutter: Draw points on an image using canvas

I want to plot some red points on an image. Assume the image's width and height are 300 x 300, and I plot a red point at Offset(76.2, 75.2), as shown in image (1).
[1]: https://i.stack.imgur.com/mvGt0.jpg
In some cases I need to change the image to 450 x 450, but the red points already plotted no longer match, as shown in image (2).
How do I move the red points to the same position on the resized image? How do I calculate the new Offset value?
[2]: https://i.stack.imgur.com/Xua9h.jpg
You have to recalculate the offsets for the new resolution.
Simply calculate a scale factor by which you can multiply the offsets and redraw the canvas. Pseudocode:
offset = (76.2, 75.2)
scale = 450/300 // 1.5
newOffset = offset * scale // (114.3, 112.8)
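The pseudocode above can be written out directly; a small sketch in plain Python (not the Flutter API):

```python
def rescale_offset(offset, old_size, new_size):
    """Rescale a pixel offset so the point keeps the same relative position
    after the image is resized."""
    scale = new_size / old_size
    return (offset[0] * scale, offset[1] * scale)

# 300x300 image resized to 450x450: scale factor 1.5.
new_offset = rescale_offset((76.2, 75.2), 300, 450)  # approximately (114.3, 112.8)
```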

How to handle 3D object size and its position in ARKit

I am facing difficulties with 3D object size and its x, y, z positioning. I added the 3D object to sceneView, but its size is too big. How do I reduce the 3D object size based on my requirement? Can anyone help me handle the 3D object's size and its x, y, z positioning?
I am using Swift to code.
Each SCNNode has a scale property:
Each component of the scale vector multiplies the corresponding
dimension of the node’s geometry. The default scale is 1.0 in all
three dimensions. For example, applying a scale of (2.0, 0.5, 2.0) to
a node containing a cube geometry reduces its height and increases its
width and depth.
Which can be set as follows:
var scale: SCNVector3 { get set }
If, for example, your node was called myNode, you could thus use the following to scale it to 1/10 of its original size:
myNode.scale = SCNVector3(0.1, 0.1, 0.1)
Regarding positioning SCNNodes this can be achieved by setting the position property:
The node’s position locates it within the coordinate system of its
parent, as modified by the node’s pivot property. The default position
is the zero vector, indicating that the node is placed at the origin
of the parent node’s coordinate system.
If, therefore, you wanted to add your SCNNode at the center of the worldOrigin, 1 m in front of the camera, you can use the following:
myNode.position = SCNVector3(0, 0, -1)
Hope it helps...

How to set correct image dimensions by LatLngBounds using ImageOverlay?

I want to use ImageOverlays as markers, because I want the images to scale with zoom. Marker icons always resize to keep their on-screen size constant when you zoom.
My problem is that I can't figure out how to transform pixels to coordinates, so my image ends up stretched.
For instance, I decided my south-west LatLng to be [50, 50]. My image dimensions are 24px/24px.
How do I calculate the north-east LatLng based on the image pixels?
You are probably looking for map conversion methods.
In particular, you could use:
latLngToContainerPoint: Given a geographical coordinate, returns the corresponding pixel coordinate relative to the map container.
containerPointToLatLng: Given a pixel coordinate relative to the map container, returns the corresponding geographical coordinate (for the current zoom level).
// 1) Convert LatLng into container pixel position.
var originPoint = map.latLngToContainerPoint(originLatLng);
// 2) Add the image pixel dimensions.
// Positive x to go right (East).
// Negative y to go up (North).
var nextCornerPoint = originPoint.add({x: 24, y: -24});
// 3) Convert back into LatLng.
var nextCornerLatLng = map.containerPointToLatLng(nextCornerPoint);
var imageOverlay = L.imageOverlay(
    'path/to/image',
    [originLatLng, nextCornerLatLng]
).addTo(map);
Demo: http://playground-leaflet.rhcloud.com/tehi/1/edit?html,output
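For reference, Leaflet's conversion methods boil down to the Web Mercator projection at the current zoom level. A rough sketch of that math in plain Python (the zoom level and the 24 px offsets mirror the example above; this uses world pixel coordinates rather than container-relative ones, which is an acceptable simplification since only the pixel *difference* matters here):

```python
import math

def latlng_to_pixel(lat, lng, zoom):
    """Project a lat/lng into world pixel coordinates (Web Mercator)."""
    size = 256 * 2 ** zoom
    x = (lng + 180.0) / 360.0 * size
    siny = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * size
    return x, y

def pixel_to_latlng(x, y, zoom):
    """Inverse projection: world pixel coordinates back to lat/lng."""
    size = 256 * 2 ** zoom
    lng = x / size * 360.0 - 180.0
    n = math.pi - 2.0 * math.pi * y / size
    lat = math.degrees(math.atan(math.sinh(n)))
    return lat, lng

# South-west corner at (50, 50); move 24 px right (east) and 24 px up (north).
zoom = 10
x, y = latlng_to_pixel(50.0, 50.0, zoom)
ne_lat, ne_lng = pixel_to_latlng(x + 24, y - 24, zoom)
```

Note that the resulting lat/lng span depends on the zoom level at which it was computed, which is exactly why the overlay then scales with zoom instead of staying a fixed pixel size.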

swift: orient y-axis toward another point in 3-d space

Suppose you have two points in 3-D space. Call the first o for origin and the other t for target. The rotation axes of each are aligned with the world/parent coordinate system (and each other). Place a third point r coincident with the origin, with the same position and rotation.
How, in Swift, can you rotate r such that its y-axis points at t? If pointing the z-axis is easier, I'll take that instead. The resulting orientation of the other two axes is immaterial for my needs.
I've been through many discussions related to this but none satisfy. I have learned, from reading and experience, that Euler angles are probably not the way to go. We didn't cover this in calculus, and that was 50 years ago anyway.
Got it! Incredibly simple when you add a container node. The following seems to work for any positions in any quadrants.
// pointAt_c is a container node located at, and child of, the originNode
// pointAtNode is its child, position coincident with pointAt_c (and originNode)
// get deltas (positions of target relative to origin)
let dx = targetNode.position.x - originNode.position.x
let dy = targetNode.position.y - originNode.position.y
let dz = targetNode.position.z - originNode.position.z
// rotate container node about y-axis (pointAtNode rotated with it)
let y_angle = atan2(dx, dz)
pointAt_c.rotation = SCNVector4(0.0, 1.0, 0.0, y_angle)
// now rotate the pointAtNode about its x-axis
let dz_dx = sqrt((dz * dz) + (dx * dx))
// (due to rotation the adjacent side of this angle is now a hypotenuse)
let x_angle = atan2(dz_dx, dy)
pointAtNode.rotation = SCNVector4(1.0, 0.0, 0.0, x_angle)
I needed this to replace lookAt constraints which cannot, easily anyway, be archived with a node tree. I'm pointing the y-axis because that's how SCN cylinders and capsules are directed.
If anyone knows how to obviate the container node, please do tell. Every time I try to apply sequential rotations to a single node, the last overwrites the previous one. I haven't the knowledge to formulate a rotation expression to do it in one shot.
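The two-step rotation can be sanity-checked numerically: applying the inner x-rotation and then the outer y-rotation to the unit y-axis should reproduce the normalized origin-to-target direction. A small check in plain Python (the delta values are arbitrary; this verifies the angle math, not the SceneKit API):

```python
import math

def rot_x(v, a):
    """Rotate vector v about the x-axis by angle a (right-handed)."""
    x, y, z = v
    return (x, y * math.cos(a) - z * math.sin(a), y * math.sin(a) + z * math.cos(a))

def rot_y(v, a):
    """Rotate vector v about the y-axis by angle a (right-handed)."""
    x, y, z = v
    return (x * math.cos(a) + z * math.sin(a), y, -x * math.sin(a) + z * math.cos(a))

dx, dy, dz = 3.0, 4.0, 5.0                    # target relative to origin
y_angle = math.atan2(dx, dz)                  # container's rotation about y
x_angle = math.atan2(math.hypot(dx, dz), dy)  # child's rotation about x

# Child's y-axis after both rotations (inner x-rotation first, outer y second,
# matching the child/container node hierarchy).
pointed = rot_y(rot_x((0.0, 1.0, 0.0), x_angle), y_angle)
```

The container node matters because the two rotations compose in a fixed order; nesting the nodes is what makes the y-rotation apply *after* (outside) the x-rotation.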

How to convert a pixel distance to meters?

I have a .bmp image with a map. What I know:
Height and width of the bmp image
dpi
Map scale
The image center's coordinates in meters.
What I want:
How can I calculate the coordinates of some points of the image (for example, the corners) in meters?
Or, how can I convert a pixel distance to meters?
What I did before:
I know the image center's coordinates in pixels for sure:
CenterXpix = Width/2;
CenterYpix = Height/2;
But what do I do to find the corners' coordinates? I don't think that:
metersDistance = pixelDistance*Scale;
is the correct equation.
Any advice?
If you know the height or width in both meters and pixels, you can calculate the scale in meters per pixel. Your equation:
metersDistance = pixelDistance*Scale;
is correct, but only if your points lie on the same axis. If your two points are diagonal from each other, you have to use good old Pythagoras (in pseudocode):
X = XdistancePix*scale;
Y = YdistancePix*scale;
Distance_in_m = sqrt(X*X+Y*Y);
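Since the question also lists dpi and the map scale as knowns, meters per pixel can be derived directly as (0.0254 / dpi) * mapScale (one pixel is 1/dpi inches on paper, times the map scale on the ground), and the corner coordinates follow from pixel offsets from the center. A sketch in Python (the sample image size, dpi, map scale, and center coordinates are assumptions for illustration):

```python
import math

def meters_per_pixel(dpi, map_scale):
    """Ground meters covered by one pixel: inches -> meters, times map scale."""
    return 0.0254 / dpi * map_scale

def corner_coords(center_m, width_px, height_px, dpi, map_scale):
    """World coordinates (meters) of the image's four corners,
    given the center's world coordinates."""
    mpp = meters_per_pixel(dpi, map_scale)
    cx, cy = center_m
    half_w = width_px / 2 * mpp
    half_h = height_px / 2 * mpp
    return [(cx - half_w, cy - half_h), (cx + half_w, cy - half_h),
            (cx + half_w, cy + half_h), (cx - half_w, cy + half_h)]

def pixel_distance_to_meters(dx_px, dy_px, dpi, map_scale):
    """Diagonal pixel distance to meters via Pythagoras."""
    mpp = meters_per_pixel(dpi, map_scale)
    return math.hypot(dx_px * mpp, dy_px * mpp)

# Example: 300x300 image, 96 dpi, map scale 1:10000, centered at (1000, 2000) m.
corners = corner_coords((1000.0, 2000.0), 300, 300, 96.0, 10000.0)
```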