Flutter - how to keep track of corners and lines of a shape after transformation

I'm fairly new to Flutter.
The app I'm trying to develop lets the user create simple shapes that can be transformed (scaled/rotated/translated), but the user must also be able to select the corners and lines, so the positions of the corners and lines must be known at all times.
The app must also allow multiple shapes to be present at any one time, and each shape must be transformable individually.
I have been trying to use canvas.drawPath in a CustomPainter, and I can successfully transform each shape as desired using a Matrix4, but I'm not sure how to track the positions of the corners after the transformation.
I'd really appreciate any advice, as I'm quite stuck on this.

I solved this by using MatrixUtils.transformPoint, which returns the new Offset after the transformation.
The complete shape is just a list of Offsets, so I applied transformPoint to each element of the list.
It worked for me.
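
A minimal Dart sketch of that approach, assuming the shape is stored as a List<Offset> and transformed with the same Matrix4 that is applied to the canvas (the names corners and transform are illustrative):

import 'dart:ui' show Offset;
import 'package:flutter/rendering.dart' show MatrixUtils;
import 'package:vector_math/vector_math_64.dart' show Matrix4;

// Maps every corner of the shape through the shape's transform so the
// transformed positions can be hit-tested for selection.
List<Offset> transformedCorners(List<Offset> corners, Matrix4 transform) {
  return corners
      .map((corner) => MatrixUtils.transformPoint(transform, corner))
      .toList();
}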

Related

Extra edges are generated between vertices when I move the camera in game or in the editor

I'm trying to generate a custom procedural landscape in Unreal Engine 4.
To implement this I'm using this class: https://docs.unrealengine.com/en-US/API/Plugins/ProceduralMeshComponent/UProceduralMeshComponent/index.html
For nice noise generation on the Z axis I'm using this plugin: https://github.com/devdad/SimplexNoise. The only method I use from that library is USimplexNoiseBPLibrary::SimplexNoise2D.
The whole process is inspired by this video: https://www.youtube.com/watch?v=IKB1hWWedMk
I will try to describe the flow of the whole process (see the sketch after this list):
define the vertex count per row and column
iterate through the rows and columns and create a vertex at (x * scale, y * scale, FMath::Lerp(-maxFallOff, maxHeight, USimplexNoiseBPLibrary::SimplexNoise2D(x * perlinScale, y * perlinScale)))
generate triangles using this method: https://docs.unrealengine.com/en-US/API/Plugins/ProceduralMeshComponent/UKismetProceduralMeshLibrary/ConvertQuadToTri-/index.html
generate UVs
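
A rough C++ sketch of that flow, under stated assumptions: the SimplexNoise plugin linked above is installed, and all parameter names (Mesh, VertexCount, Scale, PerlinScale, MaxHeight, MaxFallOff) are placeholders rather than the asker's actual code:

// Assumes #include "ProceduralMeshComponent.h", "KismetProceduralMeshLibrary.h"
// and "SimplexNoiseBPLibrary.h".
void BuildLandscape(UProceduralMeshComponent* Mesh, int32 VertexCount,
    float Scale, float PerlinScale, float MaxHeight, float MaxFallOff)
{
    TArray<FVector> Vertices;
    TArray<FVector2D> UVs;
    TArray<int32> Triangles;

    for (int32 Y = 0; Y < VertexCount; ++Y)
    {
        for (int32 X = 0; X < VertexCount; ++X)
        {
            // Z from 2D simplex noise; SimplexNoise2D returns roughly -1..1,
            // so this Lerp mirrors the formula in the list above rather than
            // clamping the alpha to [0, 1].
            const float Noise = USimplexNoiseBPLibrary::SimplexNoise2D(X * PerlinScale, Y * PerlinScale);
            Vertices.Add(FVector(X * Scale, Y * Scale, FMath::Lerp(-MaxFallOff, MaxHeight, Noise)));
            UVs.Add(FVector2D(float(X) / (VertexCount - 1), float(Y) / (VertexCount - 1)));
        }
    }

    // Two triangles (six indices) per grid cell, via the quad helper from the link.
    for (int32 Y = 0; Y < VertexCount - 1; ++Y)
    {
        for (int32 X = 0; X < VertexCount - 1; ++X)
        {
            const int32 I = Y * VertexCount + X;
            UKismetProceduralMeshLibrary::ConvertQuadToTriangles(Triangles, I, I + 1, I + VertexCount + 1, I + VertexCount);
        }
    }

    Mesh->CreateMeshSection_LinearColor(0, Vertices, Triangles,
        TArray<FVector>(), UVs, TArray<FLinearColor>(), TArray<FProcMeshTangent>(),
        /*bCreateCollision=*/ true);
}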
That is all. At this point I can say everything works fine, but there is a little issue: when I move the camera in the editor or in game, extra edges appear on the mesh. I also recorded a video to show what I'm talking about:
https://youtube.com/watch?v=_B9Fxg5oZcE (the edges I'm talking about appear at the 00:05 mark)
The code is written in C++. I could post it here, but I don't think the code itself is the problem; I think something happens at runtime while I move the camera, something I don't know about...
I can say in advance, in case you're interested, that I'm not manipulating the mesh in the Tick event.
Problem solved...
It seems I had a bug in my code: I was passing too many triangle indices to the CreateMeshSection_LinearColor method. That was the problem.
Anyway, thanks for the attention.
Cheers
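
For reference, an N x N vertex grid needs exactly (N - 1) * (N - 1) * 6 triangle indices, and every index must be smaller than the vertex count. An assertion along these lines (using the placeholder names from the sketch above) would catch an oversized index buffer early:

// Each of the (VertexCount - 1)^2 quads contributes two triangles = six indices.
check(Triangles.Num() == (VertexCount - 1) * (VertexCount - 1) * 6);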

pdfSweep with rotated rectangle (itext7)

I have a requirement to perform redaction in iText7. We have several rectangles which have been selected by the user; some of these have been rotated. I have not found the ability to rotate rectangles in iText7. Usually, the way we draw "rotated" rectangles is to perform some mathematical operations on a "fake" rectangle we build in code, and then draw it using a series of lines like so:
if (rect.mRotation > 0)
{
    r.Rotate(DegreeToRadian(rect.mRotation));
}
c.MoveTo(r.TopLeft.X, r.TopLeft.Y);
c.LineTo(r.TopRight.X, r.TopRight.Y);
c.LineTo(r.BottomRight.X, r.BottomRight.Y);
c.LineTo(r.BottomLeft.X, r.BottomLeft.Y);
c.LineTo(r.TopLeft.X, r.TopLeft.Y);
c.Stroke();
In the case of images or similar content we are unable to do the above; there we use an AffineTransform to simulate the movement, which is applied to the image before it is added to the document. Both of the previous methods work perfectly.
Unfortunately for us, the pdfSweep tool only accepts (iText.Kernel.Geom) rectangles. We are looking for a way to still pass an iText.Kernel.Geom.Rectangle which has had transforms applied (i.e. a rectangle which has been rotated). We have tried setting the llx/urx values manually using the setBBox method, but this won't affect the rotation.
Does anyone know how we can go about redacting items over a given rectangular area which has been rotated?
Thanks
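
One workaround sketch, building on the AffineTransform idea above: since a pdfSweep PdfCleanUpLocation takes an axis-aligned iText.Kernel.Geom.Rectangle, you can rotate the four corners yourself and redact their axis-aligned bounding box. Note this over-redacts (it covers the whole bounding box, not just the rotated rectangle), and the angle, anchor, and page parameters are illustrative placeholders:

// using System; using System.Collections.Generic;
// using iText.Kernel.Colors; using iText.Kernel.Geom; using iText.Kernel.Pdf; using iText.PdfCleanup;
static void RedactRotatedRect(PdfDocument pdfDoc, Rectangle rect, double angleRadians, int pageNumber)
{
    // Rotate the corners around the rectangle's center.
    double centerX = rect.GetLeft() + rect.GetWidth() / 2.0;
    double centerY = rect.GetBottom() + rect.GetHeight() / 2.0;
    AffineTransform rotation = AffineTransform.GetRotateInstance(angleRadians, centerX, centerY);
    Point[] corners =
    {
        new Point(rect.GetLeft(), rect.GetBottom()),
        new Point(rect.GetRight(), rect.GetBottom()),
        new Point(rect.GetRight(), rect.GetTop()),
        new Point(rect.GetLeft(), rect.GetTop())
    };
    double minX = double.MaxValue, minY = double.MaxValue;
    double maxX = double.MinValue, maxY = double.MinValue;
    foreach (Point corner in corners)
    {
        Point p = rotation.Transform(corner, null);   // rotated corner position
        minX = Math.Min(minX, p.GetX());
        minY = Math.Min(minY, p.GetY());
        maxX = Math.Max(maxX, p.GetX());
        maxY = Math.Max(maxY, p.GetY());
    }
    // Axis-aligned bounding box of the rotated corners.
    Rectangle bbox = new Rectangle((float)minX, (float)minY, (float)(maxX - minX), (float)(maxY - minY));
    var locations = new List<PdfCleanUpLocation> { new PdfCleanUpLocation(pageNumber, bbox, ColorConstants.BLACK) };
    new PdfCleanUpTool(pdfDoc, locations).CleanUp();
}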

Creating a non-rectangular UI in Unity?

What I want to do is create a graph from four numbers I get at runtime, something like a personality chart. The user takes a quiz, and based on the answers they give I increase a running total for each attribute. At the end, the four calculated numbers should become the vertices of a shape, perhaps in a panel or maybe something else entirely.
Each of the four categories has a total of 10 possible points. The overall shape of the background panel is a diamond (i.e. a rotated square), with each of the four corners representing an attribute.
I've tried messing with RectTransforms and such, but the shape always turns out to be a rectangle (duh, it's a RectTransform). The problem is that I need it to not be a rectangle. Is there a way to do this in Unity, or through any other means?
The black polygon would be an example of the type of shape I want to create.
After doing some more research on the topic, I found a script that does exactly what I want and more, which is really nice. I'll leave a link to it below.
https://github.com/CiaccoDavide/Unity-UI-Polygon/blob/master/UIPolygon.cs
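
A short usage sketch for that component. It assumes the DrawPolygon(sides, distances, rotation) overload and the 0..1 per-vertex distance convention found in the linked script at the time of writing, so check the current source if the names have changed:

using UnityEngine;

// Feeds four quiz scores (0-10 each) into the linked UIPolygon as a
// diamond-shaped chart: 4 sides, rotated 45 degrees, one normalized
// distance per corner.
public class PersonalityChart : MonoBehaviour
{
    public UIPolygon polygon;   // UIPolygon component on a UI object under a Canvas

    public void Show(float a, float b, float c, float d)
    {
        float[] distances = { a / 10f, b / 10f, c / 10f, d / 10f };
        polygon.DrawPolygon(4, distances, 45f);
    }
}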

Identify different shapes drawn using UIBezierPath?

I am able to draw shapes using a UIBezierPath object. Now I want to identify the different shapes drawn with it, e.g. rectangle, square, triangle, circle, etc. The next thing I want is for the user to be able to select a particular shape and move the whole shape to a different location on the screen. The actual requirement is even more complex, but if I can get this much working I can figure out the rest.
Any suggestions, links, or pointers on how to start with this are welcome. I am thinking of writing a separate view to handle every shape, but I'm not sure how to do that.
Thank you all in advance!
I recommend David Gelphman’s Programming with Quartz.
In his chapter “Drawing with Paths” he has a section on “Path Construction Primitives” which provides a crossroads:
If you use CGContextAddLineToPoint your user could make straight lines defined by known Cartesian points. You would use basic math to deduce the geometric shapes defined by those points.
If you use CGContextAddCurveToPoint your user could make curved lines defined by known points, and I’m pretty sure that those lines would run through the points, so you could still use basic math to determine at least an approximation of the types of shapes formed.
But if you use CGContextAddQuadCurveToPoint, the points define a framework outside of the drawn curve. You’d need more advanced math to determine the shapes formed by curves along tangents.
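
For the straight-line case, here is a small Swift sketch of that "basic math" step, assuming you have collected the corner points of a closed path in drawing order (the tolerance is illustrative and would need tuning for screen coordinates):

import CoreGraphics

// Classifies a closed straight-line path from its corner points alone.
func classify(_ points: [CGPoint], tolerance: CGFloat = 1.0) -> String {
    func length(_ a: CGPoint, _ b: CGPoint) -> CGFloat {
        ((b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y)).squareRoot()
    }
    switch points.count {
    case 3:
        return "triangle"
    case 4:
        // A right angle at every corner means adjacent edges are perpendicular
        // (dot product near zero).
        let rightAngles = (0..<4).allSatisfy { i in
            let p0 = points[i], p1 = points[(i + 1) % 4], p2 = points[(i + 2) % 4]
            let dot = (p1.x - p0.x) * (p2.x - p1.x) + (p1.y - p0.y) * (p2.y - p1.y)
            return abs(dot) < tolerance
        }
        guard rightAngles else { return "quadrilateral" }
        return abs(length(points[0], points[1]) - length(points[1], points[2])) < tolerance
            ? "square" : "rectangle"
    default:
        return "polygon with \(points.count) corners"
    }
}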
Gelphman also discusses “Path Utility Functions,” like getting a bounding box and checking whether a given point is inside the path.
As for moving the completed paths, I think you would use CGContextTranslateCTM.
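
And for selecting and moving a finished shape, a sketch using UIBezierPath's own hit test (contains(_:)) and transform (apply(_:)) as a modern UIKit equivalent of translating the CTM; ShapeCanvas and its fields are illustrative:

import UIKit

// Keeps one UIBezierPath per shape; a touch selects the first path that
// contains it, and dragging translates that path.
final class ShapeCanvas: UIView {
    var shapes: [UIBezierPath] = []
    private var selected: UIBezierPath?
    private var lastPoint: CGPoint = .zero

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        selected = shapes.first { $0.contains(point) }   // point-in-path test
        lastPoint = point
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self), let path = selected else { return }
        path.apply(CGAffineTransform(translationX: point.x - lastPoint.x,
                                     y: point.y - lastPoint.y))
        lastPoint = point
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        UIColor.black.setStroke()
        shapes.forEach { $0.stroke() }
    }
}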

Not able to calibrate camera view to 3D Model

I am developing an app which uses LK (Lucas-Kanade) for tracking and POSIT for pose estimation. I successfully get the rotation matrix and projection matrix, and tracking works perfectly, but I am not able to translate the 3D object properly: the object does not fit into the place where it has to fit.
Can someone help me with this?
Check these links; they may give you some ideas:
http://computer-vision-talks.com/2011/11/pose-estimation-problem/
http://www.morethantechnical.com/2010/11/10/20-lines-ar-in-opencv-wcode/
You must also check whether the intrinsic camera parameters are correct. Even a small error in estimating the field of view can cause trouble when trying to reconstruct 3D space, and from your description it seems the problem is bad fov (field of view) angles.
You can try to measure them, or feed half or double the value to your algorithm.
There are two conventions for fov: half-angle (from the image center to the top or left edge) and full-angle (from bottom to top, or from left to right). Maybe you just mixed them up, using the full angle instead of the half angle, or vice versa.
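
For reference, in the pinhole model the full horizontal fov follows from the focal length in pixels and the image width, and the half-angle convention simply drops the factor of two. A sketch with placeholder values:

#include <cmath>

// Placeholder intrinsics: focal length in pixels (from the camera matrix)
// and image width in pixels.
double fx = 800.0;
double imageWidth = 1280.0;
double fullFovX = 2.0 * std::atan(imageWidth / (2.0 * fx));   // full-angle convention
double halfFovX = fullFovX / 2.0;                             // half-angle convention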
Maybe you can show us how you build the transformation matrix from the R and T components?
Remember that the cv::solvePnP function returns the object pose in camera coordinates, i.e. it finds the object's pose in a 3D space where the camera sits at (0;0;0). For almost all cases you need to invert that transform to get the correct result: R' = R^T, T' = -R^T * T.
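
A short OpenCV sketch of that inversion; the point correspondences and intrinsics are assumed to be filled in elsewhere:

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Correspondences and intrinsics, filled in elsewhere.
std::vector<cv::Point3f> objectPoints;
std::vector<cv::Point2f> imagePoints;
cv::Mat cameraMatrix, distCoeffs;

// solvePnP yields the object pose in camera coordinates (rvec, tvec).
cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

cv::Mat R;
cv::Rodrigues(rvec, R);          // 3x3 rotation matrix from the rotation vector

// Camera pose in object/world coordinates: R' = R^T, T' = -R^T * T.
cv::Mat Rinv = R.t();
cv::Mat tinv = -Rinv * tvec;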