How to detect self-intersection of a polygon in Google Maps (Flutter) - flutter

I am making an application in which the user can draw a polygon on a map point by point.
I need to make sure that the polygon has no self-intersections.
I know it is possible to check each pair of edges manually, and there are various methods for this.
But I noticed that Google Maps automatically fills only those polygons that do not have self-intersections.
Is it possible to get this value from the plugin somehow?
I am using google_maps_flutter.
without self-intersection
with self-intersection

Perhaps this will be useful to someone.
I did not find a built-in function for this in the plugin mentioned in the question, so I wrote my own.
It returns true when the polygon has self-intersections:
import 'dart:math';

import 'package:google_maps_flutter/google_maps_flutter.dart';

/// Returns true if the polygon has self-intersections.
bool isNotSimplePolygon(List<LatLng> polygon) {
  if (polygon.length <= 3)
    return false;
  for (int i = 0; i < polygon.length - 2; i++) {
    double x1 = polygon[i].latitude;
    double y1 = polygon[i].longitude;
    double x2 = polygon[i + 1].latitude;
    double y2 = polygon[i + 1].longitude;
    double maxx1 = max(x1, x2), maxy1 = max(y1, y2);
    double minx1 = min(x1, x2), miny1 = min(y1, y2);
    for (int j = i + 2; j < polygon.length; j++) {
      // The second edge wraps around to vertex 0 for the closing edge.
      double x21 = polygon[j].latitude;
      double y21 = polygon[j].longitude;
      double x22 = polygon[(j + 1) == polygon.length ? 0 : (j + 1)].latitude;
      double y22 = polygon[(j + 1) == polygon.length ? 0 : (j + 1)].longitude;
      double maxx2 = max(x21, x22), maxy2 = max(y21, y22);
      double minx2 = min(x21, x22), miny2 = min(y21, y22);
      // Skip edges that share a vertex (adjacent edges).
      if ((x1 == x21 && y1 == y21) || (x2 == x22 && y2 == y22) ||
          (x1 == x22 && y1 == y22) || (x2 == x21 && y2 == y21))
        continue;
      // Bounding boxes do not overlap, so the edges cannot intersect.
      if (minx1 > maxx2 || maxx1 < minx2 || miny1 > maxy2 || maxy1 < miny2)
        continue;
      double dx1 = x2 - x1, dy1 = y2 - y1; // Projections of the first edge on the x and y axes
      double dx2 = x22 - x21, dy2 = y22 - y21; // Projections of the second edge on the x and y axes
      double dxx = x1 - x21, dyy = y1 - y21;
      double div = dy2 * dx1 - dx2 * dy1;
      double mul1 = dx1 * dyy - dy1 * dxx;
      double mul2 = dx2 * dyy - dy2 * dxx;
      if (div == 0)
        continue; // The edges are parallel.
      if (div > 0) {
        if (mul1 < 0 || mul1 > div)
          continue; // The intersection lies outside the first edge.
        if (mul2 < 0 || mul2 > div)
          continue; // The intersection lies outside the second edge.
      } else {
        if (-mul1 < 0 || -mul1 > -div)
          continue; // The intersection lies outside the first edge.
        if (-mul2 < 0 || -mul2 > -div)
          continue; // The intersection lies outside the second edge.
      }
      return true;
    }
  }
  return false;
}
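As a quick sanity check of the function above (the coordinates below are arbitrary values chosen purely for illustration, not real map data), a plain square should pass while the same four points reordered into a "bow tie" should be flagged:

void main() {
  // A simple square: no self-intersection expected.
  final square = [
    LatLng(0.0, 0.0),
    LatLng(0.0, 1.0),
    LatLng(1.0, 1.0),
    LatLng(1.0, 0.0),
  ];
  // The same points reordered so that two non-adjacent edges cross.
  final bowTie = [
    LatLng(0.0, 0.0),
    LatLng(1.0, 1.0),
    LatLng(0.0, 1.0),
    LatLng(1.0, 0.0),
  ];
  print(isNotSimplePolygon(square)); // false
  print(isNotSimplePolygon(bowTie)); // true
}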

Related

Checking whether the user is inside a polygon on Google Maps - Flutter [duplicate]

I'm working on a Flutter project using the google-maps-flutter plugin, and I want to check if the user's location is inside the polygon that I created on the map. There is an easy way using the JavaScript API (the containsLocation() method), but for Flutter I only found a third-party plugin, google_map_polyutil, which is Android-only, and I get a security warning when I run my app. Is there another way to do so?
I found this answer and just modified some minor things to work with Dart. I ran a test on a hardcoded polygon. The list _area is my polygon, and _polygons is required for my map controller.
final Set<Polygon> _polygons = {};
List<LatLng> _area = [
  LatLng(-17.770992200, -63.207739700),
  LatLng(-17.776386600, -63.213576200),
  LatLng(-17.778348200, -63.213576200),
  LatLng(-17.786848100, -63.214262900),
  LatLng(-17.798289700, -63.211001300),
  LatLng(-17.810547700, -63.200701600),
  LatLng(-17.815450600, -63.185252100),
  LatLng(-17.816267800, -63.170660900),
  LatLng(-17.800741300, -63.153838100),
  LatLng(-17.785867400, -63.150919800),
  LatLng(-17.770501800, -63.152636400),
  LatLng(-17.759712400, -63.160361200),
  LatLng(-17.755952300, -63.169802600),
  LatLng(-17.752519100, -63.186625400),
  LatLng(-17.758404500, -63.195551800),
  LatLng(-17.770992200, -63.206538100),
  LatLng(-17.770996000, -63.207762500)
];
The function ended up like this:
bool _checkIfValidMarker(LatLng tap, List<LatLng> vertices) {
  int intersectCount = 0;
  for (int j = 0; j < vertices.length - 1; j++) {
    if (rayCastIntersect(tap, vertices[j], vertices[j + 1])) {
      intersectCount++;
    }
  }
  return ((intersectCount % 2) == 1); // odd = inside, even = outside
}

bool rayCastIntersect(LatLng tap, LatLng vertA, LatLng vertB) {
  double aY = vertA.latitude;
  double bY = vertB.latitude;
  double aX = vertA.longitude;
  double bX = vertB.longitude;
  double pY = tap.latitude;
  double pX = tap.longitude;
  if ((aY > pY && bY > pY) || (aY < pY && bY < pY) || (aX < pX && bX < pX)) {
    return false; // a and b can't both be above or below pt.y, and a or
                  // b must be east of pt.x
  }
  double m = (aY - bY) / (aX - bX); // Rise over run
  double bee = (-aX) * m + aY; // y = mx + b
  double x = (pY - bee) / m; // algebra is neat!
  return x > pX;
}
Notice the polygons property and the onTap method. I was trying to check whether the marker created on my map was inside my polygon:
GoogleMap(
  initialCameraPosition: CameraPosition(
    target: target, // LatLng(0, 0),
    zoom: 16,
  ),
  zoomGesturesEnabled: true,
  markers: markers,
  polygons: _polygons,
  onMapCreated: (controller) => _mapController = controller,
  onTap: (latLng) {
    _getAddress(latLng);
  },
)
Then I just used the following call in my _getAddress method:
_checkIfValidMarker(latLng, _area);
I hope it helps you to create what you need.
The easiest way is to use https://pub.dev/packages/maps_toolkit
with its isLocationOnPath method.
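A minimal sketch of that approach, assuming the PolygonUtil.isLocationOnPath(point, path, geodesic) signature from the package's documentation (note that maps_toolkit ships its own LatLng type, separate from the one in google_maps_flutter):

import 'package:maps_toolkit/maps_toolkit.dart';

void main() {
  final path = [
    LatLng(-17.770992, -63.207739),
    LatLng(-17.776386, -63.213576),
    LatLng(-17.786848, -63.214262),
  ];
  final position = LatLng(-17.7764, -63.2135);
  // geodesic: true treats the segments as great-circle arcs;
  // an optional tolerance parameter is reportedly available as well.
  final onPath = PolygonUtil.isLocationOnPath(position, path, true);
  print(onPath);
}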
L. Chi's answer really helps.
But because my points are quite close together, rayCastIntersect may return the wrong boolean when aX is equal to bX (the slope calculation divides by zero).
Therefore, I just added an aX == bX check before calculating m, and then it works.
bool rayCastIntersect(LatLng tap, LatLng vertA, LatLng vertB) {
  double aY = vertA.latitude;
  double bY = vertB.latitude;
  double aX = vertA.longitude;
  double bX = vertB.longitude;
  double pY = tap.latitude;
  double pX = tap.longitude;
  if ((aY > pY && bY > pY) || (aY < pY && bY < pY) || (aX < pX && bX < pX)) {
    return false; // a and b can't both be above or below pt.y, and a or
                  // b must be east of pt.x
  }
  if (aX == bX) {
    return true;
  }
  double m = (aY - bY) / (aX - bX); // Rise over run
  double bee = (-aX) * m + aY; // y = mx + b
  double x = (pY - bee) / m; // algebra is neat!
  return x > pX;
}
The easiest way is to use https://pub.dev/packages/maps_toolkit
with PolygonUtil.containsLocation, which computes whether the given point lies inside the specified polygon.
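A minimal sketch, assuming the PolygonUtil.containsLocation(point, polygon, geodesic) signature from the package's documentation (again, maps_toolkit uses its own LatLng class, so convert from the google_maps_flutter type if needed):

import 'package:maps_toolkit/maps_toolkit.dart';

void main() {
  final polygon = [
    LatLng(-17.770992, -63.207739),
    LatLng(-17.816267, -63.170660),
    LatLng(-17.752519, -63.186625),
  ];
  final tap = LatLng(-17.78, -63.19);
  // geodesic: true treats the polygon edges as great-circle arcs.
  final inside = PolygonUtil.containsLocation(tap, polygon, true);
  print(inside);
}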

MATLAB: Inputting enough arguments, still getting a "not enough input arguments" error

I'm having problems with the class constructor for my CoaxLine class. I pass it all the arguments it needs, but when I create an object in another program, I get the error:
Error using length
Not enough input arguments.

Error in CoaxLine (line 23)
function obj = CoaxLine(pow,len,h,freq,x1,x2,y1,y2,dir,split)

Error in Test2 (line 38)
coax1 = CoaxLine(3.9,100,4.75,1800,10,110,10,10,0,1);
I got this same error with length even when I removed all the argument requirements for the constructor, and created the object with no inputs. This is my first time building a class in MATLAB, so it is likely that I missed something silly. I appreciate the help.
Here is the code for CoaxLine:
classdef CoaxLine
    %UNTITLED2 Summary of this class goes here
    % Detailed explanation goes here

    properties
        %Default values
        PA = 3.9;
        orientation = 0; %0 for East-West, 1 for North-South
        splitter = 1; %0 for left side, 1 for right side
        length = 90;
        frequency = 1800; %in MHz
        height = 4.75;
        Ce = 8.77; %Hardcoded for now
        Lint = .13; %Hardcoded
        nearFieldLength = 2*(length^2)/((3.0*10^8)/(frequency*10^6));
        X1 = 10; %Will be points in the simulation axis
        X2 = 110;
        Y1 = 10;
        Y2 = 10;
        %loss = 10;
    end

    methods
        function obj = CoaxLine(pow,len,h,freq,x1,x2,y1,y2,dir,split)
            %if nargin > 0
            obj.PA = pow;
            obj.length = len;
            obj.height = h;
            obj.frequency = freq;
            obj.X1 = x1;
            obj.X2 = x2;
            obj.Y1 = y1;
            obj.Y2 = y2;
            obj.orientation = dir;
            obj.splitter = split;
            %end
        end

        function r = contribution(px,py)
            if(obj.orientation == 0)
                if(obj.splitter)
                    if(abs(py - obj.Y1) <= obj.nearFieldLength && px > obj.X1 && px < obj.X2)
                        H = abs(py - obj.Y1);
                        x = px - obj.X1;
                        r = NearFieldPropagation(obj.PA,obj.length,obj.frequency,H,obj.height,obj.Ce,obj.Lint,x);
                    end
                else
                    if(abs(py - obj.Y1) <= obj.nearFieldLength && px < obj.X1 && px > obj.X2)
                        H = abs(py - obj.Y1);
                        x = obj.X1 - px;
                        r = NearFieldPropagation(obj.PA,obj.length,obj.frequency,H,obj.height,obj.Ce,obj.Lint,x);
                    end
                end
                %else
            end
        end
    end
end
The error stems from this line:
nearFieldLength = 2*(length^2)/((3.0*10^8)/(frequency*10^6));
MATLAB thinks that you're trying to call the function length. That requires an argument whose length will be returned.
The use of frequency will give you headaches too. To properly handle this kind of property you should declare your nearFieldLength as Dependent: http://www.mathworks.com/help/matlab/matlab_oop/access-methods-for-dependent-properties.html and then write a getter for it that calculates its value on the fly.
Also, as excaza noted, you'll have further errors because you don't declare obj as argument in contribution.
This is my idea of how the code should look:
classdef CoaxLine
    properties
        PA = 3.9;
        orientation = 0;
        splitter = 1;
        length = 90;
        frequency = 1800;
        height = 4.75;
        Lint = .13;
        X1 = 10;
        X2 = 110;
        Y1 = 10;
        Y2 = 10;
    end;

    properties(Dependent, SetAccess=private)
        Ce;
        nearFieldLength;
    end;

    methods
        %//Constructor
        function obj = CoaxLine(pow,len,h,freq,x1,x2,y1,y2,dir,split)
            if nargin > 0
                obj.PA = pow;
                obj.length = len;
                obj.height = h;
                obj.frequency = freq;
                obj.X1 = x1;
                obj.X2 = x2;
                obj.Y1 = y1;
                obj.Y2 = y2;
                obj.orientation = dir;
                obj.splitter = split;
            end;
        end;

        %//Getters for dependent properties
        function val = get.Ce(obj) %#ok<MANU>
            val = 8.77; %//this can be changed later
        end;

        function val = get.nearFieldLength(obj)
            val = 2*(obj.length^2)/(3E8/(obj.frequency*1E6));
        end;

        %//Normal methods
        function r = contribution(obj, px, py)
            r = []; % some default value
            if obj.orientation == 0
                if obj.splitter
                    if abs(py - obj.Y1) <= obj.nearFieldLength ...
                            && px > obj.X1 ...
                            && px < obj.X2
                        H = abs(py - obj.Y1);
                        x = px - obj.X1;
                        r = NearFieldPropagation(obj.PA,obj.length,obj.frequency,H,obj.height,obj.Ce,obj.Lint,x);
                    end;
                else
                    if abs(py - obj.Y1) <= obj.nearFieldLength ...
                            && px < obj.X1 ...
                            && px > obj.X2
                        H = abs(py - obj.Y1);
                        x = px - obj.X1;
                        r = NearFieldPropagation(obj.PA,obj.length,obj.frequency,H,obj.height,obj.Ce,obj.Lint,x);
                    end;
                end;
            end;
        end;
    end;
end

imregionalmax MATLAB function's equivalent in OpenCV

I have an image of connected components (filled circles). If I want to segment them I can use the watershed algorithm. I prefer writing my own watershed function instead of using the built-in one in OpenCV. I have successfully ... How do I find the regional maxima of the objects using OpenCV?
I wrote a function myself. My results were quite similar to MATLAB's, although not exact. This function is implemented for CV_32F, but it can easily be modified for other types.
I mark all the points that are not part of a minimum region by checking all the neighbors. The remaining regions are either minima, maxima, or areas of inflection.
I use connected components to label each region.
I check each region for any point belonging to a maximum; if there is one, I push that label into a vector.
Finally I sort the bad labels, erase all duplicates, and then mark all the points in the output as not minima.
All that remains are the regions of minima.
Here is the code:
// output is a binary image
// 1: not a min region
// 0: part of a min region
// 2: not sure if min or not
// 3: uninitialized
// forward declarations for the helpers defined below
void inline neighborCheck(float* in, uchar* out, int i, int x, int y, int x_lim);
int inline neighborCleanup(float* in, uchar* out, int i, int x, int y, int x_lim, int y_lim);

void imregionalmin(cv::Mat& img, cv::Mat& out_img)
{
    // pad the border of img with 1 and copy to img_pad
    cv::Mat img_pad;
    cv::copyMakeBorder(img, img_pad, 1, 1, 1, 1, IPL_BORDER_CONSTANT, 1);
    // initialize output to 3: uninitialized
    out_img = cv::Mat::ones(img.rows, img.cols, CV_8U) + 2;
    // initialize pointers to matrices
    float* in = (float *)(img_pad.data);
    uchar* out = (uchar *)(out_img.data);
    // size of matrix
    int in_size = img_pad.cols*img_pad.rows;
    int out_size = img.cols*img.rows;
    int x, y;
    for (int i = 0; i < out_size; i++) {
        // find x, y indexes
        y = i % img.cols;
        x = i / img.cols;
        neighborCheck(in, out, i, x, y, img_pad.cols); // all regions are either min or max
    }
    cv::Mat label;
    cv::connectedComponents(out_img, label);
    int* lab = (int *)(label.data);
    in = (float *)(img.data);
    in_size = img.cols*img.rows;
    std::vector<int> bad_labels;
    for (int i = 0; i < out_size; i++) {
        // find x, y indexes
        y = i % img.cols;
        x = i / img.cols;
        if (lab[i] != 0) {
            if (neighborCleanup(in, out, i, x, y, img.rows, img.cols) == 1) {
                bad_labels.push_back(lab[i]);
            }
        }
    }
    std::sort(bad_labels.begin(), bad_labels.end());
    bad_labels.erase(std::unique(bad_labels.begin(), bad_labels.end()), bad_labels.end());
    for (int i = 0; i < out_size; ++i) {
        if (lab[i] != 0) {
            if (std::find(bad_labels.begin(), bad_labels.end(), lab[i]) != bad_labels.end()) {
                out[i] = 0;
            }
        }
    }
}
int inline neighborCleanup(float* in, uchar* out, int i, int x, int y, int x_lim, int y_lim)
{
    int index;
    for (int xx = x - 1; xx < x + 2; ++xx) {
        for (int yy = y - 1; yy < y + 2; ++yy) {
            if (((xx == x) && (yy == y)) || xx < 0 || yy < 0 || xx >= x_lim || yy >= y_lim)
                continue;
            index = xx*y_lim + yy;
            if ((in[i] == in[index]) && (out[index] == 0))
                return 1;
        }
    }
    return 0;
}
void inline neighborCheck(float* in, uchar* out, int i, int x, int y, int x_lim)
{
    int indexes[8], cur_index;
    indexes[0] = x*x_lim + y;
    indexes[1] = x*x_lim + y + 1;
    indexes[2] = x*x_lim + y + 2;
    indexes[3] = (x + 1)*x_lim + y + 2;
    indexes[4] = (x + 2)*x_lim + y + 2;
    indexes[5] = (x + 2)*x_lim + y + 1;
    indexes[6] = (x + 2)*x_lim + y;
    indexes[7] = (x + 1)*x_lim + y;
    cur_index = (x + 1)*x_lim + y + 1;
    for (int t = 0; t < 8; t++) {
        if (in[indexes[t]] < in[cur_index]) {
            out[i] = 0;
            break;
        }
    }
    if (out[i] == 3)
        out[i] = 1;
}
The following listing is a function similar to MATLAB's "imregionalmax". It looks for at most nLocMax local maxima above threshold, where the found local maxima are at least minDistBtwLocMax pixels apart. It returns the actual number of local maxima found. Notice that it uses OpenCV's minMaxLoc to find global maxima. It is "opencv-self-contained" except for the (easy to implement) function vdist, which computes the (Euclidean) distance between points (r,c) and (row,col), and the trivial Point2DMake helper used to build those points.
The input is a one-channel CV_32F matrix, and locations is an nLocMax (rows) by 2 (columns) CV_32S matrix.
int imregionalmax(Mat input, int nLocMax, float threshold, float minDistBtwLocMax, Mat locations)
{
    Mat scratch = input.clone();
    int nFoundLocMax = 0;
    for (int i = 0; i < nLocMax; i++) {
        Point location;
        double maxVal;
        minMaxLoc(scratch, NULL, &maxVal, NULL, &location);
        if (maxVal > threshold) {
            nFoundLocMax += 1;
            int row = location.y;
            int col = location.x;
            locations.at<int>(i,0) = row;
            locations.at<int>(i,1) = col;
            int r0 = (row-minDistBtwLocMax > -1 ? row-minDistBtwLocMax : 0);
            int r1 = (row+minDistBtwLocMax < scratch.rows ? row+minDistBtwLocMax : scratch.rows-1);
            int c0 = (col-minDistBtwLocMax > -1 ? col-minDistBtwLocMax : 0);
            int c1 = (col+minDistBtwLocMax < scratch.cols ? col+minDistBtwLocMax : scratch.cols-1);
            for (int r = r0; r <= r1; r++) {
                for (int c = c0; c <= c1; c++) {
                    if (vdist(Point2DMake(r, c), Point2DMake(row, col)) <= minDistBtwLocMax) {
                        scratch.at<float>(r,c) = 0.0;
                    }
                }
            }
        } else {
            break;
        }
    }
    return nFoundLocMax;
}
I do not know if it is what you want, but in my answer to this post, I gave some code to find local maxima (peaks) in a grayscale image (resulting from a distance transform).
The approach relies on subtracting the original image from the dilated image and finding the zero pixels.
I hope it helps. Good luck.
I had the same problem some time ago, and the solution was to reimplement the imregionalmax algorithm in OpenCV/C++. It is not that complicated, because you can find the C++ source code of the function in the MATLAB distribution (somewhere in the toolbox folder). All you have to do is read it carefully and understand the algorithm described there. Then rewrite it, or remove the MATLAB-specific checks, and you'll have it.

How to drag a view, along the circumference of an oval?

I have an oval, and a view on the circumference of the oval. When the user tries to drag the view, it should move only along the circumference of the oval. How can I achieve this?
Any sample equation would be helpful. Thanks.
CGPoint ovalCenter;
CGSize ovalSize;
- (CGPoint)constrainPointToOval:(CGPoint)point
{
    float angle = atan2(point.y - ovalCenter.y, point.x - ovalCenter.x);
    // ovalSize holds the oval's semi-axes; offset by the center so the
    // result lies on the oval itself.
    return CGPointMake(ovalCenter.x + ovalSize.width * cosf(angle),
                       ovalCenter.y + ovalSize.height * sinf(angle));
}
You'll need to set ovalCenter and ovalSize elsewhere. Then run the touch position through this before setting the location of the view.
I have figured out a solution for getting a constrained drag along the sides of a square.
If anyone can improve the code or has a better solution, you are most welcome.
- (CGPoint) constrainPointToSquare:(CGPoint) point
{
    float pi = 3.14159265;
    float s1, s2;
    CGPoint squareDragPoint;
    float squareSize = 200.0;
    float angle;
    angle = atan2(point.y - mCenter.y, point.x - mCenter.x);
    float x1 = point.x;
    float x2 = mCenter.x;
    float y1 = point.y;
    float y2 = mCenter.y;

    if (((3*(pi/4) <= angle && pi >= angle) || (-pi <= angle && -3*(pi/4) >= angle))) //left
    {
        s1 = y2 - squareSize;
        s2 = x2 - squareSize * ((y1-y2)/(x1-x2));
        squareDragPoint = CGPointMake(s1, s2);
    }
    else if (((-(pi/4) <= angle && 0.0 >= angle) || (0.0 <= angle && (pi/4) >= angle))) //right
    {
        s1 = y2 + squareSize;
        s2 = x2 + squareSize * ((y1-y2)/(x1-x2));
        squareDragPoint = CGPointMake(s1, s2);
    }
    else if (((-3*(pi/4) <= angle && -(pi/2) >= angle) || (-(pi/4) >= angle && -(pi/2) <= angle))) //top
    {
        s1 = x2 - squareSize;
        s2 = y2 - squareSize * ((x1-x2)/(y1-y2));
        squareDragPoint = CGPointMake(s2, s1);
    }
    else if (((3*(pi/4) >= angle && (pi/2) <= angle) || (pi/4 <= angle && (pi/2) >= angle))) //bottom
    {
        s1 = x2 + squareSize;
        s2 = y2 + squareSize * ((x1-x2)/(y1-y2));
        squareDragPoint = CGPointMake(s2, s1);
    }
    return squareDragPoint;
}

Rotate an image to point towards a known location using the compass heading

As a proof of concept, I want to create an application that retrieves the current coordinates, calculates the direction towards another point and, using the compass, rotates an arrow image so that it points towards that location in space.
I know how to retrieve the current coordinates and how to rotate the image through CGAffineTransformMakeRotation, but I haven't found a formula to calculate the correct angle.
Any hints?
First you need to calculate a bearing. This page gives a neat formula for doing that:
http://www.movable-type.co.uk/scripts/latlong.html
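For reference, the initial-bearing formula from that page, with φ for latitude, λ for longitude, and Δλ = λ2 − λ1, is

θ = atan2( sin Δλ · cos φ2 , cos φ1 · sin φ2 − sin φ1 · cos φ2 · cos Δλ )

which is exactly what the y and x terms in the code below compute before the quadrant handling.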
Then you can do some simple arithmetic to find the difference between that bearing and the heading the iPhone is pointing towards, and rotate your image by that difference (CGAffineTransformMakeRotation expects radians, so convert the difference from degrees first).
Bearing is:
// Degree/radian conversion helpers assumed by the snippets below
// (they are not shown in the original answer):
static inline double d2r(double degrees) { return degrees * M_PI / 180.0; }
static inline double r2d(double radians) { return radians * 180.0 / M_PI; }

double bearingUsingStartCoordinate(CLLocation *start, CLLocation *end)
{
    double tc1;
    tc1 = 0.0;
    //dlat = lat2 - lat1
    //CLLocationDegrees dlat = end.coordinate.latitude - start.coordinate.latitude;
    //dlon = lon2 - lon1
    CLLocationDegrees dlon = end.coordinate.longitude - start.coordinate.longitude;
    //y = sin(lon2-lon1)*cos(lat2)
    double y = sin(d2r(dlon)) * cos(d2r(end.coordinate.latitude));
    //x = cos(lat1)*sin(lat2)-sin(lat1)*cos(lat2)*cos(lon2-lon1)
    double x = cos(d2r(start.coordinate.latitude))*sin(d2r(end.coordinate.latitude)) - sin(d2r(start.coordinate.latitude))*cos(d2r(end.coordinate.latitude))*cos(d2r(dlon));
    if (y > 0)
    {
        if (x > 0)
            tc1 = r2d(atan(y/x));
        if (x < 0)
            tc1 = 180 - r2d(atan(-y/x));
        if (x == 0)
            tc1 = 90;
    } else if (y < 0)
    {
        if (x > 0)
            tc1 = r2d(-atan(-y/x));
        if (x < 0)
            tc1 = r2d(atan(y/x)) - 180;
        if (x == 0)
            tc1 = 270;
    } else if (y == 0)
    {
        if (x > 0)
            tc1 = 0;
        if (x < 0)
            tc1 = 180;
        if (x == 0)
            tc1 = nan(0);
    }
    if (tc1 < 0)
        tc1 += 360.0;
    return tc1;
}
And for those looking for the distance between two points:
double haversine_km(double lat1, double long1, double lat2, double long2)
{
    double dlong = d2r(long2 - long1);
    double dlat = d2r(lat2 - lat1);
    double a = pow(sin(dlat/2.0), 2) + cos(d2r(lat1)) * cos(d2r(lat2)) * pow(sin(dlong/2.0), 2);
    double c = 2 * atan2(sqrt(a), sqrt(1-a));
    double d = 6367 * c;
    return d;
}

double haversine_mi(double lat1, double long1, double lat2, double long2)
{
    double dlong = d2r(long2 - long1);
    double dlat = d2r(lat2 - lat1);
    double a = pow(sin(dlat/2.0), 2) + cos(d2r(lat1)) * cos(d2r(lat2)) * pow(sin(dlong/2.0), 2);
    double c = 2 * atan2(sqrt(a), sqrt(1-a));
    double d = 3956 * c;
    return d;
}