I have a set of points V, which are the vertices of a convex polytope, and a separate point p. I want to check whether p is contained in the polytope (the convex hull of V). To do so, I set up a linear program that checks whether there exists a hyperplane such that all points in V lie on one side, while p lies on the other, like so (using YALMIP):
z = sdpvar(size(p,1),1);    % normal vector of the hyperplane
sdpvar z0;                  % offset of the hyperplane
% vert holds the vertices of V as columns; probs is the point p
LMI = [z'*vert - z0 <= 0, z'*probs - z0 <= 1];
solvesdp(LMI, -z'*probs + z0);
The hyperplane is defined by the set of points x such that z'*x - z0 = 0, so if z'*x - z0 is larger than zero for the point p and smaller than (or equal to) zero for all vertices, then I know the two are separated by the plane (the second constraint is just there so the problem is bounded). This works fine. However, now I want to check whether there is a hyperplane separating the two point sets that also contains the origin. For this, I simply set z0 = 0, i.e. drop it entirely, getting:
z = sdpvar(size(p,1),1);    % normal vector; z0 is dropped so the plane passes through the origin
LMI = [z'*vert <= 0, z'*probs <= 1];
solvesdp(LMI, -z'*probs);
Now, however, even for cases in which I know there is a solution, it doesn't find it, and I'm at a loss for understanding why. As a test, I've used the vertices
v1=[0;0;0];v2=[1;0;0];v3=[0;1;0];v4=[1;1;1];
and the point
p=[0.4;0.6;0.6];
When plotted, that looks like the picture here.
So it's clear that there should be a plane through the origin (which is the front-and-center vertex of the polytope) separating the lone point from the polytope.
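As a sanity check (plain MATLAB, no solver), a hand-picked direction such as z = [-1; -1; 2] already separates the test data, so a plane through the origin certainly exists:
vert = [0 1 0 1; 0 0 1 1; 0 0 0 1];   % v1..v4 as columns
p    = [0.4; 0.6; 0.6];
z    = [-1; -1; 2];
disp(z' * vert)   % [0 -1 -1 0], all <= 0
disp(z' * p)      % 0.2, strictly > 0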
One thing I've tried already is to offset the vertex of the polytope that sits at the origin by a small amount (10^-5), so that the plane would not have to touch the polytope (although the LP should allow for that), but that didn't work either.
I'm grateful for any ideas!
I am reviewing some MATLAB code that is publicly available at the following location:
https://github.com/mattools/matGeom/blob/master/matGeom/geom2d/orientedBox.m
This is an implementation of the rotating calipers algorithm on the convex hull of a set of points, used to compute an oriented bounding box. My goal in reviewing it was to understand intuitively how the algorithm works; however, I would like clarification on certain lines in the file that confuse me.
On line 44: hull = bsxfun(@minus, hull, center);. This appears to translate all the points in the convex hull so that the calculated centroid is at (0,0). Is there any particular reason why this is done? My only guess is that it allows straightforward rotational transforms later in the code, as rotating about the original origin would cause significant problems.
On lines 71 and 74: indA2 = mod(indA, nV) + 1; and indB2 = mod(indB, nV) + 1;. Is this a trick to prevent the access index from going out of bounds? My guess is that it rolls the index over upon reaching the end, so access stays within bounds.
On line 125: y2 = - x * sit + y * cot;. This must be the correct transformation, since the code behaves properly, but I am not sure why it is used here and why it differs from the other rotational transforms done before and after (with the calls to rotateVector). My best guess is that I am simply not visualizing the required rotation correctly.
Side note: The external functions vectorAngle, rotateVector, createLine, and distancePointLine can all be found in the same repository, in files named after the function (as is standard in MATLAB). They are relatively uninteresting and do what you would expect, apart from some normalization of vector angles.
I'm the author of the above piece of code, so I can give some explanations about it:
First of all, the algorithm is indeed a rotating calipers algorithm. In the current implementation, only the width is tested (I did not check the west and east vertices). In practice, it seems the two results correspond most of the time.
Line 44: the goal of translating to the origin was to improve numerical accuracy. When a polygon is located far away from the origin, its coordinates may be large yet close together. Many computations involve products of coordinates. By translating the polygon to be centred on the origin, the coordinates are smaller, and the precision of the resulting products is expected to improve. To be honest, I did not observe this effect directly; it is more a careful way of coding than a fix.
Lines 71-74: yes. The idea is to find the index of the next vertex along the polygon. If the current vertex is the last vertex of the polygon, then the next vertex index should be 1. The modulo maps the index into the range 0 to N-1, and adding 1 brings it back to MATLAB's 1-based indexing. The two lines ensure correct cyclic iteration.
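To make the wrap-around concrete, a tiny example (nV = 5 is only an illustration):
nV = 5;                      % number of hull vertices (example)
ind = 1:nV;                  % current vertex indices 1..5
indNext = mod(ind, nV) + 1;  % next vertex indices: [2 3 4 5 1]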
Line 125: there are several transformations involved. With the rotateVector() function, one simply computes the minimal width for a given edge. On line 125, one rotates the points (of the convex hull) to align them with the "best" direction (the one that minimizes the width). The last change of coordinates (lines 132->140) is due to the fact that the center of the oriented box differs from the centroid of the polygon, so a shift is added, which is then corrected by the rotation.
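Regarding line 125: with cot = cos(theta) and sit = sin(theta), the formula y2 = -x*sit + y*cot is the second component of a rotation by -theta (presumably paired with x2 = x*cot + y*sit on the neighbouring line), i.e. the points are rotated so that the chosen direction theta becomes the x-axis. A small numerical check of that interpretation:
theta = pi/6;                                            % example angle
Rneg  = [cos(theta) sin(theta); -sin(theta) cos(theta)]; % rotation by -theta
pt    = [1; 2];                                          % example point (x; y)
pt2   = Rneg * pt;     % pt2(2) equals -pt(1)*sin(theta) + pt(2)*cos(theta)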
I did not really look at the code; this is an explanation of how the rotating calipers work.
A fundamental property is that the tightest bounding box is such that one of its sides overlaps an edge of the hull. So what you do is essentially:
try every edge in turn;
for a given edge, viewed as horizontal and lying on the south side, find the farthest vertices to the north, west and east;
evaluate the area or the perimeter of the rectangle that they define;
remember the best one.
It is important to note that when you switch from one edge to the next, the N/W/E vertices can only move forward, and are readily found by finding the next decrease of the relevant coordinate. This is how the total processing time is linear in the number of edges (the search for the initial N/E/W vertices takes 3(N-3) comparisons, then the updates take 3(N-1)+Nn+Nw+Ne comparisons, where Nn, Nw, Ne are the number of moves from a vertex to the next; obviously Nn+Nw+Ne = 3N in total).
The modulos are there to implement the cyclic indexing of the edges and vertices.
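To make the "box aligned with a hull edge" property concrete, here is a brute-force O(N^2) sketch (not the linear-time caliper updates described above); hull is assumed to be an N-by-2 list of convex-hull vertices in order:
function [bestArea, bestEdge] = minAreaRectBrute(hull)
% For each hull edge, compute the area of the bounding rectangle aligned
% with that edge, and keep the smallest one.
nV = size(hull, 1);
bestArea = inf;
bestEdge = 0;
for i = 1:nV
    j = mod(i, nV) + 1;             % next vertex (cyclic indexing)
    d = hull(j,:) - hull(i,:);      % edge direction
    d = d / norm(d);
    n = [-d(2), d(1)];              % edge normal
    u = hull * d';                  % coordinates along the edge direction
    v = hull * n';                  % coordinates along the normal
    area = (max(u) - min(u)) * (max(v) - min(v));
    if area < bestArea
        bestArea = area;
        bestEdge = i;
    end
end
end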
Given a calibrated stereo pair the following things are known:
Camera Intrinsics
Essential Matrix
Relative transformation
A set of keypoint matches (matches satisfy epipolar constraint)
I want to filter out wrong matches by "projecting" the orientation of one keypoint into the other image and comparing it to the orientation of the matched keypoint.
My solution idea is the following:
Given the match (p1, p2) with orientations (o1, o2), I compute the depth z of p1 by triangulation. I now create a second point close to p1, shifted a few pixels along the orientation vector: p1' = p1 + o1. After that, I compute the 3D point of p1' using the same depth z and project it back into image 2, yielding p2'. The projected orientation is then o2' = p2' - p2, which can be compared with o2.
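A rough sketch of that idea in MATLAB (everything here is illustrative: K1, K2 are the intrinsics, [R, t] is the pose of camera 2 relative to camera 1, and z is the triangulated depth of p1):
function o2proj = projectOrientation(p1, o1, p2, z, K1, K2, R, t)
% p1, p2: 2x1 matched keypoints; o1: 2x1 orientation vector in image 1
% z: triangulated depth of p1 in camera-1 coordinates
p1s = p1 + o1;                 % keypoint shifted along its orientation
Xs  = z * (K1 \ [p1s; 1]);     % back-project the shifted point at the same depth z
x2s = K2 * (R * Xs + t);       % transform into camera 2 and project
p2s = x2s(1:2) / x2s(3);
o2proj = p2s - p2;             % predicted orientation, to be compared with o2
end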
Does that algorithm work? Are there better ways (for example using the essential matrix)?
While your idea sounds very interesting at first, I don't think it can work, because your way of computing the depth of p1' will inevitably lead to wrong keypoint orientations in the second image. Consider this example I came up with:
Assume that p1 is back-projected to Q. Now, since you can't know the depth of p1', you set it to z, thus back-projecting p1' to Q'. However, imagine that the true 3D point corresponding to p1' is the one shown in green, Q_t. In that case, the correct orientation in the second image is c-b, while your solution computes a-b, which is a wrong orientation.
A better solution, in my opinion, is to fix the pose of one of the two cameras, triangulate all the matches that you have, and do a small bundle adjustment (preferably using a robust kernel) where you optimize all the points but only the non-fixed camera. This should take care of a lot of outliers. It will change your estimate of the Essential matrix, though I think it will probably improve it.
Edit:
The example above used large distances for visibility, and ignored the fact that a, b and c are not necessarily collinear. However, assume that p1' is close enough to p1 that Q' is close to Q. I think we can agree that most of the matches that pass the test would be in a configuration similar to this:
In that case, c and a both lie on the epipolar line in camera 2 given by the projection of the ray through camera center 1 and Q'. But b is not on that line (it lies on the epipolar line corresponding to Q). So the vectors a-b and c-b will differ by some angle.
There are also two other issues with the method, and they are related to this question: how do you determine the size of the vector o1? I assume it would be a good idea to define it as some_small_coef*(1/z), because o1 will need to be smaller for distant objects. The two problems are then:
if you are in an urban setting with, for example, buildings that are somewhat far away, z grows, and the size of o1 may need to be smaller than the width of one pixel;
assuming you overcome that problem, the value of some_small_coef will need to be determined separately for different image pairs (what if you go from indoors to outdoors?).
The question is
a. Write a function that finds the circle with minimal area such that it bounds a given list of points (use fminsearch and give an appropriate plot).
b. If you managed, do the same for a sphere (find one with minimal volume).
What I've tried so far:
%%Main function
function minarea = mincircle(points)
maxx = max(points(1,:));
maxy = max(points(2,:));
radius = max(maxx, maxy);
minarea = fminsearch(@(x) circle(x,r,c), [0,0])
end
%%This function is supposed to give the equation of the circle
function eq = circle(x,r,c)
eq = (x(1)-c(1)).^2 + (x(2)-c(2)).^2 %=r?
% and here I don't know how to insert r:
end
For better understanding, I'll attach a sketch.
In these terms, I want to find the area of the circle whose center is at O.
Note: I don't believe that the circle you drew is the smallest possible bounding circle. It should be a little smaller, up and to the right, and should touch at least two points on its perimeter.
Approaching the problem
We have a set of points, and we want to draw a circle that encompasses all of them. The problem is that you need three bits of information to define a circle: the X and Y coordinates of the circle's center, and the circle's radius. So the problem doesn't seem straightforward.
However, there is a related problem that is much easier to solve. Suppose the circle's center is fixed. From that point, we make a circle grow concentrically outwards so that it becomes bigger and bigger. At some point, the circle will encompass one of the points in our set. As it gets bigger, it will encompass a second point, and a third, until all the points in our set fall within our circle. Clearly, as soon as the last point in the set falls within our circle, we have the smallest possible circle that encompasses all the points, given that we started by fixing the center point of the circle.
Moreover, we can determine what the radius of this circle is. It is simply the maximum distance from any point in the set to the center of the circle, since we stop when the last point is touched by the perimeter of our expanding circle.
The next problem is to determine: what is the best point at which to place the center of our circle? Clearly, if the starting point is far away from all the points in our set, then the radius must be very large to encompass even one point in the set. Intuitively, it must be "in the middle" of our points somewhere. But where, exactly?
Using fminsearch
My suggestion is that you want to find the point P(x, y) that minimises how large you have to grow the circle to encompass all the points in the set. And we're in luck that we can use fminsearch to find P.
According to the fminsearch documentation, the function you pass in must be a function of one parameter (which may be an array), and it must return a scalar. The idea is that you want the output of your function to be as small as possible, and you want to find out what inputs will make that possible.
In our case, we want to write a function that outputs the size of our circle, given the center of the circle as input. That way, fminsearch will find the center of the smallest possible circle that will still encompass all the points. I'm going to write a function that outputs the radius required to encompass all the points given a center point P.
pointsX = [..]; % X-coordinates of points in the set
pointsY = [..]; % Y-coordinates of points in the set
% radiusFromPoint needs access to pointsX and pointsY, e.g. by being a
% nested function inside the function where they are defined.
function r = radiusFromPoint(P)
    px = P(1);
    py = P(2);
    distanceSquared = (pointsX - px).^2 + (pointsY - py).^2;
    r = sqrt(max(distanceSquared));
end
Then we want to use fminsearch to find the point that gives us the smallest radius. I've just naively used the origin (0, 0) as my starting estimate, but you may have a better idea (like using the first point in the set).
P0 = [0, 0]; % starting estimate
[P, radiusMin] = fminsearch(@radiusFromPoint, P0);
The circle is defined by its center at P and radius of radiusMin.
And I'll leave it to you to plot the output and generalize to the 3D case!
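That said, for the 3D case (part b), the same idea carries over directly; here is a minimal sketch, assuming the points are given as an N-by-3 matrix pts (the random data is only a placeholder):
pts = randn(20, 3);                                         % placeholder data
radiusFrom = @(C) sqrt(max(sum(bsxfun(@minus, pts, C).^2, 2)));
C0 = mean(pts, 1);                                          % start at the centroid
[C, rMin] = fminsearch(radiusFrom, C0);
% The (approximately) minimal bounding sphere has center C and radius rMin.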
Actually, while you may need it to complete your homework assignment (I assume that is what this is), you don't really need to use an optimizer at all. The minboundcircle code posted with my minimal bounding tools does it without using an optimizer. (There is also a minboundsphere tool.)
Regardless, you might find a few tricks in there that will be useful. At the very least, learn how to reduce the size of the problem (and so reduce the solution time) by using a convex hull. After all, it is only the points on the convex hull that can determine a minimal bounding circle. All other points are simply a waste of CPU time.
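For example, the convex-hull reduction is a one-liner in MATLAB (using the pointsX and pointsY vectors from the earlier answer); only the hull vertices need to be passed to the optimizer or to minboundcircle:
k = convhull(pointsX, pointsY);   % indices of the convex hull vertices
hullX = pointsX(k);
hullY = pointsY(k);
% run the fminsearch approach (or minboundcircle) on hullX, hullY only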
I have two ellipsoids in R3 described in terms of their centre points (P), their axes lengths (a,b,c), and their rotation vector (R). I wish to interpolate a tubular structure between these two ellipsoids along a given centre line. This is done by creating an ellipsoid centred at each point along the centre line. Its axes lengths are interpolated linearly between those at the two endpoints, and the rotation is obtained as a quaternion using spherical linear interpolation, or SLERP.
I previously asked a similar question on this problem here. I have since isolated the issue a little further, and thought it warranted a new post. The difference here is that before doing SLERP, I first rotate the two reference ellipsoids by the inverse of the rotation matrix that describes one of them, such that one of them is now axis-aligned (i.e. has no rotation). Previously this appeared to solve the problem, but I have encountered an example where this fix does not work.
The source code to reproduce this issue is available here. The relevant function is ellipsoidSLERP and the functions it calls. Here is a screenshot of the output:
What you are seeing is an interpolation of ellipsoid volumes (blue) between two reference ellipsoid volumes at either end (green) along a centreline (cyan).
Problem Statement
The interpolation on the left works correctly, resulting in a smooth tubular structure. The interpolation on the right does not work correctly, and results in a twist.
What is causing this behaviour, and how can I correct it?
Please let me know if there's anything I can do to clarify.
I'm not sure if this would be better asked on Mathoverflow, but I thought I would check here first. I have tried to be as clear and concise as possible; if there is anything that needs clearing up please let me know.
Background
I have two sets of points in R3 that are distributed in the form of (more-or-less) arbitrarily oriented ellipsoids. I wish to interpolate a tubular structure between these two ellipsoids. I also have coordinates of the desired centre line of this tubular structure.
I approximate the ellipsoids at either end with a minimum volume enclosing ellipsoid using the Khachiyan Algorithm implemented in Matlab, [1] which returns the coordinates of the centre of the ellipsoid (C) and the matrix of the ellipsoid in centre form (A), such that:
(x - C)' * A * (x - C) = 1
I then extract the ellipsoid's axes lengths (a,b,c) and the rotation matrix (V) using singular value decomposition:
[U,D,V] = svd(A);
a = 1/sqrt(D(1,1));
b = 1/sqrt(D(2,2));
c = 1/sqrt(D(3,3));
I can easily interpolate the axes length parameters (e.g. linear, spline). To interpolate between the orientations, I first convert the rotation matrices to quaternion representation. Then for each point along the centre line, I use spherical linear interpolation (SLERP) implemented in another Matlab file [2]:
for iPoint = 1 : nPoints
    t = iPoint / (nPoints + 2);
    quat = slerp(startQuat, endQuat, t, 0.001);
    R = quat2rot(quat);
end
This is where I get stuck.
Unfortunately, even though SLERP "gives a straightest and shortest path between its quaternion endpoints," [3] the resulting interpolated ellipsoids are sometimes rotating in the "wrong" direction. That is, rather than resulting in a smooth tube, the interpolation results in a sort of twisted elliptical cylinder (see attached image, below).
I have tried checking to see if the dot product of the two quaternions is negative and if so, inverting one of them using quatinv. However, inverting results in something completely incorrect (see second attached image, below).
My question is: why is this happening, and what can I do to correct for this behavior? That is, how can I interpolate along the "true" shortest path between the two ellipsoid orientations?
Any suggestions would be greatly appreciated!
UPDATE
I have created a minimum working example and a required data file. I have also attached a screenshot of the result. I've zipped these up and uploaded them to Dropbox. [4]
[1] http://www.mathworks.com/matlabcentral/fileexchange/9542-minimum-volume-enclosing-ellipsoid/content/MinVolEllipse.m
[2] http://www.mathworks.com/matlabcentral/fileexchange/11827-slerp/content/slerp.m
[3] https://en.wikipedia.org/wiki/Slerp
[4] https://dl.dropboxusercontent.com/u/38218/ellipsoidInterpolation.zip
The solution was to rotate everything by the inverse of the rotation matrix of one of the reference ellipsoids, so that this reference ellipsoid became axis-aligned (i.e. had no rotation). Then, after interpolating each ellipsoid, I rotate it back to the original reference frame by multiplying by the original rotation matrix.
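A sketch of that fix, assuming R1 and R2 are the rotation matrices of the two reference ellipsoids and that a rotation-matrix-to-quaternion conversion (here called rot2quat, the inverse of the quat2rot used above) is available:
R2rel = R1' * R2;                      % ellipsoid 2's rotation in ellipsoid 1's frame
startQuat = rot2quat(eye(3));          % ellipsoid 1 is now axis-aligned
endQuat   = rot2quat(R2rel);
for iPoint = 1 : nPoints
    t = iPoint / (nPoints + 2);
    quat = slerp(startQuat, endQuat, t, 0.001);
    Rinterp = R1 * quat2rot(quat);     % rotate back into the original frame
    % ... build the interpolated ellipsoid at this centreline point using Rinterp
end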
I've attached a screenshot of the result:
Update
Apparently this does not work in every case. I have posted a new question here.