Consider a line joining two points X(a, b, c) and Y(d, e, f) in a 3-dimensional array. How do I find the indices of all the points between them along the line, other than by least squares?
You should definitely stay away from least squares, which would browse the entire 3D space if I understood you correctly. Instead, have a look at Bresenham's line algorithm.
Basically you start with the starting cube, compute the line's gradient in each XYZ direction, and start marching.
You keep marching in the X direction (for example) until the line is no longer inside the current cube, then you switch to whichever other direction (Y or Z) brings the line back into the current cube. And so on and so forth until the current cube is the target.
All the usual references describe the 2D case, but the process in 3D is exactly the same.
The trickier bit resides in choosing which direction to start marching in. There is an algorithm for the 3D case, which could also be adapted to 2D.
Notes:
A cool optimization: each time you march in a given direction, you can march Nx, Ny, or Nz steps straight. These three numbers can be computed beforehand and will never change.
A cooler optimization: you only have to compute the order of the X-Y-Z iteration (which might well be Y, X, Z in some cases) once, at the beginning. Then the marching is nicely periodic and stays the same until the target is reached.
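In code, a minimal sketch of that march could look like the following (Python, variable names are mine). It is the straightforward 3D generalisation of Bresenham: the axis with the largest delta drives the loop, and two error terms decide when to also step along the other two axes. It visits one cell per step of the driving axis; if you need every cell the segment touches, the error tests have to be adjusted.

def line_voxels(x0, y0, z0, x1, y1, z1):
    # Return the grid indices visited on the segment (x0,y0,z0) -> (x1,y1,z1).
    dx, dy, dz = abs(x1 - x0), abs(y1 - y0), abs(z1 - z0)
    sx = 1 if x1 >= x0 else -1
    sy = 1 if y1 >= y0 else -1
    sz = 1 if z1 >= z0 else -1
    points = [(x0, y0, z0)]
    if dx >= dy and dx >= dz:            # X drives the march
        ey, ez = 2 * dy - dx, 2 * dz - dx
        for _ in range(dx):
            x0 += sx
            if ey >= 0: y0 += sy; ey -= 2 * dx
            if ez >= 0: z0 += sz; ez -= 2 * dx
            ey += 2 * dy; ez += 2 * dz
            points.append((x0, y0, z0))
    elif dy >= dx and dy >= dz:          # Y drives the march
        ex, ez = 2 * dx - dy, 2 * dz - dy
        for _ in range(dy):
            y0 += sy
            if ex >= 0: x0 += sx; ex -= 2 * dy
            if ez >= 0: z0 += sz; ez -= 2 * dy
            ex += 2 * dx; ez += 2 * dz
            points.append((x0, y0, z0))
    else:                                # Z drives the march
        ex, ey = 2 * dx - dz, 2 * dy - dz
        for _ in range(dz):
            z0 += sz
            if ex >= 0: x0 += sx; ex -= 2 * dz
            if ey >= 0: y0 += sy; ey -= 2 * dz
            ex += 2 * dx; ey += 2 * dy
            points.append((x0, y0, z0))
    return points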
I am reviewing some MATLAB code that is publicly available at the following location:
https://github.com/mattools/matGeom/blob/master/matGeom/geom2d/orientedBox.m
This is an implementation of the rotating calipers algorithm on the convex hull of a set of points, used to compute an oriented bounding box. My aim in reviewing it was to understand intuitively how the algorithm works; however, I would like clarification on certain lines within the file that I am confused about.
On line 44: hull = bsxfun(@minus, hull, center);. This appears to translate all the points of the convex hull so that the computed centroid lies at (0,0). Is there any particular reason why this is done? My only guess is that it allows straightforward rotational transforms later on in the code, as rotating about the real origin would cause significant problems.
On lines 71 and 74: indA2 = mod(indA, nV) + 1; and indB2 = mod(indB, nV) + 1;. Is this a trick to keep the access index from going out of bounds? My guess is that, to prevent out-of-bounds access, it rolls the index over upon reaching the end.
On line 125: y2 = - x * sit + y * cot;. This transformation is evidently correct, since the code behaves properly, but I am not sure why it is used here and why it differs from the other rotational transforms done both earlier and later (the calls to rotateVector). My best guess is that I am simply not visualizing the required rotation correctly in my head.
Side note: The external function calls vectorAngle, rotateVector, createLine, and distancePointLine can all be found under the same repository, in files named after the function name (as per MATLAB standard). They are relatively uninteresting and do what you would expect aside from the fact that there is normalization of vector angles going on.
I'm the author of the above piece of code, so I can give some explanations about it:
First of all, the algorithm is indeed a rotating calipers algorithm. In the current implementation, only the width is tested (I did not check the west and east vertices). In practice, the two results seem to correspond most of the time.
Line 44 -> the goal of translating to the origin was to improve numerical accuracy. When a polygon is located far away from the origin, its coordinates may be large and close together. Many computations involve products of coordinates. By translating the polygon around the origin, the coordinates are smaller, and the precision of the resulting products is expected to be improved. Well, to be honest, I have not demonstrated this effect directly; it is more a careful way of coding than a fix…
Lines 71-74 -> Yes. The idea is to find the index of the next vertex along the polygon. If the current vertex is the last vertex of the polygon, then the next vertex index should be 1. The modulo rescales the index into the range 0 to N-1, and the +1 brings it back to MATLAB's 1-based indexing. The two lines ensure correct iteration.
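A quick illustration of that wrap-around (Python here, but mirroring the 1-based MATLAB expression mod(ind, nV) + 1):

nV = 5                                  # number of vertices, for example

def next_vertex(ind):
    # mod(ind, nV) maps ind = nV to 0, and the +1 restores 1-based
    # indexing, so the last vertex wraps around to the first one.
    return ind % nV + 1

print([next_vertex(i) for i in range(1, nV + 1)])   # [2, 3, 4, 5, 1]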
Line 125 -> There are several transformations involved. Using the rotateVector() function, one simply computes the minimal width for a given edge. On line 125, one rotates the points (of the convex hull) to align them with the “best” direction (the one that minimizes the width). The last change of coordinates (lines 132->140) is due to the fact that the center of the oriented box is different from the centroid of the polygon. So we add a shift, which is corrected by the rotation.
I did not really look at the code, this is an explanation of how the rotating calipers work.
A fundamental property is that the tightest bounding box is such that one of its sides overlaps an edge of the hull. So what you do is essentially
try every edge in turn;
for a given edge, seen as being horizontal, south, find the farthest vertices north, west and east;
evaluate the area or the perimeter of the rectangle that they define;
remember the best area.
It is important to note that when you switch from an edge to the next, the N/W/E vertices can only move forward, and are readily found by finding the next decrease of the relevant coordinate. This is how the total processing time is linear in the number of edges (the search for the initial N/E/W vertices takes 3(N-3) comparisons, then the updates take 3(N-1)+Nn+Nw+Ne comparisons, where Nn, Nw, Ne are the number of moves from a vertex to the next; obviously Nn+Nw+Ne = 3N in total).
The modulos are there to implement the cyclic indexing of the edges and vertices.
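For readers who prefer code to prose, here is a minimal sketch of the property above (Python, with my own variable names): try every hull edge, rotate the hull so that the edge is horizontal, and keep the smallest axis-aligned box. This naive version re-rotates the whole hull for every edge, so it is O(N^2); the forward-only updates described above are what make the real rotating calipers linear.

import numpy as np

def oriented_box_naive(hull):
    # hull: (N, 2) array of convex-hull vertices in order.
    best_area, best = np.inf, None
    n = len(hull)
    for i in range(n):
        p, q = hull[i], hull[(i + 1) % n]            # current edge (cyclic index)
        theta = np.arctan2(q[1] - p[1], q[0] - p[0])
        c, s = np.cos(-theta), np.sin(-theta)
        rot = hull @ np.array([[c, -s], [s, c]]).T   # edge becomes horizontal
        mins, maxs = rot.min(axis=0), rot.max(axis=0)
        area = np.prod(maxs - mins)
        if area < best_area:
            best_area = area
            cx, cy = (mins + maxs) / 2               # box centre, rotated frame
            best = (np.cos(theta) * cx - np.sin(theta) * cy,   # centre back in
                    np.sin(theta) * cx + np.cos(theta) * cy,   # the input frame
                    maxs[0] - mins[0], maxs[1] - mins[1], theta)
    return best   # (centre_x, centre_y, width, height, angle)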
Given a calibrated stereo pair the following things are known:
Camera Intrinsics
Essential Matrix
Relative transformation
A set of keypoint matches (matches satisfy epipolar constraint)
I want to filter out wrong matches by "projecting" the orientation of one keypoint to the other image and compare it to the orientation of the matched keypoint.
My solution idea is the following:
Given the match (p1, p2) with orientations (o1, o2), I compute the depth z of p1 by triangulation. I then create a second point close to p1, shifted a few pixels along the orientation vector: p1' = p1 + o1. After that, I compute the 3D point of p1' using z and project it back into image 2, yielding p2'. The projected orientation is then o2 = p2' - p2.
Does that algorithm work? Are there better ways (for example using the essential matrix)?
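In pseudo-code, my idea would look roughly like this (a sketch with made-up names; K1 and K2 are the intrinsics and R, t the pose of camera 2 with respect to camera 1):

import numpy as np

def predict_orientation(p1, o1, z, K1, K2, R, t, step=3.0):
    # Back-project a pixel of image 1 at the given depth into 3D (camera-1 frame).
    def backproject(p, depth):
        return depth * (np.linalg.inv(K1) @ np.array([p[0], p[1], 1.0]))
    # Project a 3D point (camera-1 frame) into image 2.
    def project(X):
        x = K2 @ (R @ X + t)
        return x[:2] / x[2]
    p1s = np.asarray(p1, float) + step * np.asarray(o1, float)   # p1' = p1 + o1
    p2  = project(backproject(p1,  z))       # projection of the triangulated point
    p2s = project(backproject(p1s, z))       # same depth z assumed for p1'
    o2_pred = p2s - p2                       # could also subtract the matched p2
    return o2_pred / np.linalg.norm(o2_pred)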
While your idea sounds very interesting at first, I don't think it can work, because your way of computing the depth of p1' will inevitably lead to wrong keypoint orientations in the second image. Consider this example I came up with:
Assume that p1 is reprojected to Q. Now, since you can't know the depth of p1', you set it to z, thus back-projecting p1' to Q'. However, imagine that the true depth that corresponds to p1' is the point shown in green, Q_t. In that case, the correct orientation in the second image is c-b, while with your solution we have computed a-b, which is a wrong orientation.
A better solution, in my opinion, is to fix the pose of one of the two cameras, triangulate all the matches that you have, and run a small bundle adjustment (preferably using a robust kernel) in which you optimize all the points but only the non-fixed camera. This should take care of a lot of outliers. It will change your estimate of the Essential matrix, but I think it is likely to improve it.
Edit:
The example above used large distances for visibility, and glossed over the fact that a, b and c are not necessarily collinear. However, assume that p1' is close enough to p1 that Q' is close to Q. I think we can agree that most of the matches that passed the test would be in a configuration similar to this:
In that case, c and a both lie on the epipolar line given by the projection of Q' and camera center 1 in camera 2. But, b is not on that line (it is on the epipolar line corresponding to Q). So, the vectors a-b and c-b will be different by some angle.
But there are also two other issues with the method, both related to this question: how do you determine the size of the vector o1? I assume it would be a good idea to define it as some_small_coef*(1/z), because o1 will need to be smaller for distant objects. So, the two other problems are:
if you are in an urban setting with, for example, buildings that are a bit far away, z grows, and the size of o1 will need to be smaller than the width of one pixel;
assuming you overcome that problem, the value of some_small_coef will need to be determined separately for different image pairs (what if you go from indoors to outdoors?).
I painted a scheme/diagram that makes it easier to understand my question. (The angles are 0°, 90°, 180°, ... for technical reasons: MS Paint won't rotate by arbitrary degrees, but my data does.) I need the B arrows relative to A, in terms of both coordinates and relative angle. I subtract A's coordinates from both A and B, so A always sits at (0,0) and B keeps its relative distance. How can I do the same with the angles of the arrows?
The data I have is situations 1, 2 and 3.
I have A's and B's coordinates and directions. I need to translate/rotate/normalize (egocentric - whatever the right word might be) to get to situations 4, 5 and 6 respectively. In the end, all the data (4, 5, 6) pooled together will look like 7, with the new coordinates and directions of B, because then A is always at the center and heading up. I would think something like this is used often in other contexts, and I am hoping for a short function or a hint/topic to search for.
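To make this concrete, here is a sketch of the transform I am after (Python; it assumes headings grow counter-clockwise with 0° pointing up, so the convention may need adapting):

import numpy as np

def egocentric(ax, ay, a_heading_deg, bx, by, b_heading_deg):
    # 1. Translate so that A sits at the origin.
    dx, dy = bx - ax, by - ay
    # 2. Rotate everything by minus A's heading so that A points "up".
    phi = np.radians(-a_heading_deg)
    c, s = np.cos(phi), np.sin(phi)
    rel_x = c * dx - s * dy
    rel_y = s * dx + c * dy
    # 3. B's direction relative to A is just the difference of headings.
    rel_heading = (b_heading_deg - a_heading_deg) % 360.0
    return rel_x, rel_y, rel_heading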
In the following screen shot:
when you drag the tail of the word balloon (the thing that connects the balloon to the person's mouth), the shape curves (as illustrated by the difference between the two balloon tails in the picture). I'm wondering how this is done. I'm assuming you need to start with a CGPath and do something to it; does anyone happen to know what that is?
Update: So if I wanted to curve the following shape:
Would I use the following code:
CGPathAddCurveToPoint(mutablePath, NULL, x1, y1, x2, y2 + constant, x5, y5);
CGPathAddCurveToPoint(mutablePath, NULL, x3, y3, x4, y4 + constant, x5, y5);
Where the constant readjusts the y position of point 2 and point 4 to make the curve?
You need to exploit the fact that, mathematically, a straight-line segment is just a kind of curve segment.
(It's easier than it sounds, trust me.)
Bézier path segments have something called “order” that essentially determines how many points there are in the segment, not counting the point you're coming from.
A straight-line segment is a first-order curve, meaning that it only has the destination point. These “curves” are always straight lines because there are no control points to curve toward.
Quadratic curves are second-order curves (one control point plus the destination).
Cubic curves are third-order curves (two control points).
(The math doesn't put any limit on this, but Quartz stops here. No fourth-order curves for you without rolling your own rasterizer.)
This matters because any lower-order curve—including a straight line—can be expressed as a higher-order curve.
So, the secret?
For even a straight tail, use a curve.
(Namely, a cubic curve, since you want the curve going in two different directions: One, more or less into the tail, and the other, more or less along the edge of the balloon.)
From each of the two points at the base of the tail, you want one of the control points to be about halfway to the destination. This much is unconditional.
The direction of each of the control points gives you three options:
The straight-out tail
Notice the two control points along the blue line at the vertical center of the image.
Notice the direction of these two control points, relative to the base point it's connected to. They are angled inward, toward the tip—indeed, exactly on the straight line to the tip.
The oblique tail
Here, the tip point is no longer horizontally between the two base points. The control points have moved, but only to follow: each one is still halfway along the straight line between the corresponding base point and the tip.
The curved tail
For a curved tail, you move the tip, but you keep the control points at the same position as for a straight tail. Thus, the tail starts out straight out (following the control points), but as it gets farther from the base points, their influence wanes, and the tail begins curving toward the tip.
This is a lot easier to describe visually than to put into code, so you may want to consider using something like PaintCode or Opacity to draw each kind of tail using a pen tool and then see what the code they generate for it looks like.
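To make that concrete, here is a small sketch of the geometry (Python just for the arithmetic; the names, and the choice of the second control point, are mine - the description above only pins down the first one):

def tail_side_curve(base, straight_tip, actual_tip):
    # One side of the tail, from one base point to the tip.
    # First control point: halfway from the base towards the *straight*
    # tip position, as described above.
    cp1 = ((base[0] + straight_tip[0]) / 2.0,
           (base[1] + straight_tip[1]) / 2.0)
    # Second control point: one simple choice is the straight-tip position
    # itself, so the curve keeps heading that way before bending.
    cp2 = straight_tip
    # If actual_tip == straight_tip, all four points are collinear and the
    # cubic degenerates into a straight segment; moving actual_tip keeps
    # the start direction and bends the rest of the curve towards the new
    # tip.  Feed (cp1, cp2, actual_tip) to a cubic call such as
    # CGPathAddCurveToPoint.
    return cp1, cp2, actual_tip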
You can use the CGContextAddCurveToPoint() function:
CGContextMoveToPoint(ctx, x, y);
CGContextAddCurveToPoint(ctx, outTangentX, outTangentY, inTangentX, inTangentY, newX, newY);
... // more points or whatever you need here
CGContextDrawPath(ctx, kCGPathFillStroke); // fill (e.g. with white) and stroke the edges (e.g. with black) in one call; CGContextFillPath alone would clear the path before the stroke
The in/out tangents can be hardcoded to be something that looks good based on the point on the mouth of the picture and the point where it meets the balloon bubble. You might try something like making their angles half-way between perpendicular and the slope of the straight line between the 2 points or something like that as a starting place.
I have a set of 50 points in x, y. I need to draw the smoothest Bézier curve that passes through all the points, or in other words, the Bézier that best fits the points.
How do I do that? Thanks.
I am working on a similar problem in 3D. It is slightly easier in 2D, because lines will always intersect if they are not parallel.
Firstly, read up on quadratic Bézier curves. Each curve is defined by three points, and the curve will not pass through the middle (control) point. Thus, your middle point cannot be one of the points you are trying to fit, or the curve won't go through it.
Instead, the beginning and end point of your quadratic bezier curve must be two consecutive points you want it to pass through. So what is your middle point going to be?
One way of solving this (I have never tried it myself, hence it might not look perfect; I'm thinking off the top of my head) is to take the tangent from your -1st data point through your 0th data point, and find its intersection with the line from the 2nd data point through the 1st data point. Then draw the segment between the 0th and 1st data points using this intersection as the middle Bézier control point.
Obviously you may have trouble at the ends of the curve; that may require some inventive thinking to make them look good (the first point has no -1st point).
Sorry about the lack of diagrams. I would draw one but I'm on an iPad.
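In rough Python, the control point for the segment from point 0 to point 1 would be something like this (my sketch of the idea; the endpoint handling mentioned above is left out):

def quad_control_point(p_prev, p0, p1, p_next):
    # Control point = intersection of the line through (p_prev, p0) and
    # the line through (p_next, p1).  Returns None if they are parallel.
    x1, y1 = p_prev; x2, y2 = p0
    x3, y3 = p_next; x4, y4 = p1
    d1x, d1y = x2 - x1, y2 - y1          # direction of the first tangent
    d2x, d2y = x4 - x3, y4 - y3          # direction of the second tangent
    denom = d1x * d2y - d1y * d2x        # 2D cross product
    if abs(denom) < 1e-12:
        return None                      # parallel tangents
    t = ((x3 - x1) * d2y - (y3 - y1) * d2x) / denom
    return (x1 + t * d1x, y1 + t * d1y)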
Imagine a 3-point Bézier curve (start A, middle B, end C).
Imagine a straight line from A to C.
Imagine a straight line that is perpendicular to AC and goes through point B.
Those two lines cross at point D.
The Bézier curve will pass EXACTLY halfway between D and B. In other words, if you want a quadratic Bézier that goes through three given points, you must place the middle control point twice as far from D (the midpoint of AC) as the actual second point is.
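In code, that rule works out to b = 2p - d, where d is the midpoint of AC and p is the point you want the curve to pass through at t = 0.5. A minimal sketch:

def quad_through(a, p, c):
    # Control point b of a quadratic Bezier from a to c that hits p at t = 0.5.
    d = ((a[0] + c[0]) / 2.0, (a[1] + c[1]) / 2.0)   # midpoint of AC
    return (2.0 * p[0] - d[0], 2.0 * p[1] - d[1])

def quad_eval(a, b, c, t):
    # Evaluate the quadratic Bezier with control polygon a, b, c at t.
    u = 1.0 - t
    return (u * u * a[0] + 2 * u * t * b[0] + t * t * c[0],
            u * u * a[1] + 2 * u * t * b[1] + t * t * c[1])

# Example: a, p, c = (0, 0), (1, 2), (2, 0) gives b = (1, 4),
# and quad_eval(a, b, c, 0.5) == (1.0, 2.0) as expected.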