I have two ellipsoids in R3 described in terms of their centre points (P), their axes lengths (a,b,c), and their rotation vector (R). I wish to interpolate a tubular structure between these two ellipsoids along a given centre line. This is done by creating an ellipsoid centred at each point along the centre line. Its axes lengths are interpolated linearly between those at the two endpoints, and the rotation is obtained as a quaternion using spherical linear interpolation, or SLERP.
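Schematically, the per-point interpolation I am describing looks roughly like this (a simplified sketch only, not the actual ellipsoidSLERP code linked below):

% Sketch: interpolate one ellipsoid at parameter t in [0,1] between the two
% reference ellipsoids. axes0/axes1 are the endpoint axes lengths [a b c],
% q0/q1 are unit quaternions describing the endpoint rotations.
function [axesT, qT] = interpEllipsoid(axes0, axes1, q0, q1, t)
    axesT = (1 - t) * axes0 + t * axes1;       % linear interpolation of axes lengths
    % spherical linear interpolation (SLERP) of the rotations
    d = dot(q0, q1);
    if d < 0                                   % take the shorter arc
        q1 = -q1;  d = -d;
    end
    if d > 0.9995                              % nearly parallel: fall back to linear
        qT = (1 - t) * q0 + t * q1;
    else
        theta = acos(d);
        qT = (sin((1 - t) * theta) * q0 + sin(t * theta) * q1) / sin(theta);
    end
    qT = qT / norm(qT);                        % renormalise to unit length
end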
I previously asked a similar question on this problem here. I have since isolated the issue a little further, and thought it warranted a new post. The difference here is that before doing SLERP, I first rotate the two reference ellipsoids by the inverse of the rotation matrix that describes one of them, such that one of them is now axis-aligned (i.e. has no rotation). Previously this appeared to solve the problem, but I have encountered an example where this fix does not work.
The source code to reproduce this issue is available here. The relevant function is ellipsoidSLERP and the functions it calls. Here is a screenshot of the output:
What you are seeing is an interpolation of ellipsoid volumes (blue) between two reference ellipsoid volumes at either end (green) along a centreline (cyan).
Problem Statement
The interpolation on the left works correctly, resulting in a smooth tubular structure. The interpolation on the right does not work correctly, and results in a twist.
What is causing this behaviour, and how can I correct it?
Please let me know if there's anything I can do to clarify.
I am reviewing some MATLAB code that is publicly available at the following location:
https://github.com/mattools/matGeom/blob/master/matGeom/geom2d/orientedBox.m
This is an implementation of the rotating calipers algorithm on the convex hull of a set of points, used to compute an oriented bounding box. I reviewed it to understand intuitively how the algorithm works, but I would like clarification on certain lines within the file that I find confusing.
On line 44: hull = bsxfun(@minus, hull, center);. This appears to translate all the points in the convex hull set so that the computed centroid is at (0,0). Is there any particular reason why this is done? My only guess is that it allows straightforward rotational transforms later in the code, since rotating about the true origin would cause significant problems.
On lines 71 and 74: indA2 = mod(indA, nV) + 1; and indB2 = mod(indB, nV) + 1;. Is this a trick to prevent the access index from going out of bounds, i.e. rolling the index over to 1 upon reaching the end?
On line 125: y2 = - x * sit + y * cot;. This is the correct transformation, since the code behaves properly, but I am not sure why it is used here and why it differs from the other rotational transforms done earlier and later (with the calls to rotateVector). My best guess is that I am simply not visualizing the required rotation correctly.
Side note: The external function calls vectorAngle, rotateVector, createLine, and distancePointLine can all be found in the same repository, in files named after the function (as per the MATLAB standard). They are relatively uninteresting and do what you would expect, aside from some normalization of vector angles.
I'm the author of the above piece of code, so I can give some explanations about it:
First of all, the algorithm is indeed a rotating calipers algorithm. In the current implementation, only the width (the extent perpendicular to the current edge) is tested (I did not check the west and east vertices). Actually, the two results seem to correspond most of the time.
Line 44 -> the goal of translating to the origin is to improve numerical accuracy. When a polygon is located far away from the origin, its coordinates may be large yet close together. Many computations involve products of coordinates. By translating the polygon to the origin, the coordinates are smaller, and the precision of the resulting products is expected to improve. Well, to be honest, I did not observe this effect directly; this is more a matter of careful coding than a fix…
Lines 71 and 74 -> yes. The idea is to find the index of the next vertex along the polygon. If the current vertex is the last vertex of the polygon, then the next vertex index should be 1. The modulo rescales the index to the range 0 to N-1, and adding 1 gives the correct cyclic iteration.
Line 125 -> there are several transformations involved. The rotateVector() function is used simply to compute the minimal width for a given edge. On line 125, the points of the convex hull are rotated to align with the "best" direction (the one that minimizes the width). The last change of coordinates (lines 132->140) is due to the fact that the center of the oriented box differs from the centroid of the polygon, so a shift is added, which is then corrected by the rotation.
I did not really look at the code; this is an explanation of how the rotating calipers work.
A fundamental property is that the tightest bounding box is such that one of its sides overlaps an edge of the hull. So what you do is essentially
try every edge in turn;
for a given edge, viewed as horizontal and lying at the south, find the farthest vertices to the north, west and east;
evaluate the area or the perimeter of the rectangle that they define;
remember the best area.
It is important to note that when you switch from an edge to the next, the N/W/E vertices can only move forward, and are readily found by finding the next decrease of the relevant coordinate. This is how the total processing time is linear in the number of edges (the search for the initial N/E/W vertices takes 3(N-3) comparisons, then the updates take 3(N-1)+Nn+Nw+Ne comparisons, where Nn, Nw, Ne are the number of moves from a vertex to the next; obviously Nn+Nw+Ne = 3N in total).
The modulos are there to implement the cyclic indexing of the edges and vertices.
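To make the edge-by-edge procedure above concrete, here is a brute-force MATLAB sketch (O(N^2), i.e. without the linear-time caliper update, and not the matGeom implementation): for each hull edge, rotate the hull so that edge is horizontal and measure the axis-aligned extent.

% hull is an N-by-2 array of convex-hull vertices in order around the hull.
function [bestArea, bestTheta] = orientedBoxBruteForce(hull)
    nV = size(hull, 1);
    bestArea = inf;  bestTheta = 0;
    for i = 1:nV
        j = mod(i, nV) + 1;                    % cyclic index of the next vertex
        edge = hull(j, :) - hull(i, :);
        theta = atan2(edge(2), edge(1));
        R = [cos(-theta) -sin(-theta); sin(-theta) cos(-theta)];
        rotated = hull * R';                   % align the current edge with the x-axis
        extent = max(rotated) - min(rotated);  % width and height of the aligned box
        area = extent(1) * extent(2);
        if area < bestArea                     % remember the best box
            bestArea = area;  bestTheta = theta;
        end
    end
end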
I have a surface created by the code below, and another surface created by the exact same code. I want to see the height differences between them in another figure. How can I do that? I already tried the minus operator, but that does not work.
Furthermore, the matrices do NOT have the same size!
I appreciate your help!
x1 = Cx1;
y1 = Cy1;
z1 = Cz1;
tri1 = delaunay(x1,y1);   % triangulate the scattered (x,y) points
fig1 = figure; %('units','normalized','outerposition',[0 0 1 1]);
trisurf(tri1,x1,y1,z1)    % plot surface 1 using its own triangulation and coordinates
xlabel('x [mm] ','FontSize',30)
ylabel('y [mm] ','FontSize',30)
zlabel('z [mm] ','FontSize',30)
The simplest way to solve this problem is to interpolate from one mesh onto the other one. Such an approach works well when one is more highly resolved than the other, or when you're not as concerned with results at individual nodes, but rather the overall pattern across elements. If that's not the case, then you have a very complicated problem because you need to create a polygonal surface that fully captures all nodes and edges of both triangulations. Consider the following pair of triangular patterns:
A surface that captured all the variations would need to have all the vertices and edges that make up both of them, which is not a purely triangular surface. So, let us instead assume the easier case. To map results from one triangulation to the other, you simply need to formulate functions that define how the values vary along the triangles, more broadly called basis functions. It is often assumed that values between the nodes (i.e. vertices) of the triangles vary linearly along the surfaces of the triangles. You can do it differently if you want; it just requires defining new basis functions. If we go for linear functions, then the equations in 2D are pretty simple. Let's say you make an array trimap that stores, for each vertex of the other triangulation, which triangle it lies inside. This can be accomplished using the info here. Then we set the coordinates of the vertices of the current triangle to (x1,y1), (x2,y2), and (x3,y3), and do the math:
for cnt1=1:npoints
    % vertices of the triangle containing point cnt1
    x1=x(tri1(trimap(cnt1),1));
    x2=x(tri1(trimap(cnt1),2));
    x3=x(tri1(trimap(cnt1),3));
    y1=y(tri1(trimap(cnt1),1));
    y2=y(tri1(trimap(cnt1),2));
    y3=y(tri1(trimap(cnt1),3));
    % barycentric (area) coordinates of (xstat,ystat) within that triangle
    delta=x2*y3+x1*y2+x3*y1-x2*y1-x1*y3-x3*y2;
    delta1=(x2*y3-x3*y2+xstat(cnt1)*(y2-y3)+ystat(cnt1)*(x3-x2));
    delta2=(x3*y1-x1*y3+xstat(cnt1)*(y3-y1)+ystat(cnt1)*(x1-x3));
    delta3=(x1*y2-x2*y1+xstat(cnt1)*(y1-y2)+ystat(cnt1)*(x2-x1));
    weights(cnt1,1)=delta1/delta;
    weights(cnt1,2)=delta2/delta;
    weights(cnt1,3)=delta3/delta;
    % linear interpolation of the nodal z values using those weights
    z1=z(tri1(trimap(cnt1),1));
    z2=z(tri1(trimap(cnt1),2));
    z3=z(tri1(trimap(cnt1),3));
    valinterp(cnt1)=sum(weights(cnt1,:).*[z1,z2,z3]);
end
valinterp is the interpolated value at each point. Here and here are some nice slides explaining the mathematics behind all this. Note that I've not tested any of this code. Note also that you will need to do something to assign values to points outside of the triangulation, perhaps a null value or an inverse-distance weighted value.
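One possible way to build trimap (an assumption on my part, not the only option) is MATLAB's tsearchn, which returns, for each query point, the index of the enclosing triangle, or NaN if the point lies outside the triangulation:

% x, y and tri1 describe the source triangulation; xstat, ystat are the query points
trimap  = tsearchn([x(:) y(:)], tri1, [xstat(:) ystat(:)]);
outside = isnan(trimap);   % handle these separately, e.g. with a null value
                           % or an inverse-distance weighted value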
I want to translate a set of reference points on a contour to a set of corresponding target points. There are 8 points in total on each contour.
In order to calculate the rotation and translation, I was using the Math.Net Numerics library to perform the SVD calculation. The idea came from this URL (pages 3-7):
But somehow I noticed that the transformation done using the result of the SVD calculation seems inaccurate. The result is shown below:
The transform is supposed to move the reference points as close as possible to the target points, but as highlighted, it moves far away from the target point.
In addition, I also did a simple test in which I calculated the centroid of both contours and took the difference (TargetCentroid - RefCentroid = translation vector). The final transformation result is the same as going through SVD.
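For reference, here is a MATLAB-style sketch of the computation I am performing (my real code uses Math.Net Numerics in C#; the variable names here are illustrative). ref and target are 8-by-2 arrays of corresponding contour points:

refC = mean(ref, 1);                   % centroids of both contours
tgtC = mean(target, 1);
A = bsxfun(@minus, ref, refC);         % centred point sets
B = bsxfun(@minus, target, tgtC);
H = A' * B;                            % cross-covariance matrix
[U, ~, V] = svd(H);
R = V * U';                            % rotation
if det(R) < 0                          % guard against a reflection
    V(:, end) = -V(:, end);
    R = V * U';
end
t = tgtC' - R * refC';                 % translation
moved = bsxfun(@plus, R * ref', t)';   % transformed reference points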
Did I do something wrong? Can anyone suggest a better solution to transform the reference points to the target points?
Edit:
1. Garment transformation from reference model to various target models
This seems like an overcomplicated solution to the problem.
If you have the target points, you can just Lerp the given points to their corresponding target points.
Or if the target is the same mesh but with a different scale and rotation, as in the picture, you can just Lerp the transform values (the scale and rotation, respectively) without needing to go over all the points individually.
Using Vector3.Lerp
Edit:
Additionally, lerping will cause all the points to reach their targets at the same time, which is, in most cases, the desired behavior.
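The idea, written as a MATLAB-style sketch just to illustrate it (in Unity you would call Vector3.Lerp per point or on the transform; ref and target are illustrative names for N-by-3 arrays of corresponding points):

t = 0.25;                              % 0 = reference shape, 1 = target shape
points = (1 - t) * ref + t * target;   % every point reaches its target at t = 1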
So, this is going to be pretty hard for me to explain or detail, since I only think I know what I'm asking; I could be asking it with bad wording, so please bear with me and ask questions if need be.
Currently I have a 3D vector field being plotted, which corresponds to 40 levels of wind vectors in 3D space. These levels are plotted individually and then stacked on top of each other using a dummy altitude for now (we're still debating how to handle the pressure-to-altitude conversion most accurately, so don't worry about that here). The goal is to start at a point within the vector space, model that point as a particle that can experience physics, and iteratively move through the vector field reacting to the forces, thus creating a trajectory of sorts through the vector field.
Currently what I'm trying to do is write code that would allow me to start at a point within this field, calculate the forces that the particle would feel at that point, and then establish a resultant force vector that indicates the next direction of movement through the vector space.
Right now I'm stuck on the theoretical aspects of the code, as I'm trying to think through how the particle would feel vectors at a distance.
Any suggestions on ways to attack this problem within MATLAB, or relevant equations to use?
In order to run my code, you'll need read_grib.r4 and to compile that mex file; here is a link to a zip with the code and the required files:
https://www.dropbox.com/s/uodvixdff764frq/WindSim_StackOverflow_Files.zip
I would try to interpolate the wind vector from the adjacent ones. You seem to have a regular grid, so that should be no problem. (You can use interp3 for this.)
Afterwards, you can use any differential-equation solver for your problem, as you basically have a field of gradients and an initial value. Forward Euler would be the simplest one, but it needs a small step size. (N.B.: your field should be a gradient field.)
You may read about this in Wikipedia: http://en.wikipedia.org/wiki/Vector_field#Flow_curves
In response to comment #1:
Yes. In a regular grid, any (arbitrarily chosen) point will have eight neighbors. interp3 will do a trilinear interpolation between them to determine an interpolated gradient vector.
If you use forward Euler, you then move a small distance in that direction. There you interpolate a new gradient, take a small step in this new direction, and so on. Two things happen:
You get a series of points that lie on a streamline and thus form the trajectory of a particle moving along the field
You get large errors; the further you move and the larger the step size, the larger these errors become. Use a small step size or a better solver (Runge-Kutta comes to mind).
If all you want is plotting, then the streamline function might help.
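A minimal sketch of the interp3 plus forward Euler approach described above, assuming the wind components U, V, W are given on a regular meshgrid X, Y, Z and the starting position is known (all names here are illustrative):

p  = [x0, y0, z0];                      % initial particle position
dt = 1.0;                               % step size; smaller is more accurate
nSteps = 500;
traj = zeros(nSteps, 3);
for k = 1:nSteps
    u = interp3(X, Y, Z, U, p(1), p(2), p(3));   % trilinear interpolation of the
    v = interp3(X, Y, Z, V, p(1), p(2), p(3));   % local wind vector
    w = interp3(X, Y, Z, W, p(1), p(2), p(3));
    p = p + dt * [u, v, w];             % forward Euler step along that vector
    traj(k, :) = p;
end
plot3(traj(:,1), traj(:,2), traj(:,3)); % the resulting streamline/trajectory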
I am simulating a system where I need a Direction Cosine Matrix (DCM) to quaternion conversion. I use the default DCM to quaternion conversion block available in Simulink. However, at some points of the simulation, the output quaternion components reverse sign.
Unfortunately I cannot attach the plot image.
Though this is mathematically correct, I would like a smooth change. Any idea on how to avoid this and obtain a smooth curve for the quaternion?
Update 1:
http://tinypic.com/view.php?pic=33dayap&s=6
Above is the simulated plot. The first plot shows the output quaternion; the second plot shows the Direction Cosine Matrix. As you can see, even though the DCM components change smoothly, the quaternion changes sign abruptly.
The problem arises because of the double-covering property of quaternions: two unit quaternions, q and -q, correspond to every rotation. At some point, according to some rule, the MATLAB implementation switches from one quaternion to the other. There is not much you can do about it.
A workaround would be to write your own rotation-matrix-to-quaternion conversion and pick whichever of the two possible representations is closer to the previous one, thereby avoiding the sudden jumps. It's messy.
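A minimal sketch of that workaround, assuming q is the current 1x4 quaternion produced by your own conversion and qPrev is the quaternion kept from the previous simulation step (both names are illustrative):

if dot(q, qPrev) < 0
    q = -q;        % -q represents the same rotation, but keeps the curve continuous
end
qPrev = q;         % remember it for the next step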
Plotting the quaternions is typically not needed in practical applications. Most likely you are rotating an object / vector. If you plot that object / vector (or some projections of it) you won't get any sudden jumps even if there are jumps in the representation of the rotation. Another benefit of plotting the projections of the rotated object is that it is usually much easier to interpret these plots than the quaternions. I don't know whether it makes sense in your application; it worked beautifully in mine.