The given parametrization of points on the line L(t,θ) is

x(s) := t cos θ − s sin θ
y(s) := t sin θ + s cos θ,

where t is the distance from the origin to the line at angle θ from the x-axis, and s is the (signed) position along the line.
How do I take projections of an image Img along this line L(t,θ) with a specific step size s? Using this, I then have to implement a Radon transform.
My question is: how do I define the step size s and the value of t?
Also, do I need to rotate Img, or is it possible without rotation?
Please help.
I suggest you have a look at the multiple open-source packages that are around.
In tomography, rotating the image is the same as rotating the machine around it, so you could change the source/detector position per angle and then compute the line that joins them. The step size is then up to you. Research has shown (and I have tested this) that a good value is s = pixel_size/2, i.e. 0.5 if you are working in standard pixel units.
If you are doing 2D parallel beam, then you can forget about all the geometric transformations and generate projections using imrotate. If you are using fan beam or cone beam, then the code gets a bit more complicated.
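For the 2D parallel-beam case, a minimal sketch of the imrotate approach (assuming MATLAB with the Image Processing Toolbox and a grayscale image Img; the variable names and angle-sign convention are mine):

thetas = 0:179;                               % projection angles in degrees
sino = zeros(size(Img, 1), numel(thetas));    % one projection per column
for k = 1:numel(thetas)
    % Rotate the image instead of the line, so every projection
    % becomes a simple sum along the rows
    rot = imrotate(Img, -thetas(k), 'bilinear', 'crop');
    sino(:, k) = sum(rot, 2);                 % line integrals at this angle
end
% Sanity check: compare against MATLAB's built-in radon(Img, thetas).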
I am making a game in the Unity engine where a car moves along a Bézier curve by a percentage of the curve's length.
On this image you can see the curve with 8 stop points (yellow spheres). Between consecutive stop points there is a gap of 20% of the total distance.
In the image above everything works correctly, but when I move the handles so that they have different lengths, a problem occurs.
As you can see in the image above, the distances between stop points are not equal. This is caused by my algorithm: I find a point on a segment by multiplying the segment length by the interpolation parameter t. In short, the problem is that t = 0.5 does not lie at 50% of the segment's length. In the first image the stop points are at the half of each segment, but in the second image they are not. This problem would be fixed if there were a mathematical formula for finding the distance midpoint.
As you can see in the image above, there are two midpoints. The t-parameter midpoint can be found by setting t to 0.5 (this is what I am doing now), but it is not at half of the distance.
How can I find the distance midpoint (for a cubic Bézier curve whose handles have different lengths)?
You have correctly observed that the parameter value t = 0.5 is generally not the point at the middle of the length. That is a good start, but the difficulty lies in the mathematics beneath.
Denoting the components of your parametric curve by x(t) and y(t), the length of the curve between t = 0 (the beginning) and a chosen parameter value t = u is equal to

l(u) = ∫_0^u sqrt(x'(t)^2 + y'(t)^2) dt.
What you are trying to do is to find u such that l(u) is one half of l(1). This is sometimes possible but often difficult or impossible, because the integral rarely has a closed form. So what can you do?
One possibility is to approximate the point you want. A straightforward way is to approximate your Bézier curve by a piecewise-linear curve: simply choose many parameter values 0 = t_0 < t_1 < ... < t_n = 1 and connect the curve points at these parameters by line segments. Now it is easy to compute the entire length (the Pythagorean theorem is your friend) as well as the midpoint (walk along the piecewise-linear curve for the prescribed length). The more points you sample, the more precise the result and the longer the computation, so there is a trade-off. Of course, you can use a more sophisticated numerical scheme for the approximation, but that is beyond the scope of this answer.
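To illustrate, here is a minimal MATLAB-style sketch of the piecewise-linear approach, assuming the four control points are the rows of a 4-by-2 matrix P (the names are mine):

n = 1000;                          % number of samples: precision/speed trade-off
t = linspace(0, 1, n + 1)';
B = (1-t).^3 .* P(1,:) + 3*(1-t).^2 .* t .* P(2,:) ...
  + 3*(1-t) .* t.^2 .* P(3,:) + t.^3 .* P(4,:);   % sampled points on the curve
seg = sqrt(sum(diff(B).^2, 2));    % segment lengths (Pythagorean theorem)
len = [0; cumsum(seg)];            % accumulated length at each parameter value
u = interp1(len, t, len(end)/2);   % parameter value of the length midpoint
M = interp1(len, B, len(end)/2);   % approximate coordinates of that midpoint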
The second possibility is to restrict yourself to a subclass of Bézier curves that will let you do what you want. These are called Pythagorean-Hodograph (shortly PH) curves. They have the extremely useful property that there exists a polynomial sigma(t) such that

x'(t)^2 + y'(t)^2 = sigma(t)^2.
This means that you can compute the integral above and search for the correct value of u. However, everything comes at a price, and the price here is that you will have less freedom in where to put the control points (for me as a mathematician, a cubic Bézier curve has four control points; computer graphics people often speak of "handles", so you might have to translate into your terminology). For the cubic case you can find the conditions on slide 15 of this seminar talk by Vito Vitrih.
Denote by P0, P1, P2, P3 the control points, by L1, L2, L3 the lengths of the control polygon legs P1 − P0, P2 − P1, P3 − P2, and by theta1, theta2 the control polygon angles at P1 and P2; then the Bézier curve is a PH curve if and only if

L2 = sqrt(L1 * L3) and theta1 = theta2.
It is up to you to figure out whether you can enforce this condition in your situation or whether it is too restrictive for your application.
Explanation of the problem:
I have points with (x,y,z) coordinates at two+ distinct times. For convenience, they can be imagined as irregularly spaced points along the surface of an inverted paraboloid.
There is some minimal thickness to the paraboloid. The paraboloid changes shape slightly as time proceeds (like a balloon inflating) and when it does so, all of the points move.
By subtracting the coordinates at the two times (time2 − time1), I can get the displacement vector at each point.
It is important to note (and I suspect this might be the source of the problem) that at the first time point, the x and y coordinates range from 0 to 2000, while the z coordinates all lie within a narrower range, say 350 to 450. During the deformation, each point has x, y, and z components of displacement.
The x and y components are small (~50 at most), while the z component is the largest (goes up to 400 near the center, much less near the edges).
Using weighted moving least squares at the location of each point, I am trying to fit each component of the displacement to a second-degree polynomial surface in terms of the original (x, y, z) coordinates of the point, e.g.

x component of displacement = ax^2 + bxy + cx + dy^2 + ... + hz^2 + iz + j
I use the lsqr function in MATLAB, like so, looping through each point for each time interval:

Ux = displacements{k,1}(:,1);   % x components of displacement for interval k
% Solve the weighted normal equations (B' W B) Cx = (B' W) Ux for the
% polynomial coefficients Cx (tolerance 1e-7, max 10000 iterations)
Cx = lsqr((adjust_B_matrix'*W*adjust_B_matrix),(adjust_B_matrix'*W*Ux),1e-7,10000);
W is the weight matrix, and adjust_B_matrix is the matrix of all (x,y,z) coordinates at time 1, shifted so that they're all centered around the point at which I'm trying to fit the function.
What is going wrong?
It's just not working. Once I have the fitted functions, they are re-centered around the actual coordinates of the points. But when I plug the time-1 coordinates into the now-fitted functions and plot the resulting points (initial x + displacement_x, initial y + displacement_y, initial z + displacement_z), I just get a surface that looks like the surface at time 1.
What might be going wrong? Things I have tried:
It's not an issue with the code itself: I generated 'fake' data using a grid of points and it worked perfectly. The predicted locations were superimposed on the actual coordinates and I was able to recover the function I started with. However, in that trial example I used x, y, z from 0 to 5, evenly spaced.
Global fitting works (but I need local fitting...).
I tried MATLAB's Curve Fitting Toolbox and fitted one of the displacement components to only the x and y coordinates, globally. It worked perfectly.
I don't think I have a singular-matrix issue, because I use a large radius (around 75-80 points) in the calculations, somewhat dispersed in 3D space.
Suspicions:
I think it has to do with the uneven distribution of the initial (x,y,z) coordinates, but I don't know why, how to fix the issue, or even what method I could use.
If you read this far, thank you so much. Any advice would be greatly appreciated.
Figure for reference:
green = predicted points at time 2, mostly overlapping with red, the actual coordinates of the points at time 1.
blue = the correct coordinates of the points at time 2 (where the green points should be if things were working).
Updated link for files:
http://a.tmp.ninja/eWfkNmFZyTFk.zip
Contents - code, sample data (please load the .mat files).
I can't actually access the code you posted, so here are some general suggestions.
It does look like the Curve Fitting Toolbox has tools that do exactly what you are looking for; check out the bottom of this page: https://www.mathworks.com/help/curvefit/polynomial.html#bt9ykh.
It looks like, for whatever reason, your learned function for the displacement is just very small or zero everywhere. I suspect the issue is just a minor typo/error on your part somewhere in your pipeline; translating what you have to work with the fit function will possibly reveal the issue.
This really shouldn't be the issue, but in the future, if you have much more unbalanced data, you could normalize it all before fitting (x_norm = (x - x_mu)/x_std).
Also, I don't think this is your problem either, but you can check whether your matrix is close to singular by computing its condition number with the cond() function, e.g. cond(adjust_B_matrix'*W*adjust_B_matrix). Second, if you check the documentation for lsqr, there is an optional return flag that reports convergence; that is worth checking too.
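A minimal sketch of both checks, reusing the variable names from the question:

% Conditioning of the weighted normal-equations matrix
A = adjust_B_matrix' * W * adjust_B_matrix;
fprintf('Condition number: %g\n', cond(A));
% lsqr's second output is a convergence flag (0 means it converged
% to the requested tolerance within the iteration limit)
[Cx, flag] = lsqr(A, adjust_B_matrix' * W * Ux, 1e-7, 10000);
if flag ~= 0
    warning('lsqr did not converge (flag = %d)', flag);
end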
I want to translate a set of reference points on a contour to a set of corresponding target points. There are a total of 8 points on each contour.
In order to calculate the rotation and translation vector, I was using the Math.NET Numerics library to perform the SVD calculation. The idea came from this URL (pages 3-7):
But I noticed that the transformation computed from the SVD result seems inaccurate. The result is shown below:
The transform is supposed to move the reference points as close as possible to the target points, but as highlighted, it moves them far away from the target points.
In addition, I also did a simple test in which I calculated the centroid of each contour and subtracted them (TargetCentroid − RefCentroid = translation vector). The final transformation result is the same as the one obtained through SVD.
Did I do something wrong? Can anyone suggest a better solution for transforming the reference points to the target points?
Edit:
1. Garment transformation from reference model to various target models
This seems like an overcomplicated solution to the problem.
If you have the target points, you can just Lerp the given points to their corresponding target points.
Or if the target is the same mesh but with a different scale and rotation, as in the picture, you can just Lerp the transform values (scale and rotation, respectively) without needing to go over all the points individually.
Using Vector3.Lerp
Edit:
Additionally, lerping will cause all the points to reach their targets at the same time, which is, in most cases, the desired behavior.
How do you determine that the intrinsic and extrinsic parameters you have calculated for a camera at time X are still valid at time Y?
My idea would be:
1. Use a known calibration object (a chessboard) and place it in the camera's field of view at time Y.
2. Calculate the chessboard corner points in the camera's image (at time Y).
3. Define one of the chessboard corner points as the world origin and calculate the world coordinates of all remaining chessboard corners based on that origin.
4. Relate the coordinates of 3. with the camera coordinate system.
5. Use the parameters calculated at time X to calculate the image points of the points from 4.
6. Calculate the distances between the points from 2. and the points from 5.
Is that a clever way to go about it? I'd eventually like to implement it in MATLAB and later possibly OpenCV. I think I know how to do steps 1-2 and step 6. Maybe someone can give a rough implementation for steps 2-5? In particular, I'm unsure how to relate the "chessboard world coordinate system" to the "camera world coordinate system", which I believe I would have to do.
Thanks!
If you have a single camera you can easily follow the steps from this article:
Evaluating the Accuracy of Single Camera Calibration
For achieving step 2, you can easily use detectCheckerboardPoints function from MATLAB.
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageFileNames);
Assuming that you are talking about stereo cameras: for stereo pairs, imagePoints(:,:,:,1) are the points from the first set of images, and imagePoints(:,:,:,2) are the points from the second set. The output contains M [x y] coordinates, each representing a point where square corners are detected on the checkerboard. The number of points the function returns depends on the value of boardSize, which indicates the number of squares detected. The function detects the points with sub-pixel accuracy.
As you can see in the following image, the points are estimated relative to the first point, which covers your third step.
[The image is from this page at MATHWORKS.]
You can consider point 1 as the origin of your coordinate system (0,0). The directions of the axes are shown in the image, and you know the distance between each pair of points (in world coordinates), so it is just a matter of depth estimation.
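A rough sketch of steps 2-5 using Computer Vision Toolbox functions (the exact function set varies by MATLAB release; squareSize and the cameraParams from your time-X calibration are assumed to be known):

% Step 2: detect checkerboard corners in the new image I
[imagePoints, boardSize] = detectCheckerboardPoints(I);
% Step 3: world coordinates of the corners, with corner 1 as the origin
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
% Step 4: relate the chessboard world CS to the camera CS
[R, t] = extrinsics(imagePoints, worldPoints, cameraParams);
% Step 5: reproject the world points using the time-X parameters
reproj = worldToImage(cameraParams, R, t, ...
    [worldPoints, zeros(size(worldPoints, 1), 1)]);
% Step 6: per-corner distances in pixels
err = sqrt(sum((reproj - imagePoints).^2, 2));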
To find a transformation matrix between the points in the world CS and the points in the camera CS, you should collect a set of points and perform an SVD to estimate the transformation matrix.
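For reference, a minimal sketch of that SVD-based estimation (the Kabsch/Procrustes method; P and Q are hypothetical N-by-3 matrices of corresponding points):

function [R, t] = rigidTransform(P, Q)
% Rigid transform such that Q is approximately (R * P')' + t'
muP = mean(P, 1);  muQ = mean(Q, 1);   % centroids
H = (P - muP)' * (Q - muQ);            % cross-covariance matrix
[U, ~, V] = svd(H);
d = sign(det(V * U'));                 % guard against a reflection solution
R = V * diag([1, 1, d]) * U';
t = muQ' - R * muP';
end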
But,
I would estimate the parameters of the camera again and compare them with the initial parameters from time X. This is easier if you have saved the images that were used when calibrating the camera at time X. By repeating the calibration process using those images, you should get very similar results if the camera calibration is still valid.
Edit: why do you need the set of images used in the calibration process at time X?
You had a set of images to do the calibration the first time, right? To recalibrate the camera you need a new set of images, but for checking the previous calibration you can use the previous ones. If the parameters of the camera have changed, there will be an error between the re-estimation and the first estimation. This can be used for evaluating the validity of the calibration, not for recalibrating the camera.
I have a convex polygon in 3D. For simplicity, let it be a square with vertices (0,0,0), (1,1,0), (1,1,1), (0,0,1). I need to arrange these vertices in counterclockwise order. I found a solution here; it suggests determining the angle at the center of the polygon and sorting by it, but I am not clear on how that would work. Does anyone have a solution? I need one that is robust and works even when the vertices get very close together.
A sample MATLAB code would be much appreciated!
This is actually quite a tedious problem, so instead of actually doing it I am just going to explain how I would. First find the equation of the plane (you only need 3 points for this) and then find your rotation matrix. Then find your vectors in the new, rotated space. After that, determine which quadrant each point is in; if a quadrant contains more than one point, compute the angle of each point there (theta = arctan(y/x)) and sort that quadrant by angle. (Arguably, you can separate by pi instead of by quadrants: first split the points by whether the post-rotation y-component is greater than zero.)
Sorry, I don't have time to actually test this, but give it a go, and feel free to post your code; I can help debug it if you like.
Luckily you have a convex polygon, so you can use the angle trick: find a point in the interior (e.g., the midpoint of two non-adjacent vertices), and draw vectors to all the vertices. Choose one vector as a base, calculate the angles to the other vectors, and order them. You can calculate the angles using the dot product: A · B = |A||B| cos θ.
Below are the steps I followed.
The 3D planar polygon can be rotated to the 2D plane using the known formulas; use the one under the section Rotation matrix from axis and angle.
Then, as indicated by @Glenn, an internal point needs to be calculated to find the angles. I take that internal point as the mean of the vertex locations.
Using the x-axis as the reference axis, the angle for each vertex, on a 0 to 2*pi scale, can be calculated with the atan2 function, as explained here.
The non-negative angle measured counterclockwise from vector a to vector b, in the range [0,2pi], if a = [x1,y1] and b = [x2,y2], is given by:
angle = mod(atan2(y2-y1,x2-x1),2*pi);
Finally, sort the angles, [~,XI] = sort(angle);.
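Putting the steps together, a minimal sketch (V is an n-by-3 matrix of coplanar vertices; all names are mine):

nrm = cross(V(2,:) - V(1,:), V(3,:) - V(1,:));   % plane normal from 3 vertices
nrm = nrm / norm(nrm);
z = [0 0 1];
ax = cross(nrm, z);                              % rotation axis
if norm(ax) < eps
    V2 = V;       % normal already parallel to +/- z: plane is xy-parallel
else
    ax = ax / norm(ax);
    th = acos(dot(nrm, z));                      % rotation angle
    K = [0 -ax(3) ax(2); ax(3) 0 -ax(1); -ax(2) ax(1) 0];
    R = eye(3) + sin(th)*K + (1 - cos(th))*K^2;  % rotation matrix from axis and angle
    V2 = (R * V')';                              % vertices rotated into an xy-parallel plane
end
c = mean(V2(:, 1:2), 1);                         % internal point: mean of the vertices
ang = mod(atan2(V2(:,2) - c(2), V2(:,1) - c(1)), 2*pi);
[~, XI] = sort(ang);                             % counterclockwise order
Vccw = V(XI, :);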
It's a long time since I used this, so I might be wrong, but I believe the command convhull does what you need: it returns the convex hull of a set of points (which, since you say your points form a convex set, should be the points themselves), arranged in counterclockwise order.
Note that MathWorks recently delivered a new class, DelaunayTri, which is intended to supersede the functionality of convhull and other older computational-geometry features. I believe it's more accurate, especially when the points get very close together. However, I haven't tried it.
Hope that helps!
So here's another answer if you want to use convhull. You can easily project your polygon onto an axis plane by setting one coordinate to zero. For example, for (0,0,0), (1,1,0), (1,1,1), (0,0,1), set y = 0 to get (0,0), (1,0), (1,1), (0,1). Now your problem is 2D.
You might have to do some work to pick the right coordinate to drop: if your polygon's plane is orthogonal to some axis, pick that axis. The criterion is to make sure that your projected points don't end up on a line.
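A minimal sketch with the square from the question (the dropped coordinate here is y, chosen as discussed above):

V = [0 0 0; 1 1 0; 1 1 1; 0 0 1];   % vertices from the question
P = V(:, [1 3]);                     % drop y: project onto the xz-plane
K = convhull(P(:, 1), P(:, 2));      % hull indices in counterclockwise order
Vccw = V(K(1:end-1), :);             % convhull repeats the first index at the end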