I am implementing the algorithm for Photometric Stereo where I have already calculated the normals from a set of images with different light directions.
How can I plot the normal vector field in MATLAB? I have a matrix of normals of size (N x 3).
I'm afraid you have left out a step. You need to retrieve the depth map from the surface normals, and then you can start plotting. To see how to do this, you can check out section 4 of the following paper:
http://www.wisdom.weizmann.ac.il/~vision/photostereo/Photometric%20Stereo%20with%20General%20Unknown%20Lighting%20-%20BasriJacobsKemelmacher_ijcv06.pdf
There are other resources on the web too; I don't know of any built-in function in any MATLAB library, but I don't have the Computer Vision Toolbox, so who knows?
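For what it's worth, here is a very rough sketch of naive integration of the gradient field into a depth map, assuming the N x 3 normals have already been reshaped into an H-by-W-by-3 array n (that name and the reshaping are my assumptions). The paper above describes a proper least-squares integration; this cumulative-sum version is quite sensitive to noise:

p = -n(:,:,1) ./ n(:,:,3);            % dz/dx from the unit normals
q = -n(:,:,2) ./ n(:,:,3);            % dz/dy from the unit normals
[H, W] = size(p);
z = zeros(H, W);
z(1,:) = cumsum(p(1,:));              % integrate along the first row (x direction)
for r = 2:H
    z(r,:) = z(r-1,:) + q(r,:);       % then integrate down each column (y direction)
end
surf(z), shading interp, axis equal   % quick look at the recovered depth map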
I suspect you are looking for quiver3.
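If you also know the (x, y, z) position of the surface point each normal belongs to, a minimal quiver3 call looks like this (the position vectors x, y, z and the name Nrm are assumptions; only the N x 3 normals matrix comes from the question):

% x, y, z: N-by-1 positions of the surface points (assumed known);
% Nrm: the N-by-3 matrix of normals from photometric stereo.
quiver3(x, y, z, Nrm(:,1), Nrm(:,2), Nrm(:,3))
axis equal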
You need to present the normals field as a gradient field; then you can use MATLAB's quiver function. In a gradient field, the previously normalized triple {pn, qn, rn} of the data is rescaled so that its third component is always equal to one (at least in theory).
I mean with rn = 1, or rather with R = 1, you actually need only the {P, Q} components to present the contents of the gradient field with the ordinary 2D quiver function. Thus, the gradient vector is something quite different and distinct from the normals field, because, pointwise:
P = pn/sqrt(pn^2 + qn^2 + rn^2), and Q = qn/sqrt(pn^2 + qn^2 + rn^2).
However, you do not need to bother with double for loops over the X and Y directions, because the pointwise calculation of the gradient field from the normals can be written in vectorized form:
P = pn./(pn.^2 + qn.^2 + rn.^2).^(1/2); and Q = qn./(pn.^2 + qn.^2 + rn.^2).^(1/2);
You can see as well:
http://www.mathworks.com/matlabcentral/fileexchange/authors/126090/
Briefly, the gradient field always represents the slopes in the X and Y directions while descending exactly one height unit along the Z axis of the 3D surface retrieved with, for instance, a Photometric Stereo algorithm. That is why the third component in the quiver visualization is always equal to one (i.e. R = 1) and practically irrelevant.
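As a hedged sketch of that idea, assuming the normals have been reshaped into H-by-W arrays pn, qn, rn and using the common convention of dividing by the third component so that it becomes exactly one:

P = pn ./ rn;                           % slope components; the third component becomes 1
Q = qn ./ rn;
[X, Y] = meshgrid(1:size(P,2), 1:size(P,1));
quiver(X, Y, P, Q)                      % 2D arrow plot of the gradient field
axis equal tight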
Last month I posted some code for the simplest Photometric Stereo methods on the MathWorks pages, once I finally had some time available to tidy up my own MATLAB code...
I have a surface created by the code below, and another surface created by the exact same code. I want to see the height differences between them in another figure. How can I do that? I already tried the minus operator, but that does not work.
Furthermore, the matrices do NOT have the same size!
I appreciate your help!
x1 = Cx1;
y1 = Cy1;
z1 = Cz1;
tri1 = delaunay(x1,y1);                % triangulate the scattered (x1,y1) points
fig1 = figure; %('units','normalized','outerposition',[0 0 1 1]);
trisurf(tri1,x1,y1,z1)                 % plot the surface over that triangulation
xlabel('x [mm] ','FontSize',30)
ylabel('y [mm] ','FontSize',30)
zlabel('z [mm] ','FontSize',30)
The simplest way to solve this problem is to interpolate from one mesh onto the other one. Such an approach works well when one is more highly resolved than the other, or when you're not as concerned with results at individual nodes, but rather the overall pattern across elements. If that's not the case, then you have a very complicated problem because you need to create a polygonal surface that fully captures all nodes and edges of both triangulations. Consider the following pair of triangular patterns:
A surface that captured all the variations would need to have all the vertices and edges that make up both of them, which is not a purely triangular surface. So, let us instead assume the easier case.
To map results from one triangulation to the other, you simply need to formulate functions that define how the values vary along the triangles, which are more broadly called basis functions. It is often assumed that values between the nodes (i.e. vertices) of the triangles vary linearly along the surfaces of the triangles. You can do it differently if you want; it just requires defining new basis functions. If we go for linear functions, then the equations in 2D are pretty simple.
Let's say you make an array trimap that records which triangle of the source triangulation each vertex of the other triangulation lies inside. This can be accomplished using the info here. Then, we set the coordinates of the vertices of the current triangle to (x1,y1), (x2,y2), and (x3,y3), and then do the math:
for cnt1=1:npoints
    % vertices of the triangle (in the source triangulation) containing point cnt1
    x1=x(tri1(trimap(cnt1),1));
    x2=x(tri1(trimap(cnt1),2));
    x3=x(tri1(trimap(cnt1),3));
    y1=y(tri1(trimap(cnt1),1));
    y2=y(tri1(trimap(cnt1),2));
    y3=y(tri1(trimap(cnt1),3));
    % barycentric (area) coordinates of the query point (xstat,ystat) in that triangle
    delta=x2*y3+x1*y2+x3*y1-x2*y1-x1*y3-x3*y2;
    delta1=(x2*y3-x3*y2+xstat(cnt1)*(y2-y3)+ystat(cnt1)*(x3-x2));
    delta2=(x3*y1-x1*y3+xstat(cnt1)*(y3-y1)+ystat(cnt1)*(x1-x3));
    delta3=(x1*y2-x2*y1+xstat(cnt1)*(y1-y2)+ystat(cnt1)*(x2-x1));
    weights(cnt1,1)=delta1/delta;
    weights(cnt1,2)=delta2/delta;
    weights(cnt1,3)=delta3/delta;
    % linear interpolation of the nodal z values using those weights
    z1=z(tri1(trimap(cnt1),1));
    z2=z(tri1(trimap(cnt1),2));
    z3=z(tri1(trimap(cnt1),3));
    valinterp(cnt1)=sum(weights(cnt1,:).*[z1,z2,z3]);
end
valinterp is the interpolated value for each point. Here and here are some nice slides explaining the mathematics behind all this. Note that I've not tested any of this code. Note also that you will need to do something to assign values to points outside of the triangulation, perhaps a null value or an inverse-distance weighted value.
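As a hedged, untested sketch of how the trimap array might be built with MATLAB's triangulation/pointLocation (names follow the code above; pointLocation returns NaN for query points outside the triangulation, which is where the fallback just mentioned would apply):

TR = triangulation(tri1, x(:), y(:));              % source mesh as a triangulation object
trimap = pointLocation(TR, xstat(:), ystat(:));    % enclosing triangle per query point
outside = isnan(trimap);                           % points outside the source triangulation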
I have 8 plots which I want to use in my MATLAB code. These plots originate from several research papers; hence, I need to digitize them first in order to be able to use them.
An example of a plot is shown below:
This is basically a surface plot with three different variables. I know how to digitize a regular plot with just X and Y coordinates, but how would one digitize a graph like this? I am quite unsure, hence the question.
Also, if I were able to obtain the data from this plot, how could I utilize it in my code? Maybe with some interpolation and extrapolation between the given data points?
Any tips regarding this topic are welcome.
Thanks in advance
Here is what I would suggest:
1. Read the image into MATLAB using imread.
2. Manually find the pixel positions of the bottom-left corner and the top-right corner of the axes.
3. Using these pixel positions and the corresponding real numerical values, it is simple to determine the x and y value of every pixel. I suggest you use meshgrid.
4. Knowing that the curves are in black, remove every non-black pixel from the image, which leaves you only with the curves and the numbers.
5. Then use the function bwareaopen to remove the small objects (the numbers). Don't forget to invert the image so that you remove the black instead of the white.
6. Finally, by using the mapping from point #3 and the cleaned image from point #5, you can manually extract the data of the graph. It won't be easy, but it is feasible (a rough sketch of these steps follows below).
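A hedged sketch of steps 1-6; the file name, corner pixel coordinates, axis limits, darkness threshold, and blob size are all made-up values you would replace after inspecting your own image:

img = imread('digitized_plot.png');                % step 1 (file name is an assumption)
% Step 2: pixel coordinates of the axes corners, found by hand (e.g. with imtool)
colLeft = 80;   colRight = 620;                    % columns of the x-min and x-max corners
rowBottom = 450; rowTop = 40;                      % rows of the y-min and y-max corners
xMin = 0; xMax = 10; yMin = 0; yMax = 5;           % real axis limits read off the plot
% Step 3: real-world coordinates of every pixel
[cols, rows] = meshgrid(1:size(img,2), 1:size(img,1));
X = xMin + (cols - colLeft)   * (xMax - xMin) / (colRight - colLeft);
Y = yMin + (rows - rowBottom) * (yMax - yMin) / (rowTop  - rowBottom);
% Steps 4-5: keep only near-black pixels, then drop small blobs (the numbers)
gray = rgb2gray(img);                              % assumes an RGB image
curves = gray < 50;                                % darkness threshold (arbitrary)
curves = bwareaopen(curves, 100);                  % drop objects smaller than 100 px
% Step 6: read off the surviving pixels as data points
xData = X(curves); yData = Y(curves);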
You will need the data for the three variables in order to create a plot in MATLAB, which you can get either from the original research or by estimating and interpolating values from the plot. Once you have the data, there are two functions you can use to make surface plots, surface and surf; surf is pretty much the same as surface but includes shading.
For interpolation and extrapolation, it sounds like you might want to check out 2D interpolation with interp2. The interp2 function can do extrapolation as well.
You should read the documentation for these functions and then post back with specific problems if you have any.
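For instance, a hedged sketch that assumes you have already built coarse gridded samples xGrid, yGrid, zGrid (with meshgrid) from the digitized values, refines them with interp2, and plots the result with surf ('spline' is one of the interp2 methods that also extrapolates):

[xFine, yFine] = meshgrid(linspace(min(xGrid(:)), max(xGrid(:)), 200), ...
                          linspace(min(yGrid(:)), max(yGrid(:)), 200));
zFine = interp2(xGrid, yGrid, zGrid, xFine, yFine, 'spline');   % refine the coarse grid
surf(xFine, yFine, zFine)
shading interp
xlabel('x'), ylabel('y'), zlabel('z')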
I wanted to translate a set of reference points on a contour to a set of corresponding target points. There are 8 points in total on each contour.
In order to calculate the rotation and translation vector, I was using the Math.Net Numerics library to perform the SVD calculation. The idea came from this URL (pages 3-7):
But somehow I noticed that the transformation done using the result of the SVD calculation seems inaccurate. The result is shown below:
The transform is supposed to move the reference points as close as possible to the target points, but as highlighted, it moves far away from the target point.
In addition, I also did a simple test whereby I calculated the centroid of both contours and took the difference (TargetCentroid - RefCentroid = translation vector). The final transformation result is the same as going through the SVD.
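For reference, here is a MATLAB sketch of the standard SVD-based rigid alignment that the linked notes describe (my own reconstruction under assumed 8-by-2 point matrices P and Q, not the actual Math.Net code):

cP = mean(P, 1);  cQ = mean(Q, 1);    % centroids of reference and target points
Pc = P - cP;      Qc = Q - cQ;        % centred point sets (implicit expansion, R2016b+)
Hc = Pc' * Qc;                        % 2-by-2 cross-covariance matrix
[U, ~, V] = svd(Hc);
R = V * U';                           % rotation
if det(R) < 0                         % correct an accidental reflection
    V(:,end) = -V(:,end);
    R = V * U';
end
t = cQ' - R * cP';                    % translation
mapped = (R * P' + t)';               % reference points mapped onto the target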
Did I do something wrong? Can anyone suggest a better solution to transform the reference points to the target points?
Edit:
1. Garment transformation from reference model to various target models
This seems like an over complicated solution to the problem.
If you have the target points, you can just Lerp the given points to their corresponding target points.
Or if the target is the same mesh but with a different scale and rotation, as in the picture, you can just Lerp the transform values, scale and rotation respectively, without the need to go over all the points individually.
Using Vector3.Lerp
Edit:
Additionally, lerping will cause all the points to reach their targets at the same time, which is, in most cases, the desired behavior.
I am currently doing some image segmentation on a bone qCT picture; see for instance the images below.
I am trying to find the different borders in the picture, for instance the outer border separating the bone from the noisy background. In this analysis I am getting a list of points (vec(1,:) containing the x values and vec(2,:) containing the y values) in random order.
To get them into order I am using a block of code which effectively takes the first point vec(1,1), vec(2,1), finds the closest point among the rest of the points in the vector, and then repeats.
Now my problem is that I want to smooth the data but how do I do that as the points lie in a circular formation? (I do have the Curve Fitting Toolbox)
Not exactly a smoothing procedure, but a way to simplify your data would be to compute the boundary of the convex hull of the data.
K = convhull(O(1,:), O(2,:));
plot(O(1,K), O(2,K));
You could also consider using alpha shapes if you want more control.
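A minimal sketch with alphaShape (R2014b or newer); the alpha radius of 5 is just an assumed starting value to tune:

shp = alphaShape(O(1,:)', O(2,:)', 5);   % smaller alpha hugs the points more tightly
figure, plot(shp)                        % the plot method of alphaShape shows the region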
I am trying to plot x and y velocities using quiver function in MATLAB.
I have x, y, u and v arrays (with their usual meanings), each of dimension 100x100.
As a result, my quiver plot is so dense that I cannot see the arrows unless I zoom in.
Somewhat like this: quiver not drawing arrows just lots of blue, matlab
Take a look at my plot:
Is there any way to make the quiver plot less dense (and with bigger arrows)? I am planning to clip the x-axis range to 0-4, but is there anything else apart from that?
I cannot make my mesh less dense for accuracy concerns. I am, however willing to ignore some fine data points if that's required to make the plot look better.
You can plot a reduced number of arrows by plotting, for example (assuming your data are in arrays):
quiver(x(1:2:end,1:2:end),y(1:2:end,1:2:end),u(1:2:end,1:2:end),v(1:2:end,1:2:end))
where the 2 in this example means we plot only a quarter as many arrows. You can of course change it, as long as you change all of the 2's so that the arrays are all appropriately sized.
If you want to change the length of the arrows, there are two options. Firstly, you can pass a scale factor as the last argument, for example quiver(x,y,u,v,2), to scale the arrows by the amount specified. Or you can normalise the velocities if you want all the arrows to have the same length. You do lose information doing that, because you can no longer compare the magnitudes of the velocities by looking at the arrows, but it may be useful in some situations. You can do this by dividing u and v both by sqrt(u.^2+v.^2) (at the points you wish to plot arrows at).
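Putting the two ideas together, a hedged sketch (the step of 2 and the final scale factor of 0.5 are arbitrary choices):

step = 2;                                        % keep every 2nd point in each direction
xs = x(1:step:end, 1:step:end);  ys = y(1:step:end, 1:step:end);
us = u(1:step:end, 1:step:end);  vs = v(1:step:end, 1:step:end);
mag = sqrt(us.^2 + vs.^2);                       % velocity magnitude at the kept points
mag(mag == 0) = 1;                               % avoid dividing by zero at stagnant points
quiver(xs, ys, us./mag, vs./mag, 0.5)            % unit-length arrows, scaled by 0.5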
Hope that helps and sets everything out nicely.
You need to make your sampling interval a bit larger in order to make your matrix less dense.
This is very dense:
1:0.0001:100
This is very sparse:
1:1:100
EDIT:
If you have the Image Processing Toolbox you can use the imresize function to reduce the matrix resolution:
newMat = imresize(oldMat, newSize);
And if you don't have the Toolbox, then you can resize in a similar manner using interp2 interpolation, as in this example:
orgY = 1:size(oldMat,1);
orgX = 1:size(oldMat,2);
[orgX, orgY] = meshgrid(orgX, orgY);
newY = linspace(1, size(oldMat,1), newHeight);
newX = linspace(1, size(oldMat,2), newWidth);
[newX, newY] = meshgrid(newX, newY);
newMat = interp2(orgX, orgY, oldMat, newX, newY);
And thanks to @David, if you want to just strip out some individual points you can simply do:
xPlot=x(1:2:end)