I have a cloud of points that lie randomly on the surface of a 3D object. The object is a CAD model and can be saved as STL. The point cloud is obtained from ray tracing, with each point representing the power of light absorbed when a ray partially reflects off the surface. I would like to visualize the absorbed intensity on the object using ParaView.
So, input: [x, y, z, p] + STL. (x, y, z) are guaranteed to lie on the object surface, but are likely slightly off the STL due to it being an approximation of the real surface.
Desired output: a colored STL image, with each surface element colored according to the total absorbed power in that element divided by its area.
Optional: ideally, the data should be smoothed, with something like a sliding average or a Gaussian blur.
Difficulty: the main problem I am facing, independent of ParaView, is that I don't know the intensity, only the power. I can calculate the intensity myself, e.g. in MATLAB, but I get poor MATLAB graphics (compared to ParaView) and a very noisy image (because of random fluctuations of intensity between pixels due to the finite number of rays). ParaView seems to be doing magic, so I hope to solve this problem with it.
Can I do the above with ParaView, without programming a new filter / with minimum programming?
I have just discovered ParaView, so please excuse a very novice question. Googling for an answer didn't help; hopefully I didn't miss it due to poor wording.
The ResampleWithDataset filter (listed as "Resample With Dataset" in the Filters menu) should let you resample your point cloud onto the STL.
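A minimal sketch of that pipeline in ParaView's Python shell (paraview.simple), assuming the cloud is stored in a CSV file with columns named x, y, z, p. Note that the ResampleWithDataset property names below (SourceDataArrays, DestinationMesh) are those of recent ParaView versions; older releases use Input and Source instead:

from paraview.simple import *

cloud = CSVReader(FileName=['points.csv'])        # hypothetical [x, y, z, p] file
points = TableToPoints(Input=cloud,
                       XColumn='x', YColumn='y', ZColumn='z')
surface = STLReader(FileNames=['model.stl'])      # the CAD surface

# interpolate the point data onto the STL mesh
resampled = ResampleWithDataset(SourceDataArrays=points,
                                DestinationMesh=surface)

display = Show(resampled)
ColorBy(display, ('POINTS', 'p'))
Render()

For the optional smoothing, the Point Dataset Interpolator filter with a Gaussian kernel might be worth trying in place of plain resampling, though I have not verified that against data like yours.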
Related
I am trying to compute 3D coordinates from several pairs of two views.
First, I used the MATLAB function estimateFundamentalMatrix() to get the fundamental matrix F of the matched points (more than 8), which is:
F1 =[-0.000000221102386 0.000000127212463 -0.003908602702784
-0.000000703461004 -0.000000008125894 -0.010618266198273
0.003811584026121 0.012887141181108 0.999845683961494]
And my camera - the one that took these two pictures - was pre-calibrated with the intrinsic matrix:
K = [12636.6659110566, 0, 2541.60550098958
0, 12643.3249022486, 1952.06628069233
0, 0, 1]
From this information I then computed the essential matrix using:
E = K'*F*K
By decomposing E with SVD, I finally got the camera projection matrices:
P1 = K*[ I | 0 ]
and
P2 = K*[ R | t ]
Where R and t are:
R = [ 0.657061402787646 -0.419110137500056 -0.626591577992727
-0.352566614260743 -0.905543541110692 0.235982367268031
-0.666308558758964 0.0658603659069099 -0.742761951588233]
t = [-0.940150699101422
0.320030970080146
0.117033504470591]
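For reference, here is a minimal NumPy sketch (my own illustration, not the asker's code) of the standard SVD decomposition of E into the four candidate (R, t) pairs, following Hartley and Zisserman, ch. 9:

import numpy as np

def decompose_essential(E):
    U, _, Vt = np.linalg.svd(E)
    # enforce proper rotations (determinant +1)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    R1 = U @ W @ Vt                  # first candidate rotation
    R2 = U @ W.T @ Vt                # second candidate rotation
    t = U[:, 2]                      # translation, known only up to scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

The correct pair among the four is normally chosen by cheirality: triangulate a point and keep the solution that places it in front of both cameras.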
I know there should be 4 possible solutions; however, my computed 3D coordinates seem to be incorrect.
I used the camera to take pictures of a FLAT object with marked points, and I matched the points by hand (so there should be no obvious mistakes in the raw data). But the result turned out to be a surface with a slight bend.
I guess this might be because the pictures were not corrected for lens distortion (but actually I remember that I did correct them).
I just want to know whether this method of solving the 3D reconstruction problem is right, especially when the camera intrinsic matrix is already known.
Edit by JCraft, Aug. 4: I have redone the process and got some pictures showing the problem; I will write another question with details and post the link.
Edit by JCraft, Aug. 4: I have posted a new question: Calibrated camera get matched points for 3D reconstruction, ideal test failed. And @Schorsch, I really appreciate your help formatting my question. I will try to learn how to format input on SO and also try to improve my grammar. Thanks!
If you only have the fundamental matrix and the intrinsics, you can only get a reconstruction up to scale; that is, your translation vector t is in unknown units. You can get the 3D points in real units in several ways:
You need to have some reference points in the world with known distances between them. This way you can compute their coordinates in your unknown units and calculate the scale factor that converts your unknown units into real units (see the sketch after this list).
You need to know the extrinsics of each camera relative to a common coordinate system. For example, you can have a checkerboard calibration pattern somewhere in your scene that you can detect and compute extrinsics from. See this example. By the way, if you know the extrinsics, you can compute the Fundamental matrix and the camera projection matrices directly, without having to match points.
You can do stereo calibration to estimate the R and the t between the cameras, which would also give you the Fundamental and the Essential matrices. See this example.
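As a hedged sketch of the first option (reference points with known distances), assuming you have already triangulated a 3-by-N array of points X in the unknown units:

import numpy as np

def metric_scale(Xa, Xb, true_distance):
    # scale factor mapping reconstruction units to real units, given two
    # reconstructed 3-D points whose true separation is known
    return true_distance / np.linalg.norm(np.asarray(Xa) - np.asarray(Xb))

# hypothetical usage: points i and j are known to be 0.25 m apart
# s = metric_scale(X[:, i], X[:, j], 0.25)
# X_metric = s * X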
Flat objects are critical surfaces: it is not possible to achieve your goal from them alone. Try adding two (or more) points off the plane (see Hartley and Zisserman, or another text on the matter, if still interested).
I'm receiving depth images from a ToF camera in MATLAB. The camera's drivers, which compute x, y, z coordinates from the depth image, use OpenCV functions wrapped in MATLAB via MEX files.
Later on, however, I won't be able to use those drivers or any OpenCV functions, so I need to implement the 2D-to-3D mapping on my own, including the compensation of radial distortion. I have already obtained the camera parameters, and the computation of the x, y, z coordinates for each pixel of the depth image is working. Until now I have been solving the implicit undistortion equations with Newton's method (which isn't really fast...), but I would like to implement the undistortion the way the OpenCV function does it.
... and there is my problem: I don't really understand it, and I hope you can help me out. How does it actually work? I tried to search through the forum, but haven't found any useful threads concerning this case.
greetings!
The equations of the projection of a 3D point [X; Y; Z] to a 2D image point [u; v] are provided on the documentation page related to camera calibration:
[x; y; z] = R * [X; Y; Z] + t
x' = x / z
y' = y / z
r^2 = x'^2 + y'^2
x'' = x' * (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6) + 2*p1*x'*y' + p2*(r^2 + 2*x'^2)
y'' = y' * (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6) + p1*(r^2 + 2*y'^2) + 2*p2*x'*y'
u = fx * x'' + cx
v = fy * y'' + cy
(source: opencv.org)
In the case of lens distortion, the equations are non-linear and depend on 3 to 8 parameters (k1 to k6, p1 and p2). Hence, it would normally require a non-linear solving algorithm (e.g. Newton's method, the Levenberg-Marquardt algorithm, etc.) to invert such a model and estimate the undistorted coordinates from the distorted ones. This is what is used behind the function undistortPoints, with tuned parameters making the optimization fast but a little inaccurate.
However, in the particular case of image lens correction (as opposed to point correction), there is a much more efficient approach based on a well-known image re-sampling trick. The trick is that, in order to obtain a valid intensity for each pixel of your destination image, you have to transform coordinates in the destination image into coordinates in the source image, and not the opposite as one would intuitively expect. In the case of lens distortion correction, this means that you actually do not have to invert the non-linear model, but just apply it.
Basically, the algorithm behind the function undistort is the following. For each pixel of the destination lens-corrected image, do:
Convert the pixel coordinates (u_dst, v_dst) to normalized coordinates (x', y') using the inverse of the calibration matrix K,
Apply the lens-distortion model, as displayed above, to obtain the distorted normalized coordinates (x'', y''),
Convert (x'', y'') to distorted pixel coordinates (u_src, v_src) using the calibration matrix K,
Use the interpolation method of your choice to find the intensity/depth associated with the pixel coordinates (u_src, v_src) in the source image, and assign this intensity/depth to the current destination pixel.
Note that if you are interested in undistorting the depth-map image, you should use nearest-neighbour interpolation; otherwise you will almost certainly interpolate depth values across object boundaries, resulting in unwanted artifacts.
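A minimal NumPy sketch of this loop (my own illustration, vectorized over all destination pixels, assuming zero skew and only the k1, k2, p1, p2, k3 coefficients for brevity, with the nearest-neighbour sampling recommended above):

import numpy as np

def undistort_image(src, K, dist):
    # src: distorted source image; K: 3x3 calibration matrix;
    # dist: (k1, k2, p1, p2, k3), ordered as in OpenCV
    h, w = src.shape[:2]
    k1, k2, p1, p2, k3 = dist
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    # 1. destination pixel grid -> normalized coordinates (inverse of K)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx
    y = (v - cy) / fy

    # 2. apply the (forward) lens-distortion model
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y

    # 3. distorted normalized coordinates -> source pixel coordinates
    u_src = fx * x_d + cx
    v_src = fy * y_d + cy

    # 4. nearest-neighbour lookup (the right choice for depth maps)
    ui = np.clip(np.round(u_src).astype(int), 0, w - 1)
    vi = np.clip(np.round(v_src).astype(int), 0, h - 1)
    return src[vi, ui]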
The above answer is correct, but do note that UV coordinates are in screen space and centered around (0,0) instead of "real" UV coordinates.
Source: own re-implementation using Python/OpenGL. Code:
import numpy as np

def correct_pt(uv, K, Kinv, ds):
    # uv: (N, 2) array of pixel coordinates; ds: distortion coefficients
    # ordered as in OpenCV, i.e. (k1, k2, p1, p2, k3)
    uv_3 = np.stack((uv[:, 0], uv[:, 1], np.ones(uv.shape[0])), axis=-1)
    xy_ = uv_3 @ Kinv.T                          # normalized coordinates (x', y', 1)
    r = np.linalg.norm(xy_[:, 0:2], axis=-1)     # radial distance from x', y' only
    coeff = 1 + ds[0] * r**2 + ds[1] * r**4 + ds[4] * r**6
    xy__ = xy_.copy()
    xy__[:, 0:2] *= coeff[:, np.newaxis]         # scale x', y'; keep w = 1
    return (xy__ @ K.T)[:, 0:2]                  # back to pixel coordinates
I have multiple plants in a single binary image. How would I identify each leaf in the image, assuming that each leaf is approximately elliptical?
example input: http://i.imgur.com/BwhLVmd.png
I was thinking a good place to start would be finding the tip of each leaf and the center of each plant; then I could fit curves starting from each tip and going toward the center. I've been looking online and saw something involving a watershed method, but I do not know where to begin with that idea.
You should be aware that these things are tricky to get working robustly - there will always be a failure case.
This said, I think your idea is not bad.
You could start as follows:
Identify the boundary curve of each plant (i.e. pixels with both foreground and background in their neighbourhood).
Compute the centroid of each plant.
Convert each plant boundary to a polar coordinate system, with the centroid as the origin. This amounts to setting up a coordinate system with the distance of each boundary curve point on the Y axis and the angle on the X axis.
In this representation of the boundary curve, try to identify maxima; these are the tips of the leaves. You will probably need to do some smoothing. Use the parts of the curve before and after each maximum to start fitting your ellipses or some other shape; a rough sketch of these steps follows below.
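A rough sketch of the steps above (my own illustration, assuming NumPy and SciPy are available and mask is the boolean binary mask of a single plant):

import numpy as np
from scipy.ndimage import binary_erosion
from scipy.signal import find_peaks

def leaf_tips(mask, smooth=5):
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()               # centroid of the plant
    # boundary pixels: foreground removed by a one-pixel erosion
    boundary = mask & ~binary_erosion(mask)
    by, bx = np.nonzero(boundary)
    # polar representation of the boundary around the centroid
    theta = np.arctan2(by - cy, bx - cx)
    rad = np.hypot(by - cy, bx - cx)
    order = np.argsort(theta)
    # smooth the r(theta) curve, then take maxima as candidate tips
    rad_smooth = np.convolve(rad[order], np.ones(smooth) / smooth, mode='same')
    peaks, _ = find_peaks(rad_smooth)
    return bx[order][peaks], by[order][peaks]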
Generally, a polar coordinate system is always useful for analysing things that are roughly circular.
To fit your ellipses, once you have a rough initial position, I would probably try an EM-style approach.
I would do something like this (I is your binary image):
I = bwmorph(bwmorph(I, 'bridge'), 'clean');  % bridge small gaps, remove isolated pixels
SK = bwmorph(I, 'skel', Inf);                % skeletonize each plant
endpts = bwmorph(SK, 'endpoints');           % skeleton endpoints ~ candidate leaf tips
props = regionprops(I, 'All');               % region properties, including centroids
And then connect every segment from the centroids listed in props.Centroid to the elements of endpts; that should give you your leaves (petals?).
A bit of filtering is probably necessary, bwmorph is your friend. Have fun!
I have N 3D observations taken from an optical motion capture system in XYZ form.
The motion that was captured was just a simple circular arc, produced by a rigid body with a fixed axis of rotation.
I used the princomp function in MATLAB to get all marker points onto the same plane, i.e. the plane in which the motion was performed.
(See a pic representing 3D data on the plane that was found, below)
What I want to do after the previous step is to look at the fitted data on the plane that was found and get the curve of the captured motion in 2D.
In the princomp documentation, it is said that
The first two coordinates of the principal component scores give the
projection of each point onto the plane, in the coordinate system of
the plane.
(from "Fitting an Orthogonal Regression Using Principal Components Analysis" article on mathworks help site)
So I thought that if I just plot those PC scores - plot(score(:,1), score(:,2)) - I'll get the motion curve. Instead, what I got is this.
(See a pic representing curve data in 2D derived from pc scores, below)
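For reference, the projection being described corresponds to this hedged NumPy sketch, equivalent to plotting the first two princomp scores in MATLAB (the file name is made up):

import numpy as np

X = np.loadtxt('markers.txt')        # hypothetical N x 3 array of XYZ samples
Xc = X - X.mean(axis=0)              # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                   # principal-component scores
# scores[:, 0] and scores[:, 1] are the in-plane 2-D coordinates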
The 2D curve seems stretched and is not even a function (different y values for the same x values), when it shouldn't be. The curve I am looking for should be easy to fit with just a simple polynomial (polyfit) or a circle fit in MATLAB.
Is this happening because the plane that was found looks like a rhombus relative to the original coordinate system, and the PC axes are rotated with respect to the basis of the plane in such a way that they produce this stretch?
Then I thought that this might be happening because of the different coordinate systems of the optical system and MATLAB. The optical system's (i.e. the cameras') coordinate system is XZY-oriented, while MATLAB's default is (I think) XYZ-oriented. I transformed my data to match MATLAB's coordinate system through a rotation matrix and ran princomp again, but I got the same stretch in the 2D curve (the new curve just had a different orientation).
Somewhere else I read that
Principal Components Analysis chooses the first PCA axis as that line
that goes through the centroid, but also minimizes the square of the
distance of each point to that line. Thus, in some sense, the line is
as close to all of the data as possible. Equivalently, the line goes
through the maximum variation in the data. The second PCA axis also
must go through the centroid, and also goes through the maximum
variation in the data, but with a certain constraint: It must be
completely uncorrelated (i.e. at right angles, or "orthogonal") to PCA
axis 1.
I know that I am missing something, but I have a problem understanding why I get a stretched curve. What do I have to do to get the curve right?
Thanks in advance.
EDIT: Here is a sample data file (3 columns of XYZ coords for 2 markers):
www.sendspace.com/file/2hiezc
Anyone have any starting tips for me? I want to learn from this (i.e. I don't want to be lazy and have someone answer it for me).
I would like to develop my understanding of mathematical 3D surfaces. My own personal project is to produce a 3D surface/graph of the concourse structure in MATLAB.
I found a link with good pictures of its geometry here. I am not expecting to get it 100% perfectly but I'd like to come close!
At the end of this exercise I would like to have a mathematical definition of the geometry as well as a visual representation of the surface. This can involve Cartesian equations, parametric equations, matrices, etc.
Any help would be very much appreciated!
To give some specific advice for MATLAB:
I would load in the 'section' image from the web page you have linked, and display this in a MATLAB figure window. You can then try plotting lines over the top until you find one that fits nicely. So you might do something like:
A = imread('~/Desktop/1314019872-1244-n364-1000x707.jpg');
imshow(A)
hold on
axis on
% my guess at the function - obviously not a good fit
x = 550:900;
plot(x, 0.0001*x.^2 + 300)
Of course, you might want to move the position of the origin or crop the picture and so on.
As an arguably better alternative to this trial-and-error method, you could trace the outline of the section (e.g. by clicking points with something like ginput), and then use one of MATLAB's curve-fitting tools (e.g. fit) to fit a function to the data.
The final 3D shape looks to me (at a casual glance) like a surface of revolution of the section shape around a central axis. Use of a cylindrical coordinate system could therefore be a good idea.
The final plotting of your 3D shape could be done with a function such as surf or mesh.
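Here is the same idea in Python/Matplotlib for illustration (MATLAB's surf/mesh are the direct analogues); the parabolic profile below is a made-up stand-in for whatever section function you end up fitting:

import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(0.0, 1.0, 50)
r = 1.0 - 0.5 * z**2                    # hypothetical fitted section profile r(z)
theta = np.linspace(0.0, 2.0 * np.pi, 60)
T, Z = np.meshgrid(theta, z)            # grid in cylindrical coordinates
R = np.tile(r[:, None], (1, theta.size))
X, Y = R * np.cos(T), R * np.sin(T)     # cylindrical -> Cartesian

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(X, Y, Z)
plt.show()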
I would start by defining a function that specifies, for each (x, y) coordinate, whether there is a surface point above it and, if so, at what height z.
The shape reminds me a bit of a logarithm or a square root.