Binary mask in image space from point cloud data - matlab

Hi!
I have a point cloud representing walls and the floor of an indoor scene.
I projected the points of the floor onto a plane, so it is effectively a "2D point cloud" now.
All z-coordinates are zero.
I have to deal with missing parts. My idea now is to fill the holes.
Is there a way to transform the points into the image space to create a
binary mask? I would like to use MATLAB functions such as imfill on it.
Thanks
Edit:
To make it clearer, I will explain a simple example. I have points in 2D. After computing a triangulation, I can access each triangle. For each triangle I create a binary mask with poly2mask(), and I write each mask into a final image.
Here is an example:
Now I can use morphological operations on the image.
E.g., here is a more complex example, where the triangulation gives me bad results:
To fill the hole on the right side, I could use morphological operations.
My problem: the points can have negative coordinates, and the distances between points can be very small. For example, a triangle with x-coordinates (1.12, 1.14, 1.12) collapses onto pixel 1 in image space.
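A common workaround is to shift and scale the point coordinates to a pixel grid with an explicit resolution before rasterizing. Below is a minimal sketch of that idea; the resolution value, the use of delaunay (replace it with your own triangulation), and the structuring-element size are illustrative assumptions, not a definitive recipe.

% pts: Nx2 projected floor points (x, y), possibly with negative values
res = 0.01;                                       % chosen world units per pixel
pix = round((pts - min(pts, [], 1)) / res) + 1;   % shift to positive, scale, make 1-based

tri = delaunay(pts(:,1), pts(:,2));               % placeholder for your own triangulation
sz  = max(pix, [], 1);                            % [max x-index, max y-index]
mask = false(sz(2), sz(1));                       % rows correspond to y, columns to x
for k = 1:size(tri, 1)
    v = tri(k, :);
    mask = mask | poly2mask(pix(v,1), pix(v,2), sz(2), sz(1));
end

% with the mask in image space, morphological tools apply
mask = imclose(mask, strel('disk', 5));           % bridge small gaps (radius is a guess)
mask = imfill(mask, 'holes');

Keeping res and the minimum coordinates around lets you map any pixel of the mask back to world coordinates later.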

Related

Is it possible to find the depth of an internal point of an object using stereo images (or any other method)?

I have an image of a robot with yellow markers, as shown.
The yellow points shown are the markers. Two cameras, placed at an offset of 90 degrees, are used to view the robot, which bends in between them. A crude schematic of the setup:
https://i.stack.imgur.com/aVyDq.png
Using the two cameras I am able to get the 3D coordinates of the yellow markers. However, I need to find the 3D coordinates of the central point of the robot, as shown.
I need to find the 3D position of the red marker points, which lie inside the cylindrical robot. Firstly, is this even feasible? If so, what method can I use to achieve it?
As a bonus, is there any literature on finding the 3D location of such internal points that I can refer to? (I searched, but could not find anything similar to my problem.)
A theoretical solution is also welcome (as long as it finds the central point within a reasonable error), which I can later translate into code.
If you know the actual dimensions, or at least the shape (e.g. a perfect circle), of the white bands, then yes, it is feasible.
You need to do the following steps, which are quite non-trivial, and I won't carry them out here:
Optional but strongly suggested: calibrate your camera and undistort the images.
1. Find the equation of the projection of a 3D circle into a 2D camera, for any given rotation. You can simplify this by assuming the white band will be completely horizontal. You want some function that takes the parameters that define a circle plus a rotation (see the sketch after this list).
2. Find all white bands in the image, segment them, and make them horizontal (rotate them).
3. Fit the points of each corrected white band to the equation from step 1. That should give you the parameters of the circle in 3D (radius, angle), if you wrote the equation correctly.
4. Now that you have an analytic equation of the actual circle (the equation from step 1 with the parameters from step 3), you can map any point of this circle (e.g. its center) to its image location. Remember to undo the rotation applied in step 2.
This requires an understanding of curve fitting, some analytic geometry, and decent coding skills. Not trivial, but it will provide a highly accurate solution.
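As a rough illustration of step 1, here is a minimal sketch of such a projection function in MATLAB. The intrinsic matrix K, the circle parametrization, and the sampling are placeholder assumptions, not something given in the original answer.

% Project a 3D circle (center c, radius r, plane orientation R) into the image.
% K: 3x3 camera intrinsic matrix; R: 3x3 rotation of the circle's plane.
function uv = projectCircle(K, c, r, R, nSamples)
    theta = linspace(0, 2*pi, nSamples);
    P = c(:) + R * [r*cos(theta); r*sin(theta); zeros(1, nSamples)];   % 3xN points, camera frame
    p = K * P;                     % homogeneous pixel coordinates
    uv = p(1:2, :) ./ p(3, :);     % perspective division -> 2xN pixel positions
end

Step 3 then amounts to searching for the circle parameters and rotation whose projected samples best match the segmented band pixels, e.g. with a nonlinear least-squares solver such as lsqnonlin.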
For an inaccurate solution:
Find end points of white circles
Make line connecting endpoints
Choose the center as the midpoint of this line.
This will be inaccurate because choosing end points carries more error than fitting an equation to all the points, and it ignores both the cone-shaped view of the camera and the underlying geometry.
But it may be good enough for what you want.
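A minimal sketch of this shortcut, assuming the two end points of a band have already been triangulated to 3D (the variable names are illustrative):

% p1, p2: 3x1 triangulated 3D end points of one white band
center3d = (p1 + p2) / 2;    % crude estimate of the band's central point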
I have been able to extract the midpoint by fitting an ellipse to the arc visible to the camera. The centroid of the ellipse is the required midpoint.
There will be wrong ellipses as well, which can be ignored. The steps to extract the ellipse were:
Extract the markers
Binarise and skeletonise
Fit an ellipse to the arc (I found a MATLAB function for this)
Get the centroid of the ellipse
hsv_img = rgb2hsv(im);                    % im: input RGB image
bin = hsv_img(:,:,3) > marker_th;         % marker_th was chosen as 0.35
% skeletonise
skel = bwskel(bin);
% use regionprops to get the pixel list of each connected component
stats = regionprops(skel, 'PixelList');
for i = 1:numel(stats)
    el = fit_ellipse(stats(i).PixelList(:,1), stats(i).PixelList(:,2));
    ellipse_draw(el.a, el.b, -el.phi, el.X0_in, el.Y0_in, 'g');
end
The fit_ellipse and ellipse_draw functions are not built into MATLAB; links to both were given in the original answer.

Matching RGB image with point cloud

I have an RGB image and a point cloud acquired by LIDAR.
In the RGB image I detect a feature, let's say a circle.
I want to use this circle as a ROI in my 3d point cloud.
How can I do that? I was thinking of producing a 3D point cloud from the RGB image using the camera parameters and then matching the two with the ICP algorithm.
The problem is that the moment I produce the point cloud from the 2D image, my coordinate system changes, so I no longer know the position of my circle.
To perform the 3D reconstruction I use the triangulateMultiview function.
"I was thinking of producing a 3D point cloud from the RGB image using the camera parameters and then matching the two with the ICP algorithm."
-> this would not work well and would not be efficient.
Actually, there is a much better way. Assuming that you know the extrinsics between the camera and the lidar, any circle (or ellipse) in the image can be extended into a 3D cone using the camera intrinsics, and by selecting the points that fall inside the cone you get your ROI.
Let's say you can define an ellipse on your image plane by detecting it and estimating the parameters of its equation. The ellipse equation can be extended into a quadric (cone) equation representing the 3D cone. The only thing left is to test whether each 3D point lies within the cone by plugging it into the cone equation.
This is mathematically a little involved if you are not comfortable with the camera model or quadric equations.
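A minimal sketch of that test, assuming a pinhole intrinsic matrix K, a camera-from-lidar extrinsic (R, t), and ellipse parameters already estimated from the image; all variable names here are illustrative, not from the answer above.

% Ellipse in the image: center (x0, y0), semi-axes a, b, rotation phi.
% Build the 3x3 conic matrix C so that [u v 1] * C * [u v 1]' < 0 inside the ellipse.
D  = diag([1/a^2, 1/b^2, -1]);                       % canonical (axis-aligned, centered) ellipse
Rp = [cos(phi) -sin(phi); sin(phi) cos(phi)];
H  = [Rp, [x0; y0]; 0 0 1];                          % canonical -> pixel coordinates
C  = inv(H)' * D * inv(H);                           % conic in pixel coordinates

% Transform lidar points (Nx3) into the camera frame and test against the cone.
Xc = R * ptsLidar' + t;                              % 3xN points in camera coordinates
v  = K * Xc;                                         % homogeneous pixel coordinates
inCone = sum(v .* (C * v), 1) < 0 & Xc(3, :) > 0;    % inside the ellipse's cone, in front of camera
roi = ptsLidar(inCone, :);

Because the conic test is homogeneous, no perspective division is needed; the positive-depth check keeps only the forward half of the cone.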

Smoothing algorithm, 2.5D

The picture below shows a triangular surface mesh. Its vertices lie exactly on the surface of the original 3D object, but the straight edges and faces of course deviate from the original surface where it bends, so I need an algorithm to estimate the smooth original surface.
Details: I have a height field of (a projectable part of) this surface (a 2.5D triangulation where each x,y pair has a unique height z) and I need to compute the height z of arbitrary x,y pairs. For example the z-value of the point in the image where the cursor points to.
If it were a 2D problem, I would use cubic splines, but for surfaces I'm not sure what the best solution is.
As commented by @Darren, what you need are patches.
These can be bilinear patches, bi-quadratic patches, Coons patches, or others.
A quick search did not turn up many references, but these links help:
This one provides an overview: http://www.cs.cornell.edu/Courses/cs4620/2013fa/lectures/17surfaces.pdf
This one is more technical: https://www.doc.ic.ac.uk/~dfg/graphics/graphics2010/GraphicsHandout05.pdf
The idea is that you compute splines along the edges (the height as a function of position along the straight edge segment) and then blend them inside the surface delimited by those edges.
The patch is responsible for the blending, meaning that inside any face the height is a function of the point's position within the face and of the values of the spline segments defined on the edges of that same face.
To my knowledge this approach is quite easy to apply on a quadrilateral mesh (because it is clear along which sequences of edges to build the splines), while I am not sure how to apply it if you are forced to work with an actual triangulation.
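As an illustration of the blending idea on a single quadrilateral patch, here is a minimal bilinearly blended Coons interpolation of the height; the edge height curves and corner heights are assumed to be given (all names here are illustrative, not from the answer above).

% Bilinearly blended Coons patch for the height z over one quad face, (u, v) in [0, 1]^2.
% zBottom, zTop, zLeft, zRight: edge height curves (e.g. cubic splines fitted along each edge).
% z00, z10, z01, z11: heights at the four corners.
function z = coonsHeight(u, v, zBottom, zTop, zLeft, zRight, z00, z10, z01, z11)
    lofts   = (1-v).*zBottom(u) + v.*zTop(u) + (1-u).*zLeft(v) + u.*zRight(v);
    corners = (1-u).*(1-v)*z00 + u.*(1-v)*z10 + (1-u).*v*z01 + u.*v*z11;
    z = lofts - corners;   % subtract the corner term that the two lofts count twice
end

For the blend to interpolate the mesh vertices exactly, each edge curve must agree with the corner heights at its end points.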

Finding the centers of overlapping circles in a low resolution grayscale image

I am currently taking my first steps in the field of computer vision and image processing.
One of the tasks I'm working on is finding the center coordinates of (overlapping and occluded) circles.
Here is a sample image:
Here is another sample image showing two overlapping circles:
Further information about the problem:
Always a monochrome, grayscale image
Rather low resolution images
Radii of the circles are unknown
Number of circles in a given image is unknown
Center of circle is to be determined, preferably with sub-pixel accuracy
Radii do not have to be determined
Relatively low computational overhead is important; the processing is supposed to run on real-time camera images
For the first sample image, it is relatively easy to calculate the center of the circle by finding the center of mass. Unfortunately, this is not going to work for the second image.
Things I tried are mainly based on the Circle Hough Transform and the Distance Transform.
The Circle Hough Transform seemed relatively computationally expensive due to the fact that I have no information about the radii and the range of possible radii is large. Furthermore, it seems hard to identify the (appropriate) pixels along the edge because of the low resolution of the image.
As for the Distance Transform, I have trouble identifying the centers of the circles and the fact that the image needs to be binarized implies a certain loss of information.
Now I am looking for viable alternatives to the aforementioned algorithms.
Some more sample images (images like the two samples above are extracted from images like the following):
Just thinking aloud to try and get the ball rolling for you... I would start with a blob, or connected-component, analysis to separate out your blobs.
Then I would start looking at each blob individually. First thing is to see how square the bounding box is for each blob. If it is pretty square AND the centroid of the blob is central within the square, then you have a single circle. If it is not square, or the centroid is not central, you have more than one circle.
Now I am going to start looking at where the white areas touch the edges of the bounding box for some clues as to where the centres are...
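A minimal sketch of that first screening step, assuming a binarized image bw; the squareness and centering tolerances are made-up values.

% Separate the blobs and flag the ones that look like a single circle.
stats = regionprops(bw, 'BoundingBox', 'Centroid');
for k = 1:numel(stats)
    box = stats(k).BoundingBox;                          % [x y width height]
    aspect = box(3) / box(4);
    boxCenter = box(1:2) + box(3:4) / 2;
    offCenter = norm(stats(k).Centroid - boxCenter) / max(box(3:4));
    if abs(aspect - 1) < 0.1 && offCenter < 0.05
        center = stats(k).Centroid;                      % likely a single circle
    else
        % likely overlapping circles: inspect where the blob touches the bounding box
    end
end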

Calculate the perimeter of a shape specified with a cloud of points

I have a shape, which you can imagine as a lake in a field observed from the top (2D). I determined the border pixels of the shape after some image processing, so I have the coordinates of each border point.
Now I want to calculate the perimeter of this shape. My problem is that the points are not ordered along the boundary (which would give a closed loop); they are unordered.
How can a problem like this be solved in Matlab? (including Curve-Fitting-Toolbox etc.)
Thank you for any suggestions!
You can use the function regionprops for this.
Turn your image into a binary image with 1 inside your 'lake' and 0 outside (which you should easily be able to do, since you mention you extracted the boundary).
Then use:
props=regionprops(YourBinaryImage, 'Perimeter');
You can then access the perimeter as follows: props.Perimeter
If you have a set of 3D points with (x, y, z) coordinates, you can set z to zero and use the 2D (x, y) points to find the convex hull with convhull, regardless of their order.
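A minimal sketch of that idea; note the hull perimeter only matches the true perimeter if the lake boundary is (roughly) convex.

% x, y: unordered boundary point coordinates
k = convhull(x, y);                               % hull vertex indices, returned as a closed loop
perimeter = sum(hypot(diff(x(k)), diff(y(k))));   % sum of hull edge lengths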