I'm trying to find all the cylindrical objects in an image that represents a map. Only their edges are visible on the map, and they can be very faint.
The aim is to find their centers. All the objects are circles with a constant radius in the real world, but they are not perfectly represented in the map.
Here is an example of an image that I have to process:
I'm using MATLAB 2009b.
The Hough transform can be used to detect the shape of the objects. You can use MATLAB or OpenCV. Consider using OpenCV's GPU module if you are familiar with GPU libraries.
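Since the radius is constant, you only need the voting step of the circular Hough transform for that one radius, and MATLAB 2009b predates the built-in `imfindcircles` (it arrived around R2012a). Here is a minimal sketch of that voting step, in NumPy for illustration (the same loop translates directly to MATLAB; the function name and parameters are mine):

```python
import numpy as np

def hough_circle_fixed_radius(edge_mask, radius, n_angles=64):
    """Accumulate votes for circle centers at a known, fixed radius.

    edge_mask: 2D boolean array of detected edge pixels (e.g. from an
    edge detector). Each edge pixel votes for all candidate centers
    lying at `radius` from it; peaks in the accumulator are centers.
    """
    h, w = edge_mask.shape
    acc = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    for t in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        cy = np.round(ys - radius * np.sin(t)).astype(int)
        cx = np.round(xs - radius * np.cos(t)).astype(int)
        # Keep only votes that land inside the image.
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc
```

Even with broken or faint edges, the true center keeps collecting votes from whatever edge fragments survive, which is why the Hough transform tolerates poorly-drawn circles.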
Good luck.
Suppose I have an STL file which consists of vertex and face data for a given object. How can I calculate the area of the orthographic projection of the object at any arbitrary orientation, without using MATLAB graphics, just simple math with as few built-in MATLAB functions as possible? Keep in mind that this is different from a simple silhouette, as there can be gaps in the projection that aren't captured by the object's outer outline alone.
I originally did this by taking the pixel data from the plot and counting the non-white pixels. That worked great, but I need to be able to compile the code for use with C, and as far as I can tell any graphics/plotting can't be compiled with MATLAB Coder.
The image below is a good example (see the gap between the torso and the left arm). A torus is another good example geometry.
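One graphics-free approach is to project every triangle onto the viewing plane yourself and rasterize them all onto one occupancy grid: a cell counts once no matter how many triangles cover it, so interior gaps (torso/arm, torus hole) are handled correctly. A rough sketch of that idea, in NumPy for illustration (the same logic ports to plain MATLAB loops, which MATLAB Coder can compile; the function name, grid resolution, and basis construction are my choices):

```python
import numpy as np

def projected_area(verts, faces, view_dir, res=256):
    """Estimate the area of the orthographic projection of a mesh.

    verts: (N, 3) vertex array; faces: (M, 3) vertex indices;
    view_dir: projection direction. Rasterizes all projected
    triangles on a grid and counts covered cells.
    """
    d = np.asarray(view_dir, float)
    d /= np.linalg.norm(d)
    # Build an orthonormal basis (u, v) spanning the projection plane.
    a = np.array([1.0, 0, 0]) if abs(d[0]) < 0.9 else np.array([0, 1.0, 0])
    u = np.cross(d, a); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    p2 = verts @ np.column_stack([u, v])      # 2D projected vertices
    lo, hi = p2.min(0), p2.max(0)
    cell = (hi - lo).max() / res
    covered = np.zeros((res + 1, res + 1), dtype=bool)
    gy, gx = np.mgrid[0:res + 1, 0:res + 1]
    centers = lo + (np.stack([gx, gy], axis=-1) + 0.5) * cell
    for f in faces:
        t = p2[f]
        # Vectorized barycentric point-in-triangle test over the grid.
        d0, d1 = t[1] - t[0], t[2] - t[0]
        den = d0[0] * d1[1] - d0[1] * d1[0]
        if abs(den) < 1e-12:          # skip degenerate triangles
            continue
        q = centers - t[0]
        s = (q[..., 0] * d1[1] - q[..., 1] * d1[0]) / den
        w = (q[..., 1] * d0[0] - q[..., 0] * d0[1]) / den
        covered |= (s >= 0) & (w >= 0) & (s + w <= 1)
    return covered.sum() * cell * cell
```

The accuracy is controlled by `res`; the error shrinks roughly with the cell size, so you can trade speed for precision.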
I'd like to find a transformation that projects the image from the Left camera onto the image from the Right camera so that the two become aligned. I already managed to do that with two similar cameras (IGB and RGB), by using the disparity map and shifting each pixel by the corresponding disparity value. My problem is that this doesn't work for other cameras that I'm using (for example multispectral and infrared sensors), because the calculated disparity maps have very little detail. I am currently using the Matlab Computer Vision Tool Box, and I suspect that the problem is the poor correlation of information in the images (little correspondences found by the disparity algorithm).
I would like to know if there is another way of doing this transformation, for example just by using the extrinsic and intrinsic parameters of the cameras (they are already calibrated).
The disparity IS the transformation you are looking for. Any sensible left-right mapping depends on 3D information, which the disparity provides. Anything else is just hallucinating some values based on assumptions that may or may not make sense.
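To make that concrete: transferring even a single pixel from the left image to the right using only the calibration still requires a depth value for that pixel. A minimal NumPy sketch of the point transfer (illustrative, not the toolbox's API; the function name is mine):

```python
import numpy as np

def transfer_pixel(p_left, Z, K_left, K_right, R, t):
    """Map a left-image pixel to the right image, given its depth Z.

    p_left: (u, v) pixel; Z: depth along the left camera axis;
    K_*: 3x3 intrinsic matrices; R, t: rotation and translation
    taking left-camera coordinates to right-camera coordinates.
    """
    u, v = p_left
    # Back-project the pixel to a 3D point at depth Z ...
    X = Z * np.linalg.inv(K_left) @ np.array([u, v, 1.0])
    # ... then re-project it into the right camera.
    x = K_right @ (R @ X + t)
    return x[:2] / x[2]
```

The only depth-free shortcut is the planar-scene homography H = K_right (R - t n^T / d) K_left^-1, which is exact only when the whole scene lies on a single plane at distance d with normal n; for a general scene it produces exactly the "hallucinated" values the answer warns about.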
Is it possible to measure the ISNT (Inferior, Superior, Nasal, Temporal) distances by detecting the optic cup and optic disc with a circular Hough transform in the following image?
Over on the MathWorks File Exchange there is a function for detecting circles using the Hough transform. You should be able to calculate distances in pixels and translate those to whatever unit you wish, as long as you have a reference to go against.
I haven't personally used the function so I can't say if it is exactly what you need, but it's a place to start.
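Once the circles are detected, converting a pixel distance to physical units is just one scale factor, provided something of known real size (the reference mentioned above) is visible in the image. A trivial sketch, in Python for illustration (the MATLAB version is one line; the clinical reference length itself has to come from domain knowledge, not from the image):

```python
import numpy as np

def pixel_dist_to_mm(p1, p2, ref_len_px, ref_len_mm):
    """Convert a pixel distance to millimetres via a known reference.

    ref_len_px / ref_len_mm: the same reference structure measured
    in pixels and in millimetres; every distance scales identically.
    """
    d_px = np.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return d_px * ref_len_mm / ref_len_px
```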
Matlab's extractLBPFeatures (from R2015b) works only on 2D images but I need to extract Local Binary Pattern features from a CT image (3D).
There are other implementations available for the 2D version of LBP extraction... Is it possible to modify the 2D version to 3D without losing sanity? Example of a 2D implementation: http://www.cse.oulu.fi/CMV/Downloads/LBPMatlab
If someone comes across a program/algorithm that works on 3D images, please share.
There is no definition of LBP in 3D, only in 2D, for the simple reason that it is not clear which path to follow around a voxel to create the code.
However, you can compute the LBP code in each of the planes XY, XZ and YZ, so each voxel generates three codes instead of one.
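A rough NumPy sketch of that per-plane scheme (illustrative; this is the basic 8-neighbour LBP applied to each XY, XZ and YZ slice, and the function names are mine):

```python
import numpy as np

def lbp_8_neighbors(img):
    """Basic 8-neighbour LBP code for each interior pixel of a 2D slice."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the same slice: the neighbour at (dy, dx).
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_three_planes(vol):
    """One stack of LBP code maps per plane family (XY, XZ, YZ).

    Histogram and concatenate the three families to get a 3D
    texture descriptor (the 'three orthogonal planes' idea).
    """
    xy = [lbp_8_neighbors(vol[z, :, :]) for z in range(vol.shape[0])]
    xz = [lbp_8_neighbors(vol[:, y, :]) for y in range(vol.shape[1])]
    yz = [lbp_8_neighbors(vol[:, :, x]) for x in range(vol.shape[2])]
    return xy, xz, yz
```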
Using stereo vision, and based on the Multiple View Geometry book (http://www.robots.ox.ac.uk/~vgg/hzbook/), I have created a 3D point cloud in MATLAB. To do that, I first calibrated the cameras and rectified the stereo images, then extracted and matched features, then eliminated the noisy matches based on the camera locations, and finally created the 3D point cloud using triangulation.
Now my question is how to convert this 3D point cloud from pixel domain to actual millimeter/centimeter domain knowing my focal length and camera calibration matrices?
The goal is to find DEPTH IN MILLIMETERS.
I know how to do it in disparity/depth map case using formula: Z=(t*f)/d.
But here in the sparse case, can I do something like this? http://matlab.wikia.com/wiki/FAQ#How_do_I_measure_a_distance_or_area_in_real_world_units_instead_of_in_pixels.3F
Or is there a more sophisticated method with a more in-depth explanation?
Thanks.
The formula you wrote is valid only in the special case when the image planes of the two cameras are on the same geometrical plane, and the motion from one to the other is a translation parallel to one of the image axes.
In the general case you'll need to triangulate actual rays in 3D space, using one of the techniques described in that book (it has a whole chapter on reconstruction). The reconstruction will be metric if your calibration is; in particular, if the translation vector of the coordinate transform between the cameras is expressed in meters (or millimeters, or inches, ...).
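As a concrete illustration, here is the standard linear (DLT) triangulation of one matched point from two projection matrices, sketched in NumPy (illustrative; newer releases of MATLAB's Computer Vision Toolbox also provide a `triangulate` function). The key point is that the output units are whatever units the translation in P1/P2 uses:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 projection matrices K[R|t]; if t is in millimetres,
    the returned point is in millimetres too.
    x1, x2: (u, v) pixel coordinates of the matched feature.
    """
    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest
    # singular value, dehomogenized.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With noisy matches this linear solution is usually refined by minimizing reprojection error, as the book describes, but the unit argument is unchanged.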