Getting 3D image from 2D image - matlab

I am doing a project in MATLAB on image processing.
Is there any possibility of getting a 3D image from a 2D image?

If you have multiple images of the same object and the position of the camera when each picture was taken, then it is possible, but still not easy. You can find two such datasets and links to relevant articles here: http://vision.middlebury.edu/mview/

A 3D image would be a projection from 4D (and to show one of those, you would have to project down to 2D). Most images that can be displayed on a computer or in a picture frame are 2D projections of 3D objects. Because this projection in effect selects a slice of the higher-dimensional space, a 2D image does not contain the information needed to invert that projection and get back to 3D.
But if you have sufficient sampling of the space, it is possible to reconstruct a 3D object from 2D images of it, although I don't know of any simple ways to do this.

You can't do this without supporting data such as multiple 2D images describing the same 3D object. You then need to figure out the perspectives from which each image was taken, reconcile those into real space, and generate your points using a method such as intersection of stereo lines through each image plane onto the same physical coordinate.
You can also attempt a superpixel approach by exploiting lighting data within a single image, though these methods aren't as accurate.
This is a big field.
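As a hedged sketch of the two-view version of that pipeline using the Computer Vision Toolbox (the image file names and the calibrated cameraParams object are assumptions, not givens):

    % Hedged two-view reconstruction sketch; view1.png/view2.png and
    % cameraParams (from MATLAB's camera calibrator) are assumptions.
    I1 = im2gray(imread('view1.png'));
    I2 = im2gray(imread('view2.png'));
    [f1, v1] = extractFeatures(I1, detectSURFFeatures(I1));
    [f2, v2] = extractFeatures(I2, detectSURFFeatures(I2));
    pairs = matchFeatures(f1, f2);
    m1 = v1(pairs(:,1));  m2 = v2(pairs(:,2));
    [E, inliers] = estimateEssentialMatrix(m1, m2, cameraParams);
    [R, t] = relativeCameraPose(E, cameraParams, m1(inliers), m2(inliers));
    [rotM, transV] = cameraPoseToExtrinsics(R, t);     % pose -> extrinsics
    P1 = cameraMatrix(cameraParams, eye(3), [0 0 0]);  % camera 1 at the origin
    P2 = cameraMatrix(cameraParams, rotM, transV);
    xyz = triangulate(m1(inliers), m2(inliers), P1, P2);  % intersect stereo rays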

The Radon transform is used in tomography applications to reconstruct 3D representations (i.e. images) from many 2D projections of the 3D "scene". This transform and its inverse are available in MATLAB's Image Processing Toolbox (radon and iradon). You might want to have a look at them.
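For instance, here is a minimal sketch of reconstructing a slice from its projections, using the toolbox's Shepp-Logan phantom as stand-in data:

    P = phantom(256);               % Shepp-Logan test slice
    theta = 0:179;                  % projection angles in degrees
    R = radon(P, theta);            % forward projections (sinogram)
    I = iradon(R, theta);           % filtered back-projection reconstruction
    imshowpair(P, I, 'montage');    % compare original and reconstruction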
Hope this helps.
A.

Related

How to superimpose two stereo images so that they become aligned? (Camera intrinsics and extrinsics known)

I'd like to find a transformation that projects the image from the Left camera onto the image from the Right camera so that the two become aligned. I already managed to do that with two similar cameras (IGB and RGB) by using the disparity map and shifting each pixel by the corresponding disparity value. My problem is that this doesn't work for other cameras that I'm using (for example multispectral and infrared sensors), because the calculated disparity maps have very little detail. I am currently using the MATLAB Computer Vision Toolbox, and I suspect that the problem is the poor correlation of information in the images (few correspondences found by the disparity algorithm).
I would like to know if there is another way of doing this transformation, for example just by using the extrinsic and intrinsic parameters of the cameras (they are already calibrated).
The disparity IS the transformation you are looking for. Any sensible left-right mapping depends on 3D information, which the disparity provides. Anything else is just hallucinating some values based on assumptions that may or may not make sense.
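As a minimal sketch of what that means in practice (I1, I2 and a calibrated stereoParams object are assumed), the rectified left image can be warped onto the right one pixel by pixel using the disparity:

    [J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
    G1 = im2gray(J1);  G2 = im2gray(J2);
    d = disparitySGM(G1, G2);                 % (x,y) in J1 maps to (x-d, y) in J2
    [rows, cols] = size(d);
    [X, Y] = meshgrid(1:cols, 1:rows);
    Xr = round(X - d);                        % target column in the right image
    ok = ~isnan(d) & Xr >= 1 & Xr <= cols;    % keep valid, in-bounds pixels only
    warped = zeros(rows, cols, 'like', G1);
    warped(sub2ind([rows cols], Y(ok), Xr(ok))) = G1(sub2ind([rows cols], Y(ok), X(ok)));
    imshowpair(warped, G2);                   % left image re-projected onto the right view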

Local Binary Patterns for 3D images - MATLAB

MATLAB's extractLBPFeatures (from R2015b) works only on 2D images, but I need to extract Local Binary Pattern features from a CT image (3D).
There are other implementations available for the 2D version of LBP extraction... Is it possible to modify 2D to 3D without losing sanity? Example of a 2D algorithm: http://www.cse.oulu.fi/CMV/Downloads/LBPMatlab
If someone has come across a program/algorithm that can work on 3D images, please share.
There is no definition of LBP in 3D, only in 2D, for the simple reason that it is not clear which path to follow around the voxel to create the code.
However, you can compute the LBP code on each of the planes XY, XZ and YZ, so each voxel will generate three codes instead of one.
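A minimal sketch of that idea: a basic 8-neighbour LBP applied to every XY, XZ and YZ slice, so each voxel gets three codes (border voxels wrap around here and should be masked out in real use):

    function codes = lbp3planes(V)
        % Three orthogonal-plane LBP codes for volume V (save as lbp3planes.m).
        [nx, ny, nz] = size(V);
        codes = zeros(nx, ny, nz, 3, 'uint8');
        for k = 1:nz, codes(:,:,k,1) = lbp2d(V(:,:,k));          end  % XY planes
        for j = 1:ny, codes(:,j,:,2) = lbp2d(squeeze(V(:,j,:))); end  % XZ planes
        for i = 1:nx, codes(i,:,:,3) = lbp2d(squeeze(V(i,:,:))); end  % YZ planes
    end

    function C = lbp2d(I)
        % Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel.
        I = double(I);
        C = zeros(size(I), 'uint8');
        off = [-1 -1; -1 0; -1 1; 0 1; 1 1; 1 0; 1 -1; 0 -1];  % clockwise ring
        for b = 1:8
            C = C + uint8(circshift(I, off(b,:)) >= I) * 2^(b-1);
        end
    end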

matlab: How to get texture using triangulation points in 3D reconstruction

I am working on 3D reconstruction from two views. So far I have obtained the fundamental matrix, the essential matrix and the triangulated points. After this stage, how do I go forward to obtain a textured image from the input image, and save those results in a VRML model?
Are your cameras calibrated? If so, then you can rectify the images, and get a dense reconstruction. You can then plot the points using the colors from the RGB image. See this example.
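A minimal sketch of the colouring step, assuming you already have xyz (N-by-3 triangulated points) and pts (the corresponding N-by-2 pixel coordinates in the left RGB image I1; both names are hypothetical):

    [h, w, ~] = size(I1);
    idx = sub2ind([h w], round(pts(:,2)), round(pts(:,1)));  % linear pixel indices
    r = I1(:,:,1);  g = I1(:,:,2);  b = I1(:,:,3);
    pc = pointCloud(xyz, 'Color', [r(idx) g(idx) b(idx)]);   % Computer Vision Toolbox
    pcshow(pc);                                              % coloured sparse cloud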

How to convert 3D point cloud (extracted from 3D sparse reconstruction) to millimeters?

Using stereo vision, and following the Multiple View Geometry book (http://www.robots.ox.ac.uk/~vgg/hzbook/), I have created a 3D point cloud in MATLAB. To do that, I first calibrated the cameras and rectified the stereo images, then performed feature extraction and matching, then eliminated the noisy matches based on the camera locations, and finally created the 3D point cloud using triangulation.
Now my question is: how do I convert this 3D point cloud from the pixel domain to actual millimeter/centimeter units, knowing my focal length and the camera calibration matrices?
The goal is to find DEPTH IN MILLIMETERS.
I know how to do it in the disparity/depth map case, using the formula Z = (t*f)/d, where t is the baseline, f the focal length in pixels, and d the disparity.
But here, in the sparse case, can I do something like this? http://matlab.wikia.com/wiki/FAQ#How_do_I_measure_a_distance_or_area_in_real_world_units_instead_of_in_pixels.3F
Or is there a more sophisticated method with a more in-depth explanation?
Thanks.
The formula you wrote is valid only in the special case when the image planes of the two cameras are on the same geometrical plane, and the motion from one to the other is a translation parallel to one of the image axes.
In the general case you'll need to triangulate actual rays in 3D space, using one of the techniques described in that book (it has a whole chapter on reconstruction). The reconstruction will be metric if your calibration is; in particular, if the coordinate transform between the cameras has a translation vector whose units are meters (or millimeters, or inches, ...).
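In MATLAB terms, a minimal sketch using the Computer Vision Toolbox's triangulate: if the stereo calibration was done with a checkerboard whose square size was specified in millimeters, the returned world points are already in millimeters (matchedPoints1/2 and stereoParams are assumed to exist):

    % World points in the units of the calibration pattern (here: mm).
    worldPoints = triangulate(matchedPoints1, matchedPoints2, stereoParams);  % N-by-3
    depths = worldPoints(:,3);   % Z coordinate = depth in millimeters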

Creating 3D volume from 2D slice set of grayscale images

I am to create a 3D volume out of a grayscale image set using MATLAB. A set contains continuous, quantized slices of 2D grayscale images. I still consider myself a rookie in MATLAB, but this is what I currently have in mind:
Create an empty space for the 3D volume.
On each image, perform all the preprocessing operations so that we keep only the part that is of interest. (In this question, assume that this preprocessing part always works flawlessly.)
Go through the image; each pixel's x and y coordinates in 2D will be transferred to the empty space. For the z coordinate, we can use the slice number together with the distance between slices. If a pixel is adjacent to another pixel, the 3D points will be connected together.
Repeat the previous two steps until all slices are done. We will now have all the points connected, just like in the 2D slices.
But here comes the trouble: how can we connect the points between the slices, so that these points become a volume? Or is there a more robust way to do this in MATLAB? Any suggestion is highly appreciated.
Part 0 - Assumptions
all 2D images are of the same dimension, hence your 3D volume can hold all of them in a rectangular cube
the majority of the pixels in each of the 2D images have 3D spatial relationships (you can't visualize much if the pixels in each of the 2D images follow some random distribution)
Part 1 - Visualizing 3D Volume from A Stack of 2D Images
To visualize or reconstruct a 3D volume from a stack of 2D images, you can try the following toolkits in matlab.
[1] 3D CT/MRI images interactive sliding viewer
http://www.mathworks.com/matlabcentral/fileexchange/29134-3d-ctmri-images-interactive-sliding-viewer
[2] Viewer3D
http://www.mathworks.com/matlabcentral/fileexchange/21993-viewer3d
[3] Image3
http://www.mathworks.com/matlabcentral/fileexchange/21881-image3
[4] Surface2Volume
http://www.mathworks.com/matlabcentral/fileexchange/8772-surface2volume
[5] SliceOMatic
http://www.mathworks.com/matlabcentral/fileexchange/764
Note that if you are familiar with VTK, you can try this:
[6] matVTK
http://www.cir.meduniwien.ac.at/matvtk/
I am currently sticking with [5] SliceOMatic for its simplicity and ease of use. However, by default, 3D rendering is quite slow in MATLAB. Turning on OpenGL gives faster rendering (http://www.mathworks.com/help/techdoc/ref/opengl.html); or, simply put, set(gcf, 'Renderer', 'OpenGL').
Part 2 - Interpolating pixels in between the slices
To interpolate pixels in between the slices, you need to specify an interpolation method (some of the above toolkits have this capability/flexibility). To give you a head start, some examples of interpolation methods are bicubic, spline, polynomial, etc. (you can work this out by searching Google or Google Scholar for interpolation methods specific to your problem domain).
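A minimal sketch with interp3, assuming the slices have already been stacked into a volume V (linear interpolation here; 'cubic' or 'spline' also work):

    [ny, nx, nz] = size(V);
    [Xq, Yq, Zq] = meshgrid(1:nx, 1:ny, 1:0.25:nz);   % 4x finer sampling along z
    Vfine = interp3(double(V), Xq, Yq, Zq, 'linear'); % interpolated volume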
Part 3 - 3D Pre-processing
Looking at your procedure, you process the volumetric data by processing each of the 2D images first. In many advanced algorithms, or in true 3D processing, what you can do instead is process the volumetric data in the 3D domain first (simply put, you take the 26 neighbors, or more, into account first). Once this step is done, you can simply output the volumetric data as a stack of 2D images for cross-sectional viewing, supply it to one of the aforementioned toolkits for 3D viewing, or output it to third-party 3D viewing applications.
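As a hedged illustration of such "true 3D" processing (Image Processing Toolbox; the global Otsu threshold is just a stand-in segmentation step):

    Vs = imgaussfilt3(double(V), 1);     % 3D Gaussian smoothing, not slice-by-slice
    Vn = mat2gray(Vs);                   % normalize to [0,1]
    bw = Vn > graythresh(Vn(:));         % global Otsu threshold on the whole volume
    cc = bwconncomp(bw, 26);             % connected components using all 26 neighbours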
I have followed the above concepts for my own medical imaging research projects, and the above findings are based on my research experience documented here (with the latest revisions).
MATLAB generally plots volumetric data using a 3D array. The data points are spatially evenly separated along each axis. If there are sites in the 3D array for which you do not have data, they are usually assigned the value NaN, and the various plotting functions can generally handle this in a reasonable way (i.e. they will generally behave as you intended).
If you load the slices into the 3D array such that adjacent points in the z-direction of the data are also adjacent in the third dimension of the array, then you should be fine.
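A minimal sketch of that loading step, assuming grayscale slices stored as files matching slice*.png in the current folder (hypothetical names):

    files = dir('slice*.png');                    % hypothetical slice file names
    first = im2double(imread(files(1).name));
    V = zeros([size(first) numel(files)]);        % preallocate the 3D array
    V(:,:,1) = first;
    for k = 2:numel(files)
        V(:,:,k) = im2double(imread(files(k).name));  % stack along dimension 3
    end
    volshow(V);                                   % or sliceViewer(V) for 2D browsing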