Create the pixel size for a DICOM picture in MATLAB

I have pictures in .dcm format. From dicominfo I learned that the pixel spacing is [0.9, 0.9] mm and the slice thickness is 1.98 mm.
My task: I should get the picture size in real-world coordinates and then display the pictures in all three projections in MATLAB.
My idea was to create a matrix in MATLAB, but it is difficult for me to apply the pixel spacing to it.
I mean that each pixel in the matrix is like a square of 0.9 mm × 0.9 mm.
I don't know if my approach is correct and if there is an easy way to solve the problem.
Thank you very much for every answer.

Several plotting functions allow you to specify the x/y/z positions of each pixel/voxel, including imagesc and pcolor. Here is an example using imagesc.
% vol stores your DICOM volume (rows x columns x slices)
vol = rand(40, 50, 30);
% spacing in mm: [row spacing, column spacing, slice thickness]
dx = [0.9, 0.9, 1.98];
% in imagesc(x, y, C), x runs along the columns of C and y along the rows
imagesc((0:size(vol,2)-1)*dx(2), (0:size(vol,1)-1)*dx(1), vol(:,:,1))
axis image   % equal axis scaling so each pixel appears 0.9 mm x 0.9 mm
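To show all three projections with their real-world extents, here is a minimal sketch along the same lines (it assumes vol and dx as above; which plane corresponds to which anatomical view, axial/coronal/sagittal, depends on your DICOM orientation):
xAx = (0:size(vol,2)-1)*dx(2);   % columns -> x, in mm
yAx = (0:size(vol,1)-1)*dx(1);   % rows    -> y, in mm
zAx = (0:size(vol,3)-1)*dx(3);   % slices  -> z, in mm
figure
subplot(1,3,1)                   % slice through the third dimension
imagesc(xAx, yAx, vol(:,:,round(end/2))), axis image, title('slice plane')
subplot(1,3,2)                   % slice through the first dimension
imagesc(xAx, zAx, squeeze(vol(round(end/2),:,:)).'), axis image, title('row plane')
subplot(1,3,3)                   % slice through the second dimension
imagesc(yAx, zAx, squeeze(vol(:,round(end/2),:)).'), axis image, title('column plane')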

Related

Applying a vector field to image in matlab

How do I apply a vector field (obtained via quiver) to an image so that the pixels are displaced in the direction of the vectors (the image is warped)?
Also, if the vector field I have is 3-dimensional, how would I do this? Think of it as laying a flat 2-dimensional image over a 3-dimensional terrain. How would I go about viewing this in MATLAB?
Thank you for your time
EDIT: I have to warp the image not just in the Z axis, but along the X and Y axes as well.
Laying down a flat 2-dimensional image over a 3-dimensional terrain:
The orientation of the axes is not obvious, but the figure in the original answer shows an image of a clown mapped onto the peaks function. The exact steps are described in the documentation of surface, in the example 'Display image along surface plot.'
% load the example clown image that ships with MATLAB
load clown
C = flipud(X);
% surface to drape the image over: here the peaks function on a grid
[XD,YD] = meshgrid(linspace(-3,3,50));
ZD = peaks(XD,YD);
figure
surface(XD,YD,ZD,C,...
    'FaceColor','texturemap',...
    'EdgeColor','none',...
    'CDataMapping','direct')
colormap(map)
view(-35,45)
Essentially, you create your surface with CData as the image you want to be displayed and set an appropriate colormap for the axes.
Use the imwarp function in the Image Processing Toolbox.
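For the warping part of the question, a minimal sketch (assuming the Image Processing Toolbox; imwarp accepts an M-by-N-by-2 displacement field whose two planes hold the per-pixel x and y offsets; the field used here is made up purely for illustration):
I = imread('cameraman.tif');          % any grayscale test image
[h, w] = size(I);
[xg, yg] = meshgrid(1:w, 1:h);
D = zeros(h, w, 2);
D(:,:,1) = 10*sin(2*pi*yg/64);        % x-displacement, in pixels
D(:,:,2) = 10*cos(2*pi*xg/64);        % y-displacement, in pixels
J = imwarp(I, D);                     % warp the image with the displacement field
imshowpair(I, J, 'montage')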

plotting 3D edge in matlab

I have a 3D matrix of an MRI image. I used MATLAB's edge function and it gave me a 3D matrix in which some of the points are 1 (meaning edges).
I want to show this surface in MATLAB but I don't know how to do this. I know that I should use surf.
As @bdecaf said, you can use find to determine the height of the points, in other words, which of the 100 layers each point lies in. You can do that as follows:
% b is the 3D edge matrix (third dimension = layers)
z1 = zeros(size(b,1), size(b,2));           % height map
temp = find(b);                              % linear indices of the edge voxels
[row, col, layer] = ind2sub(size(b), temp);  % convert to subscripts
for i = 1:numel(row)
    z1(row(i), col(i)) = layer(i);           % store the layer index as height
end
Displaying z1, for example with surf, then gives an image of the edge surface.
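A minimal sketch of that display step (not part of the original answer; it assumes z1 from above):
figure
surf(z1, 'EdgeColor', 'none')   % height = layer index of the edge voxel
view(3)
colorbar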

Disparity calculation of two similar images in matlab

I have two images (both are exactly the same image) and I am trying to calculate the disparity between them using the sum of squared distances (SSD) and reconstruct the disparity in 3D space.
Do I need to rectify the images before calculating disparity?
The following are the steps that I have done so far for the disparity map computation (I have tried with and without rectification, but both return an all-zeros disparity matrix).
For each pixel X in the left image:
    Take the pixels in the same row in the right image.
    Separate that row in the right image into windows.
    For each window:
        Calculate the disparity for each pixel in that window with X.
        Select the pixel in the window which gives the minimum SSD with X.
    Find the pixel with the minimum disparity among all windows as the best match to X.
Am I doing it correctly?
How can I visualise the 3D reconstruction of the disparity as a scatter plot in MATLAB?
Rectification guarantees that matches are found in the same row (for horizontally separated cameras). If you have doubts about the rectification of your images, you can compare rows by drawing horizontal lines between the horizontally separated images: if the lines hit the same features, you are fine. (The picture in the original answer showed a pair of images that are NOT rectified; the fact that they are distorted means a lens-distortion correction was applied and rectification was attempted, but not actually performed correctly.)
Now, let's look at what you meant by "the same images". Did you mean images of the same object taken from different viewpoints? Note that if the images are literally the same (the same viewpoint), the disparity will be zero, as was noted in the other answer.
The definition of disparity (for horizontally separated cameras) is the shift (within the same row) between matching features. Disparity is related to depth (if the optical axes of the cameras are parallel) as d = f*B/z, where z is the depth, B is the baseline (separation between the cameras) and f is the focal length. You can rearrange this into d/B = f/z, which says that disparity relates to the camera separation as the focal length relates to the distance; in other words, the ratios of horizontal and distance measures are equal.
If your images are taken with the cameras shifted horizontally, the disparity (in a simple correlation algorithm) is typically calculated in 5 nested loops (see the sketch after the list):
loop over image1 y
    loop over image1 x
        loop over disparity d
            loop over correlation window y
                loop over correlation window x
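A minimal MATLAB sketch of those loops (illustrative only, not optimized; it assumes rectified grayscale images I1 and I2 of class double; the window size and disparity range below are example values):
[h, w] = size(I1);
r    = 3;                      % correlation half-window
dMax = 32;                     % maximum disparity to search
D_best = zeros(h, w);          % best disparity per pixel
for y = 1+r : h-r                          % loop over image1 y
    for x = 1+r+dMax : w-r                 % loop over image1 x
        bestSSD = inf;
        for d = 0:dMax                     % loop over disparity d
            ssd = 0;
            for wy = -r:r                  % loop over correlation window y
                for wx = -r:r              % loop over correlation window x
                    dI  = I1(y+wy, x+wx) - I2(y+wy, x+wx-d);
                    ssd = ssd + dI^2;
                end
            end
            if ssd < bestSSD
                bestSSD = ssd;
                D_best(y, x) = d;
            end
        end
    end
end
imagesc(D_best), colorbar      % disparity shown as a heat map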
Disparity, or D_best, gives you the best matching window between image1 and image2 across all possible values of d. Finally, scatter plots are for 3D point clouds, while a disparity map is better visualized as a heat (color) map. If you need to visualize a 3D reconstruction, that is a 3D point cloud, calculate X, Y, Z as
Z = f*B/d, X = u*Z/f, Y = v*Z/f, where u and v are related to the column and row of a w-by-h image as
u = col - w/2 and v = h/2 - row; that is, u and v form an image-centered coordinate system.
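As a sketch of that reconstruction (it assumes D_best from above; the focal length f in pixels and the baseline B are made-up example values), the point cloud can then be displayed with scatter3:
f = 700;                                  % focal length in pixels (example value)
B = 0.1;                                  % baseline in metres (example value)
[h, w] = size(D_best);
[colIdx, rowIdx] = meshgrid(1:w, 1:h);
valid = D_best > 0;                       % skip pixels with no disparity
Z = f*B ./ D_best(valid);                 % depth from disparity
u = colIdx(valid) - w/2;                  % image-centered coordinates
v = h/2 - rowIdx(valid);
X = u .* Z / f;
Y = v .* Z / f;
scatter3(X, Y, Z, 1, Z, '.')              % colour the points by depth
xlabel('X'), ylabel('Y'), zlabel('Z')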
If your two images are exactly the same, then the disparity would be 0 for every pixel. You either have to use two separate cameras to take the images, or take them with a single camera from two different locations. The best way to do 3D reconstruction is to use a calibrated stereo pair of cameras. The Computer Vision System Toolbox for MATLAB includes an example of how to do that.

How to represent points from PCA space in the RGB space

I'm trying to implement a morphological method for image colors from the article "Probabilistic pseudo-morphology for grayscale and color images". At one point, we compute the PCA on the entire image and evaluate a Chebyshev inequality (equation 11 in the paper: http://perso.telecom-paristech.fr/~bloch/P6Image/Projets/pseudoMorphology/Caliman-PR2014.pdf) for each of the 3 components, which gives us 3 pairs of vectors. We then have to represent these vectors back in the RGB space. I don't understand how to do that. Can someone help me?
Looking at the paper, I'm not sure which representation you're talking about. I'm guessing Fig. 16, but I'm not sure. There's a note in the caption of Fig. 16 that's helpful: "(For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)"
Possible answer: if you have a matrix A of size (y_pixels, x_pixels, 3), then you can display it as an RGB image via:
A = rand(100,100,3);
figure()
imshow(A)
Note that your matrix must be scaled to the range [0, 1].
It seems easy to map your PCA scores for each pixel onto such a matrix and simply display that as RGB via imshow. Does that solve your problem?
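A minimal sketch of that mapping (not from the original answer; it assumes the Statistics and Machine Learning Toolbox for pca and that the image is reshaped to one pixel per row):
img = im2double(imread('peppers.png'));     % example RGB image
[h, w, ~] = size(img);
pix = reshape(img, h*w, 3);                 % one row per pixel
[coeff, scores] = pca(pix);                 % PCA over all pixels
% rescale each component to [0, 1] so the scores can be shown as an RGB image
scaled = (scores - min(scores)) ./ (max(scores) - min(scores));
figure, imshow(reshape(scaled, h, w, 3))
% or go back to the RGB space by inverting the PCA transform
rgbBack = scores * coeff' + mean(pix);
figure, imshow(reshape(rgbBack, h, w, 3))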

Image downsampling and upsampling using bilinear interpolation

I am trying to understand how exactly the upsampling and downsampling of a 2D image would happen using bilinear interpolation. I am aware that bilinear interpolation uses the values of a 2x2 neighbourhood to interpolate a data point inside that 2x2 area using weights. What I am not aware of is asked below. My objectives and specific queries are:
1. To start with, I have a 2D image of values (size MxN). The width (M) and height (N) of this image are not fixed, but change from case to case. This 2D image needs to be downsampled using bilinear interpolation to a grid of size PxQ (P and Q are input parameters), e.g. let's say PxQ is 8x8, and assume the input 2D image is of size 200x100, i.e. 200 columns and 100 rows.
While downsampling this 200x100 image using bilinear interpolation, should I first obtain a downsampled image of size 100x50 (downsampling by 2 in both dimensions using bilinear interpolation), then a 50x25 image (again downsampling by 2 in both dimensions), then a 25x12 image, then a 12x12 image (this time downsampling by linear, not bilinear, interpolation only along the rows), and finally drop some pixels to get 8x8?
Any pointers to the exact algorithm or different ways to achieve this are appreciated.
2. The above question raises another one: how do I downsample using bilinear interpolation by a non-integer scale factor, e.g. how do I go from, say, an 8x8 image array to a 6x2 image, where the resampling/scaling factors in both dimensions are not integers?
3. Then, when I have an 8x8 image, I need to upsample it by bilinear interpolation back to the original size MxN, say from 8x8 to 20x20. How would it interpolate between points in a row, and would it interpolate a full row by some means? Again, in the case of non-integer scale factors, how would bilinear upsampling happen? Exact steps.
And finally I need to implement this in C.
I tried visualizing these questions with different examples, but did not get a clear picture of how this bilinear interpolation would happen while downsampling and upsampling. All I have is plenty of paper sheets with 'dots and crosses' pictures on my desk, but still no clear solution!
Any detailed reading material, books appreciated.
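For reference (this is not an answer from the original thread, just a sketch of one common convention), direct bilinear resampling to an arbitrary PxQ target size, whether the scale factor is an integer or not, can be done in a single pass by mapping each output pixel back to fractional source coordinates and blending its 2x2 neighbourhood:
function out = bilinearResize(img, P, Q)
% Resample a 2D image to P rows by Q columns with bilinear interpolation.
% Uses the "align outer pixel centers" convention; other conventions exist.
    [M, N] = size(img);
    out = zeros(P, Q);
    for r = 1:P
        for c = 1:Q
            % fractional source coordinates of this output pixel
            y = 1 + (r-1) * (M-1) / max(P-1, 1);
            x = 1 + (c-1) * (N-1) / max(Q-1, 1);
            y0 = floor(y);  x0 = floor(x);
            y1 = min(y0+1, M);  x1 = min(x0+1, N);
            wy = y - y0;    wx = x - x0;
            % blend the 2x2 neighbourhood with bilinear weights
            out(r, c) = (1-wy)*(1-wx)*img(y0, x0) + (1-wy)*wx*img(y0, x1) ...
                      + wy*(1-wx)*img(y1, x0) + wy*wx*img(y1, x1);
        end
    end
end
Note that for large downsampling factors plain bilinear sampling aliases, which is one reason resize pipelines often prefilter or, as in your step-wise scheme, halve repeatedly.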