Interpolation between two images with different pixel size - MATLAB

For my application, I want to interpolate between two images (CT to PET).
Therefore I map between them like this:
[X,Y,Z] = ndgrid(linspace(1,size(imagedata_ct,1),size_pet(1)),...
                 linspace(1,size(imagedata_ct,2),size_pet(2)),...
                 linspace(1,size(imagedata_ct,3),size_pet(3)));
new_imageData_CT = interp3(imagedata_ct,X,Y,Z,'nearest',-1024);
The size of my new image new_imageData_CT now matches the PET image. The problem is that the data in my new image is not scaled correctly, so it looks compressed. I think the reason is that the voxel size of the two images is different and is not taken into account by the interpolation. For example:
CT image size : 512x512x1027
CT voxel size[mm] : 1.5x1.5x0.6
PET image size : 192x126x128
PET voxel size[mm] : 2.6x2.6x3.12
So how can I take the voxel size into account in the interpolation?

You need to perform a matching in the patient coordinate system, but there is more to consider than just the resolution and the voxel size. You need to synchronize the positions (and maybe the orientations also, but this is unlikely) of the two volumes.
You may find this thread helpful for finding out which DICOM tags describe the volume and how to calculate the transformation matrices for converting between the patient coordinate system (x, y, z in millimeters) and the volume coordinate system (column, row, slice number).
You have to make sure that the volume positions are comparable, as the positions of the slices in the CT and PET do not necessarily refer to the same origin. The easy way to do this is to compare the DICOM attribute Frame Of Reference UID (0020,0052) of the CT and PET slices. For all slices that share the same Frame Of Reference UID, the position of the slice in the DICOM header refers to the same origin.
If the datasets do not contain this tag, it is going to be much more difficult, unless you simply take the common origin as an assumption. There are methods to deduce the matching slices of two different volumes from the contents of the pixel data, referred to as "registration", but this is a science of its own. See the link from Hugues Fontenelle.
BTW: In your example, you are not going to find a matching voxel in both volumes for each position, as the volumes cover different physical extents. E.g. for the x-direction:
CT: 512 * 1.5 = 768 millimeters
PET: 192 * 2.6 = 499.2 millimeters
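If you nevertheless want to resample purely on the basis of the voxel sizes, here is a minimal MATLAB sketch. It assumes (and this is a big assumption) that both volumes share the same origin and orientation, and uses the spacings from the question; in practice you should derive the mapping from Image Position (Patient), Image Orientation (Patient) and the Frame Of Reference UID as described above. The variable names are placeholders.
% assumption: CT and PET share the same origin and orientation
ct_spacing  = [1.5 1.5 0.6];    % CT voxel size  [row col slice] in mm
pet_spacing = [2.6 2.6 3.12];   % PET voxel size [row col slice] in mm
size_pet    = [192 126 128];

% physical (mm) coordinates of every PET voxel centre
[rP,cP,sP] = ndgrid(0:size_pet(1)-1, 0:size_pet(2)-1, 0:size_pet(3)-1);
x_mm = rP * pet_spacing(1);     % along the CT rows
y_mm = cP * pet_spacing(2);     % along the CT columns
z_mm = sP * pet_spacing(3);     % along the CT slices

% convert millimeters back into (fractional) CT voxel indices
rQ = x_mm / ct_spacing(1) + 1;
cQ = y_mm / ct_spacing(2) + 1;
sQ = z_mm / ct_spacing(3) + 1;

% interp3 interprets X as columns, Y as rows, Z as slices
new_imageData_CT = interp3(imagedata_ct, cQ, rQ, sQ, 'nearest', -1024);
PET positions that fall outside the CT volume receive the fill value -1024.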

I'll let someone else answer the question as asked, but I think you're asking the wrong one. I lack context of course, but at first glance MATLAB isn't the right tool for the job.
Have a look at ITK (C++ library with python wrappers), and the "Multi-modal 3D image registration" article.
Try 3DSlicer (it has a GUI for the previous tool)
Try FreeSurfer (similar, focused on brain scans)
After you've done that registration step, you could export the resulting images (now of identical size and spacing), and continue with your interpolation in Matlab if you wish (or with the same tools).

There is a module in Slicer called PETCTFUSION which aligns the PET scan to the CT image.
You can install it in newer versions of Slicer.
In the module's Display panel shown below, options to select a colorizing scheme for the PET dataset are provided:
Grey will provide white to black colorization, with black indicating the highest count values.
Heat will provide a warm color scale, with dark red the lowest and white the highest count values.
Spectrum will provide a color scale that goes from cooler (dark blue) at the low-count end to white at the highest.
This panel also provides a means to adjust the window and level of both PET and CT volumes.
I normally use the resampleinplace tool after the registration. You can find it in the Registration package, under Resample Image.
Look at the screenshot here:
If you would like to know more about the PETCTFUSION, there is a link below:
https://www.slicer.org/wiki/Modules:PETCTFusion-Documentation-3.6
Since Slicer is compatible with Python, you can use the Python interactor to run your own code too.
And let me know if you face any problems.

Related

Problem with creating a 3D mask of DICOM-RT contour data using MATLAB

I am having trouble extracting a tumor from a DICOM image using an RT mask. Due to GDPR I am not allowed to share the DICOM files, even though they are anonymized; however, I am allowed to share the images themselves. I want to extract the drawn tumor from the CT images using the drawn GTV stored as an RT structure, using MATLAB.
Let's say the directory where my CT images are stored is called DicomCT and that the RT structure DICOM file is called rtStruct.dcm.
I can read and visualize my CT images as follows:
V = dicomreadVolume("DicomCT");
V = squeeze(V);
volshow(V)
volume V - 3D CT image
I can load my rt structure using:
Info = dicominfo("rtStruct.dcm");
rtContours = dicomContours(Info);
I can plot the different contours using:
plotContour(rtContours)
Contours for the GTV of the CT image
I used this link for the information on how to create the mask such that I can apply it to the 3D CT image: https://nl.mathworks.com/help/images/create-and-display-3-d-mask-of-dicom-rt-contour-data.html#d124e5762
The DICOM information tells me the image should have 3 mm slices, hence I took 3x3x3 for the referenceInfo.
referenceInfo = imref3d(size(V),3,3,3);
rtMask = createMask(rtContours, 1, referenceInfo)
When I plot my rtMask, I get a grey screen without any trace of the mask. I think that something is wrong with the way that I define the referenceInfo, but I have no idea how to fix it or what is wrong.
volshow(rtMask)
Volume plot of the RT mask
What would be the best way forward?
I was actually having a similar problem a couple of days ago. I think you might have two possible problems (neither of them your fault).
Your grey screen might be a rendering error that volshow() is not reporting, because of how volshow() works internally. I found it does some things I don't understand with graphics memory when representing numeric volumes vs. logical volumes. I found this out the hard way on my work PC, where I only have Intel HD graphics. Using
iptsetpref('VolumeViewerUseHardware',true)
for logical volumes worked fine for me. You can also test this by replotting the mask as double instead of logical:
rtMask = double(rtMask)
volshow(rtMask)
If it's not a rendering error caused by the interaction between your system and volshow(), it might be a genuine confusion about createMask and the reference info it needs (caused by a rather poor explanation in the tutorial you linked). Using pixel size info instead of the actual axes limits can lead to a partial segmentation, or a missing one, because of the scale. This nice person explained it more elegantly in the post below, by using the actual geometrical info of the DICOM contours as limits.
https://es.mathworks.com/support/search.html/answers/1630195-how-to-convert-dicom-rt-structure-to-binary-mask.html?fq%5B%5D=asset_type_name:answer&fq%5B%5D=category:images/basic-import-and-export&page=1
Basically, use:
plotContour(rtContours);
ax = gca;
referenceInfo = imref3d(size(V),ax.XLim,ax.YLim,ax.ZLim);
rtMask = createMask(rtContours, 1, referenceInfo)
in addition to your code, and it might work.
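For completeness, here is a minimal end-to-end sketch that puts the two suggestions together (the axes-limit referenceInfo and the rendering workaround), using the file names from the question; treat it as a starting point rather than a guaranteed fix.
% read the volume and the RT structure (file names from the question)
V = squeeze(dicomreadVolume("DicomCT"));
rtContours = dicomContours(dicominfo("rtStruct.dcm"));

plotContour(rtContours);                      % plot once, just to obtain the world limits
ax = gca;
referenceInfo = imref3d(size(V), ax.XLim, ax.YLim, ax.ZLim);
rtMask = createMask(rtContours, 1, referenceInfo);

iptsetpref('VolumeViewerUseHardware', true);  % rendering workaround mentioned above
volshow(double(rtMask));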
I hope this could be of help to you.

Dilate an image based on a distribution

I have an image that shows some randomly placed filled circles (e.g. see here). I want to change these circles into irregular shapes. In other words, I want to define a distribution by which I can expand the circles. Clearly, the resulting objects will not be circles anymore, because they are expanded based on a variable distribution; see this new deformed circle.
Is there any method that can do this? In my first attempt I tried to use image dilation in MATLAB, but I have no idea how the dilation "distribution" should be specified.
IM2 = imdilate(IM,SE)
If you want to do it using dilation, a solution could be:
Let's say Im is your original image.
1) Result = Copy(Im)
2) ImClone = Clone(Im)
3) Randomly delete pixels in ImClone. The number of pixels to delete may be a percentage, or whatever you prefer.
4) ImDilate = Dilate(ImClone), with a structuring element of size N
5) Result = Maximum(Result, ImDilate)
If you want different sizes of deformation, iterate over steps 2 to 5 with different structuring element sizes (see the MATLAB sketch below).
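As a rough MATLAB sketch of this recipe (circles.png, the 70% deletion rate, the number of passes and the disk sizes are just placeholder choices):
Im = imread('circles.png') > 0;          % hypothetical binary image of filled circles
Result = Im;
for k = 1:5                              % a few passes with different SE sizes
    ImClone = Im;
    ImClone(rand(size(Im)) < 0.7) = 0;   % randomly delete ~70% of the pixels
    SE = strel('disk', randi([2 6]));    % structuring element of random size
    ImDilate = imdilate(ImClone, SE);
    Result = Result | ImDilate;          % pixel-wise maximum for binary images
end
imshow(Result)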
But what you want is more of an elastic deformation. You should take a look at free-form deformation (FFD).

Extract Rectangular Image from Scanned Image

I have scanned copies of currency notes from which I need to extract only the rectangular notes.
Although the scanned copies have a very plain background, the note itself may be rotated or may already be aligned correctly. I'm using MATLAB.
Example input:
Example output:
I have tried using thresholding and canny/sobel edge detection to no avail.
I also tried the solution given here but it detects the entire image for cropping and it would not work for rotated images.
PS: My primary objective is to determine the denomination of the currency. There are a couple of methods I thought I could use:
Color based, since all currency notes have varying primary colors. The advantage of this method is that it's independent of the rotation or scale of the input image.
Detect the small black triangle on the lower left corner of the note. This shape is unique for each denomination.
Calculating the difference between 2 images. Since this is a small project, all input images will be of the same dpi and resolution and hence, once aligned, the difference between the input and the true images can give a rough estimate.
Which method do you think is the most viable?
It seems you are further along than it looked (judging by your comments), which is good! I'm going to show you more or less the way to solve your problem; however, I'm not posting the whole code, just the important parts.
You have an image that is already quite well cropped and segmented. First you need to ensure that your image has no holes, so fill them!
Iinv=I==0; % you want 1 in money, 0 in not-money;
Ifill=imfill(Iinv,8,'holes'); % Fill holes
After that, you want to get only the boundary of the image:
Iedge=edge(Ifill);
And in the end you want to get the corners of that square:
C=corner(Iedge);
Now that you have 4 corners, you should be able to determine the angle of this rotated "square". Once you have it, do:
Irotate = imrotate(Icropped, angle);
Once here you may want to crop it again to end up just with the money! (aaah money always as an objective!)
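As a sketch of the angle and final crop steps, assuming Ifill is the filled mask from above and Icropped the corresponding grayscale crop; instead of deriving the angle from the four corners, this uses regionprops 'Orientation', which is a simpler alternative when there is a single rectangular blob:
% angle of the note from the mask, then de-rotate both mask and image
stats = regionprops(Ifill, 'Orientation');
angle = -stats(1).Orientation;             % rotate the note back to horizontal
Irotated     = imrotate(Icropped, angle);
Imaskrotated = imrotate(Ifill, angle);

% final crop: tight bounding box of the rotated mask
bb    = regionprops(Imaskrotated, 'BoundingBox');
Inote = imcrop(Irotated, bb(1).BoundingBox);
imshow(Inote)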
Hope this helps!

Artifacts in processed images

This question is related to my previous post, Image Processing Algorithm in Matlab, on Stack Overflow, where I already got the results I wanted.
But now I am facing another problem: I am getting some artefacts in the processed images. In my original images (a stack of 600 images) I can't see any artefacts; please see the original image of a fingernail:
But in my 10 processed results I can see these lines:
I really don't know where they come from.
Also, if they are caused by the camera's sensor, why can't I see them in my original images? Any ideas?
Edit:
I have added the following code suggested by @Jonas. It reduces the artefacts, but does not completely remove them.
% averaging of images
im = D{1}(:,:);
for i = 2:100
    im = imadd(im,D{i}(:,:));
end
im = im/100;
imshow(im,[]);
for i = 1:100
    SD{i}(:,:) = imsubtract(D{i}(:,:),im(:,:));
end
@belisarius has asked for more images, so I am uploading 4 images of my finger with the speckle pattern and 4 images of the black background (size 1280x1024):
And here is the black background:
Your artifacts are in fact present in your original image, although not visible.
Code in Mathematica:
i = Import@"http://i.stack.imgur.com/5hM3u.png"
EntropyFilter[i, 1]
The lines are faint, but you can see them by binarization with a very low level threshold:
Binarize[i, .001]
As for what is causing them, I can only speculate. I would start tracing from the camera output itself. Also, you may post two or three images "as they come straight from the camera" to allow us some experimenting.
The camera you're using most likely has a CMOS chip. Since CMOS chips have independent column (and possibly row) amplifiers, which may have slightly different electronic properties, the signal from one column can be amplified more than that from another.
Depending on the camera, this variability in column intensity can be stable. In that case, you're in luck: take ~100 dark images (tape something over the lens), average them, and then subtract the average from each image before running the analysis. This should make the lines disappear. If the lines do not disappear (or if there are additional lines), use the post-processing scheme proposed by Amro to remove the lines after binarization.
EDIT
Here's how you'd do the background subtraction, assuming that you have taken 100 dark images and stored them in a cell array D with 100 elements:
% take the mean; convert to double for safety reasons
meanImg = mean( double( cat(3,D{:}) ), 3);
% then you can subtract the mean from the original (non-dark-frame) image
correctedImage = rawImage - meanImg; % (maybe you need to re-cast meanImg first)
Here is an answer that, in my opinion, will remove the lines more gently than the methods mentioned above:
im = imread('image.png');        % Original image
imFiltered = im;                 % The filtered image will end up here
imChanged = false(size(im));     % To document the filter performance

% 1)
% Compute the histograms for each column in the lower part of the image
% (where the columns are most clear) and compute the mean and std of each
% bin in the histogram.
histograms = hist(double(im(501:520,:)),0:255);
colMean = mean(histograms,2);
colStd = std(histograms,0,2);

% 2)
% Now loop through each gray level above zero and...
for grayLevel = 1:255

    % Find the columns where the number of 'grayLevel' pixels is larger than
    % mean_n_graylevel + 3*std_n_graylevel, i.e. columns that contain
    % statistically 'many' pixels with the current 'grayLevel'.
    lineColumns = find(histograms(grayLevel+1,:) > colMean(grayLevel+1)+3*colStd(grayLevel+1));

    % Now remove all 'grayLevel' pixels in lineColumns in the original image
    if(~isempty(lineColumns))
        for col = lineColumns
            imFiltered(:,col) = im(:,col).*uint8(~(im(:,col)==grayLevel));
            imChanged(:,col) = im(:,col)==grayLevel;
        end
    end
end

imshow(imChanged)
figure,imshow(imFiltered)
Here is the image after filtering
And this shows the pixels affected by the filter
You could use some sort of morphological opening to remove the thin vertical lines:
img = imread('image.png');
SE = strel('line',2,0);
img2 = imdilate(imerode(img,SE),SE);
subplot(121), imshow(img)
subplot(122), imshow(img2)
The structuring element used was:
>> SE.getnhood
ans =
1 1 1
Without really digging into your image processing, I can think of two reasons for this to happen:
The processing introduced these artifacts. This is unlikely, but it's an option. Check your algorithm and your code.
This is a side-effect because your processing reduced the dynamic range of the picture, just like quantization. So in fact, these artifacts may have already been in the picture itself prior to the processing, but they couldn't be noticed because their level was very close to the background level.
As for the source of these artifacts, it might even be the camera itself.
This is a VERY interesting question. I used to deal with this type of problem with live IR imagers (video systems). We actually had algorithms built into the cameras to deal with this problem before the user ever saw or got their hands on the image. A couple of questions:
1) Are you dealing with RAW images, or with already pre-processed grayscale (or RGB) images?
2) What is your ultimate goal with these images? Is it simply to get rid of the lines regardless of the resulting quality in the rest of the image, or is the point to preserve the absolute best image quality? Do you intend to perform other processing afterwards?
I agree that those lines are most likely present in ALL of your images. There are two reasons for lines like these to show up in an image: one is a bright scene in which the op-amps for some columns get saturated, causing whole columns of your image to take on the brightest value the camera can output; the other is bad op-amps or ADCs (analog-to-digital converters) themselves (most likely not an ADC, as there is normally essentially one ADC for the whole sensor, which would make the whole image bad, not your case). The saturation case is actually much more difficult to deal with (and I don't think it is your problem). Note: too much saturation on a sensor can cause bad pixels and columns to arise (which is why they say never to point your camera at the sun).
The bad-column problem can be dealt with. In another answer above, someone had you averaging images. While this may be good for finding out where the bad columns are (or bad single pixels, or the noise matrix of your sensor), and you would have to average while pointing the camera at black, white, and essentially solid colors, it isn't the correct way to get rid of them. By the way, what I am describing with the black and white frames, averaging, finding bad pixels, etc. is called calibrating your sensor.
OK, so assuming you are able to get this calibration data, you WILL be able to find out which columns are bad, even single pixels.
If you have this data, one way you could erase the bad columns is:
for each bad column
    for each pixel (x, y) on the bad column
        pixel(x, y) = Average(pixel(x+1,y), pixel(x+1,y-1), pixel(x+1,y+1),
                              pixel(x-1,y), pixel(x-1,y-1), pixel(x-1,y+1))
What this essentially does is replace each bad pixel with the average of the 6 remaining good pixels around it. The above is an oversimplified version of the algorithm: there are certainly cases where a single bad pixel could sit right next to the bad column and shouldn't be used for averaging, or where two or three bad columns are right next to each other. One could imagine calculating the values for a bad column, then considering that column good in order to move on to the next bad column, and so on.
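Here is a minimal MATLAB sketch of that column repair; badColumns and rawImage are assumed names (the column indices would come from your calibration step, e.g. columns flagged in the averaged dark frames), and image borders are not handled for brevity:
img = double(rawImage);                    % rawImage: the image to correct
for col = badColumns(:)'                   % badColumns: indices of bad columns (not at the border)
    for row = 2:size(img,1)-1
        left  = img(row-1:row+1, col-1);   % three good pixels to the left
        right = img(row-1:row+1, col+1);   % three good pixels to the right
        img(row, col) = mean([left; right]);
    end
end
correctedImage = uint8(img);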
Now, the reason I asked about RAW versus B/W or RGB: if you are processing RAW, then depending on the build of the sensor itself, it could be that only one sub-pixel (if you will) of the Bayer-filtered image sensor has the bad op-amp. If you can detect this, you wouldn't necessarily have to throw out the other good sub-pixels' data. Secondly, if you are using an RGB sensor to take a grayscale photo and you shot it in RAW, you may be able to calculate your own grayscale pixels. Many sensors, when returning a grayscale image from an RGB sensor, simply pass back the green pixel as the overall pixel, because it essentially serves as the luminance of the image (this is why most image sensors implement two green sub-pixels for every red or blue sub-pixel). If this is what your camera is doing (not ALL sensors do this), then you may have better luck removing just the bad channel's column and performing your own grayscale conversion using:
gray = (0.299*r + 0.587*g + 0.114*b)
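In MATLAB, applied to a hypothetical H-by-W-by-3 RGB frame rgb, this is simply:
% standard luma weighting of the three channels
gray = 0.299*double(rgb(:,:,1)) + 0.587*double(rgb(:,:,2)) + 0.114*double(rgb(:,:,3));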
Apologies for the long-winded answer, but I hope it is still informative to someone :-)
Since you cannot see the lines in the original image, they are either there with a low intensity difference compared to the original range of the image, or they are added by your processing algorithm.
The shape of the disturbance hints at the first option... (unless you have an algorithm that processes each row separately).
It seems like your sensor's columns are not uniform. Try taking a picture without the finger (background only) using the same exposure (and other) settings, then subtracting it from the photo of the finger (prior to other processing). Make sure the background is uniform before taking both images.

Perl - Ratio of homogeneous areas of an image

I would like to check whether an image has a lot of homogeneous areas. Therefore I would like to compute some kind of value for an image that expresses a ratio depending on the amount/size of its homogeneous areas (e.g. that value could range from 0 to 5).
Instead of a value there could be some kind of classification as well.
[many homogeneous areas -> value/class 5 ; few homogeneous areas -> value/class 0]
I would like to do that in Perl. Is there a package/function or something like that?
What you want seems to be an area of image processing research which I am not familiar with. However, GraphicsMagick's mogrify utility has a -segment option:
Use -segment to segment an image by analyzing the histograms of the color components and identifying units that are homogeneous with the fuzzy c-means technique. The scale-space filter analyzes the histograms of the three color components of the image and identifies a set of classes. The extents of each class is used to coarsely segment the image with thresholding. The color associated with each class is determined by the mean color of all pixels within the extents of a particular class. Finally, any unclassified pixels are assigned to the closest class with the fuzzy c-means technique.
I don't know if this is any use to you. You might have to hit the library on this one, and read some research. You do have access to this through PerlMagick as well. However, it does not look like it gives access to the internals, but just produces an image based on parameters.
In my tests (without really understanding what the parameters do), photos turned entirely black, whereas PNG images with large areas of similar colors were reduced to a sort of an average color. Whether you can use that fact to develop a measure is an open question I am not going to investigate ;-)