DICOM: MATLAB versus ImageJ grey level

I am processing a group of DICOM images using both ImageJ and MATLAB.
In order to do the processing, I need to find spots that have grey levels between 110 and 120 in an 8-bit version of the image.
The thing is: the images that MATLAB and ImageJ show me are different, even though they use the same source file.
I assume that one of them is performing some sort of conversion on the grey levels when reading or before displaying the image. But which one of them?
And in that case, how can I calibrate it so that they both display the same image?
The following image shows a comparison of the image read.
In the case of ImageJ, I just opened the application and opened the DICOM image.
In the second case, I used the following MATLAB script:
image = dicomread('I1400001');
figure(1)
imshow(image, []); % [] scales the display range to the data's min and max
title('Original DICOM image');
So which one is changing the original image, and if that's the case, how can I modify it so that both versions look the same?

It appears that by default ImageJ uses the Window Center and Window Width tags in the DICOM header to perform window and level contrast adjustment on the raw pixel data before displaying it, whereas the MATLAB code is using the full range of data for the display. Taken from the ImageJ User's Guide:
16 Display Range of DICOM Images
With DICOM images, ImageJ sets the initial display range based on the Window Center (0028, 1050) and Window Width (0028, 1051) tags. Click Reset on the W&L or B&C window and the display range will be set to the minimum and maximum pixel values.
So, setting ImageJ to use the full range of pixel values should give you an image to match the one displayed in MATLAB. Alternatively, you could use dicominfo in MATLAB to get those two tag values from the header, then apply window/leveling to the data before displaying it. Your code will probably look something like this (using the formula from the first link above):
img = dicomread('I1400001');
imgInfo = dicominfo('I1400001');
c = double(imgInfo.WindowCenter);
w = double(imgInfo.WindowWidth);
imgScaled = 255.*((double(img)-(c-0.5))/(w-1)+0.5); % Rescale the data
imgScaled = uint8(min(max(imgScaled, 0), 255)); % Clip the edges
Note that 1) double is used to convert to double precision to avoid integer arithmetic, 2) the data is assumed to be unsigned 8-bit integers (which is what the result is converted back to), and 3) I didn't use the variable name image because there is already a function with that name. ;)
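For completeness, displaying the windowed result then mirrors the original snippet; a minimal usage sketch reusing imgScaled from the code above:
figure(2)
imshow(imgScaled);
title('Windowed DICOM image');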

A normalized CT image (e.g. after the Modality LUT transformation) will have intensity values ranging from -1024 to 2000+ in Hounsfield units (HU). An image processing filter should therefore work within this data range. On the other hand, an RGB display driver can only display 256 shades of grey. To overcome this limitation, most typical medical viewers apply window leveling to create a view of the image in which the anatomy of interest has the proper contrast for the RGB display driver (mapping the image data of interest to 256 or fewer shades of grey). One way to define the window level settings is to use the Window Center (0028,1050) and Window Width (0028,1051) tags. Also, a single CT image can have multiple window level values, and each pair is basically one view of the anatomy of interest. So using view data for image processing, instead of the actual image data, may not produce consistent results.
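If you want your processing to work in actual Hounsfield units rather than raw stored values, the Modality LUT transformation mentioned above is a linear rescale defined by the Rescale Slope (0028,1053) and Rescale Intercept (0028,1052) tags. A minimal MATLAB sketch, assuming both tags are present in the header (the file name is reused from the question):
info = dicominfo('I1400001');
raw = double(dicomread('I1400001'));
hu = raw .* double(info.RescaleSlope) + double(info.RescaleIntercept); % stored value -> HU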

Related

How to open a txt file of IR temperatures as an image in MATLAB or other analysis software

I am using a therm-app camera to take infra-red photos of bats. I would like to draw around parts of the bat and find the hottest, coldest and average temperature and do further analysis.
The software that comes with the camera doesn't let me draw polygons, so I would like to load the image into another program such as MATLAB or maybe ImageJ (also happy to use Python or another tool if that would work).
The camera creates 4 files total:
I have a .jpg file; however, when I open it in MATLAB it just appears as a normal image, and I am not sure how to accurately get the temperatures from it. I used the following to open it:
im = imread('C:\18. Bats\20190321_064039.jpg');
imshow(im);
I also have three other files: two are metadata (showing date-time, emissivity settings, etc.) and one is a text file.
The text file appears to show the temperature of every pixel in the image,
e.g. for a photo that had a minimum temperature of 15 deg and a max of 20 deg, it would be a text file with a minimum value of 1500 and a maximum value of 2000:
1516 1530 1530 1540 1600 1600 1600 1600 1536 1536 ........
This file looks very useful; I am just wondering if there is some way I can open it as an image, probably in a program like MATLAB, which I think has image analysis tools, so that I could draw around certain parts of the image (e.g. the wing of the bat) and find the average, max, min, etc.
Has anyone had experience with this type of thing, can I just assign colours to numbers somehow? Or maybe other people have done it already and there is a much easier way. I will keep searching on the internet also and try to find out.
Alternatively maybe I need to open the .jpg image, draw around different parts, write a program to find out which pixels I drew around, find these in the txt file and then do averaging etc? Or somehow link the values in the text file to the .jpg file.
Sorry if this is the wrong place to ask; I can't find an image processing site on Stack Exchange.
All help is really appreciated, I will continue searching on the internet in the meantime.
The following worked in the end; it was much, much easier than I thought it would be. Now I am a big fan of MATLAB; I thought this could take days to do.
Just pasting it here in case it is useful to someone else. I'm sure there is a more elegant way to write the code; however, this is the first time I've used MATLAB in 20 years :p Use at your own risk; I haven't double-checked that I'm getting the correct results yet (though I will before I use it for anything important).
Edit: since writing this I've found that the output .txt file of temperatures actually contains sensor temperatures, which need to be corrected for emissivity and background temperature to obtain the target temperatures. (One way to do this is to use the software which comes free with the camera to create new output .csv files of temperatures and use those instead.)
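If you take the .csv route, reading the corrected file back in is just as short; a one-line sketch, assuming the exported file is a plain comma-separated grid of numbers (the file name is hypothetical):
M = readmatrix('20190321_064039_corrected_temps.csv'); % one corrected temperature per pixel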
Thanks to bla who put me on the right track with dlmread.
M = dlmread('C:\18. Bats\20190321_064039\20190321_064039_temps.txt'); % read in the text file as a matrix (call it M)
% note that the file seems to be a list of temperature values for each pixel
% e.g. 1934 1935 1935 1960 2000 2199...
M = rot90(M, 1); % rotate M anti-clockwise by 1*90 degrees (all the pictures were saved sideways for some reason, so rotate for easier viewing)
a = min(M(:)); % find the minimum temperature in the image
b = max(M(:)); % find the maximum temperature in the image
M = imresize(M, 1.64); % resize the image to fit the computer screen (note: the result must be assigned back to M, otherwise the call has no effect)
imshow(M, [a b]); % show the image, scaling the greys so that white is the highest temperature in the image (b) and black is the lowest (a)
h = drawpolygon('FaceAlpha', 0); % let the user draw a polygon around the region of interest (ROI)
% (this stops the code until the polygon is drawn)
maskOfROI = h.createMask(); % binary mask: pixels inside the polygon (ROI) are 1, pixels outside are 0
selectedValues = M(maskOfROI); % Now get the image values for all pixels where the mask value is '1' (i.e. all pixels within the polygon) and call this selectedValues.
averageTemperature = mean(selectedValues); % Get the mean of selectedValues (i.e. mean of the temperatures inside the polygon area)
maxTemperature = max(selectedValues); % Get the max of selectedValues
minTemperature = min(selectedValues); % Get the min of selectedValues
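Since the text file stores temperatures multiplied by 100 (a minimum of 15 deg appears as 1500, as noted in the question), converting the statistics back to degrees is one more step; a small sketch based on that observed scaling:
% convert from the file's fixed-point encoding (value = degrees * 100)
averageTemperature = averageTemperature / 100;
maxTemperature = maxTemperature / 100;
minTemperature = minTemperature / 100;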

Image normalization of an image with large dynamic range

I have an image of size 144*2209 and its dynamic range is large (from -1108 to 984).
I want to display this image, and for that the range needs to be brought into 0 to 255, which means I need to normalize the image.
Here lies the problem: when such a large dynamic range is compressed, the pixel values after normalization become very close to each other, so the image does not show up as it should.
What can be done to resolve this problem?
Here is the link to the IMAGE.
You can use a linear transform to change the dynamic range of the original image, but be aware that you will be modifying the information in the image.
To do so, for an 8-bit range in MATLAB, just use the following snippet:
img = double(img); % work in double precision (img is the original image)
bins = pow2(8); % = 256 grey levels for an 8-bit range
min_img = min(img(:)); max_img = max(img(:)); % data range of the original image
lin_eq_img = round((bins - 1) * (img - min_img) / (max_img - min_img));
But it will slightly affect the image:
Just a few remarks:
even though your image's dynamic range exceeds 8-bit depth, it is not considered 'a large dynamic range'
depending on what you want to do and how, you might want to consider applying the previous linear transform over a 16-bit range in order to avoid losing details (by "squeezing" the pixel value distribution); see the sketch below
you cannot map your pixel intensities into the range given by 8-bit depth per pixel and claim that you only modified how your image is displayed: this is not a lossless operation!
if you know what you want to enhance, there are plenty of non-linear transformations which could be helpful
Edit :
The 16-bit equalisation version will "keep your shades clearer" (no loss of details) but, of course, the rendered image will take more space. Here is a comparison:
I would strongly recommend performing the normalization over a 2^16 intensity value span, in order to avoid loss of detail.
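A minimal sketch of the 16-bit variant mentioned above (the same linear transform, only the number of bins changes; img, min_img and max_img are assumed from the previous snippet):
bins16 = pow2(16); % = 65536 grey levels for a 16-bit range
lin_eq_img16 = uint16(round((bins16 - 1) * (img - min_img) / (max_img - min_img)));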

Image segmentation algorithm in MATLAB

I need to implement an image segmentation function in MATLAB based on the principles of the connected components algorithm, but with a few modifications. This is intended for very simple, 2D images, with a background color and some objects in different colors.
The idea is that, taking the image as a matrix, I provide a tool to select the background color (it will vary from image to image). Then, once the background color value is selected, I have to segment all the objects in the image, and the result should be a labeled matrix, of the same size as the image, with 0's for the background and a different number for each object.
This is a graphic example of what I mean:
I understand the idea of how to do it, but I do not know how to implement it on MATLAB. For each pixel (matrix position) I should mark it as visited and then if the value corresponds to the one of the background, assign 0, if not, assign another value. The objects can be formed by different colors, so in the end, I need to segment groups of adjacent pixels, whatever their color is. Also I have to use 8-connectivity, in order to count the green object of the example image as only one object and not 4 different ones. And also, the objects should be counted from top to bottom, and from left to right.
Is there a simple way of doing this in MATLAB? I know the bwlabel function, but it works for binary images only, so I'd like to adapt it to my case.
Once you know the background color, you can easily convert your image into a binary mask of the same size:
bw = img ~= bg_color; % MATLAB's 'not equal' operator is ~=, not !=
Once you have a binary mask you can call bwlabel with the 8-connectivity argument, as you suggested yourself.
Note: you might want to convert your color image from RGB representation to an indexed image using rgb2ind before processing.
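Putting it together, a minimal sketch (assuming rgbImg is the RGB input and (r0, c0) is the pixel the user selected as background):
[idxImg, cmap] = rgb2ind(rgbImg, 256); % convert RGB to an indexed (single-channel) image
bg_color = idxImg(r0, c0); % background color taken from the selected pixel
bw = idxImg ~= bg_color; % binary mask: true for every non-background pixel
L = bwlabel(bw, 8); % labeled matrix: 0 = background, 1..N = objects (8-connectivity)
One caveat: bwlabel assigns label numbers in column-major scan order, so if the strict top-to-bottom, left-to-right counting order matters, you may need to renumber the labels afterwards.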

Interpolation between two images with different pixel sizes

For my application, I want to interpolate between two images (CT to PET).
Therefore I map between them like this:
[X,Y,Z] = ndgrid(linspace(1,size(imagedata_ct,1),size_pet(1)),...
linspace(1,size(imagedata_ct,2),size_pet(2)),...
linspace(1,size(imagedata_ct,3),size_pet(3)));
new_imageData_CT=interp3(imagedata_ct,X,Y,Z,'nearest',-1024);
The size of my new image new_imageData_CT matches the PET image. The problem is that the data in my new image is not correctly scaled, so it appears compressed. I think the reason is that the voxel sizes of the two images are different and are not involved in the interpolation. For example:
CT image size: 512 x 512 x 1027
CT voxel size [mm]: 1.5 x 1.5 x 0.6
PET image size: 192 x 126 x 128
PET voxel size [mm]: 2.6 x 2.6 x 3.12
So how can I take the voxel size into account in the interpolation?
You need to perform a matching in the patient coordinate system, but there is more to consider than just the resolution and the voxel size. You need to synchronize the positions (and maybe the orientations also, but this is unlikely) of the two volumes.
You may find this thread helpful to find out which DICOM Tags describe the volume and how to calculate transformation matrices to use for transforming between the patient (x, y, z in millimeters) and volume (x, y, z in column, row, slice number).
You have to make sure that the volume positions are comparable, as the positions of the slices in the CT and PET do not necessarily refer to the same origin. The easy way to do this is to compare the DICOM attribute Frame of Reference UID (0020,0052) of the CT and PET slices. For all slices that share the same Frame of Reference UID, the position of the slice in the DICOM header refers to the same origin.
If the datasets do not contain this tag, it is going to be much more difficult, unless you simply take it as an assumption. There are methods to deduce the matching slices of two different volumes from the contents of the pixel data, referred to as "registration", but this is a science of its own. See the link from Hugues Fontenelle.
BTW: in your example, you are not going to find a matching voxel in both volumes for every position, as the volumes cover different physical extents. E.g. for the x-direction:
CT: 512 * 1.5 = 768 millimeters
PET: 192 * 2.6 = 499 millimeters
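Once the origins are verified to match, a voxel-size-aware version of the snippet from the question could look like the sketch below (assuming both volumes share the same origin and orientation, with the sizes taken from the question; interpn is used instead of interp3 because it pairs naturally with ndgrid-style coordinates):
ct_voxel = [1.5 1.5 0.6]; % CT voxel size [mm]
pet_voxel = [2.6 2.6 3.12]; % PET voxel size [mm]
pet_size = [192 126 128]; % PET image size
% physical coordinates (mm) of the CT samples along each axis
x = (0:size(imagedata_ct,1)-1) * ct_voxel(1);
y = (0:size(imagedata_ct,2)-1) * ct_voxel(2);
z = (0:size(imagedata_ct,3)-1) * ct_voxel(3);
% physical coordinates (mm) of the PET sample points
[Xq,Yq,Zq] = ndgrid((0:pet_size(1)-1) * pet_voxel(1), ...
                    (0:pet_size(2)-1) * pet_voxel(2), ...
                    (0:pet_size(3)-1) * pet_voxel(3));
% resample the CT onto the PET grid; points outside the CT become -1024 (air)
new_imageData_CT = interpn(x, y, z, double(imagedata_ct), Xq, Yq, Zq, 'nearest', -1024);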
I'll let someone else answer the question, but I think that you're asking the wrong one. I lack context of course, but at first glance MATLAB isn't the right tool for the job.
Have a look at ITK (C++ library with python wrappers), and the "Multi-modal 3D image registration" article.
Try 3DSlicer (it has a GUI for the previous tool)
Try FreeSurfer (similar, focused on brain scans)
After you've done that registration step, you could export the resulting images (now of identical size and spacing), and continue with your interpolation in Matlab if you wish (or with the same tools).
There is a module in Slicer called PETCTFUSION which aligns the PET scan to the CT image.
You can install it in the new version of Slicer.
In the module's Display panel shown below, options to select a colorizing scheme for the PET dataset are provided:
Grey will provide white-to-black colorization, with black indicating the highest count values.
Heat will provide a warm color scale, with dark red the lowest and white the highest count values.
Spectrum will provide a color scale that goes from cool (dark blue) on the low-count end to white at the highest.
This panel also provides a means to adjust the window and level of both PET and CT volumes.
I normally use the resampleinplace tool after the registration. You can find it in the package Registration and then Resample Image.
Look at the screenshot here:
If you would like to know more about the PETCTFUSION, there is a link below:
https://www.slicer.org/wiki/Modules:PETCTFusion-Documentation-3.6
Since Slicer is compatible with Python, you can use the Python Interactor to run your own code too.
And let me know if you face any problems.

MATLAB: display RGB values of a fits image

I want to read a .fits image of a wide-field sky and display the RGB values contained in a star. Can you please suggest a method to do so?
I have used fitsread to read in the image, but I am not able to show the RGB values for specific locations (stars).
In order to do this, you'll need a proper RGB FITS file. The only .fits viewer I know of, ds9, does not support saving RGB FITS files; it saves them as three separate (R, G, B) FITS images. You can use "getpix" from wcstools (http://tdc-www.harvard.edu/wcstools/) or scisoft (http://www.eso.org/sci/software/scisoft/) on the individual frames. Note that "getpix" returns the pixel value given an image (x, y) location. ds9 does not provide the physical image location, but rather the WCS coordinates, so you may have to convert to image coordinates before calling getpix.
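If you stay in MATLAB, sampling the three separate channel frames at a star's pixel location is straightforward; here is a minimal sketch (the file names and coordinates are hypothetical, assuming the channels were exported from ds9 as separate FITS files as described above):
r = fitsread('sky_r.fits'); % red channel frame
g = fitsread('sky_g.fits'); % green channel frame
b = fitsread('sky_b.fits'); % blue channel frame
x = 100; y = 200; % image (x, y) location of the star (hypothetical values)
rgb = [r(y, x), g(y, x), b(y, x)] % RGB values at that location (row = y, column = x)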