How does Phil Sallee's jpeg toolbox divide images in the coef_arrays?

How does the toolbox divide images into blocks in the coef_arrays?
I have a 225x225 image.
But the coef_arrays gave three 232x232 double arrays.
With a 160x100 image, I get one 104x168 double array and two 56x88 double arrays.
Why am I getting array dimensions larger than the image size?
Shouldn't the arrays total 225x225 or 160x100, no matter how many blocks the image is divided into?
Say, splitting 225x225 into 20x20 blocks would give
eleven 20x20 arrays and one 5x5 remainder along each dimension.

JPEG compresses in blocks of 8x8. If you have an image whose size is not a multiple of 8, the encoder has to expand the image to a multiple of 8.
So 225x225 gets expanded into 232x232 with one array for each color component.
For the 160x100 image you are apparently using 2:1 subsampling of the Cb and Cr components.
I am not sure why you are getting the sizes you are getting. 160x100 could go to 160x104 (not 168x104), and the subsampled 80x50 could go to 80x56.
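As a quick sanity check, the padded sizes can be computed by rounding each dimension up to the next multiple of 8. A minimal sketch (with 2:1 subsampling, the chroma dimensions are halved before rounding):
pad8 = @(n) 8*ceil(n/8);   % round up to the next multiple of 8 (one JPEG block)
pad8(225)                  % ans = 232, matching the 232x232 coefficient arrays
pad8(100)                  % ans = 104
pad8(ceil(100/2))          % ans = 56, for a 2:1-subsampled chroma dimension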

Related

Image segmentation algorithm in MATLAB

I need to implement an image segmentation function in MATLAB based on the principles of the connected components algorithm, but with a few modifications. This is intended for very simple, 2D images, with a background color and some objects in different colors.
The idea is that, taking the image as a matrix, I provide a tool to select the background color (it will vary for every image). Then, when the value of the color of the background of the image is selected, I have to segment all the objects in the image, and the result should be a labeled matrix, of the same size of the image, with 0's for the background, and a different number for each object.
This is a graphic example of what I mean:
I understand the idea of how to do it, but I do not know how to implement it in MATLAB. For each pixel (matrix position) I should mark it as visited, and then assign 0 if its value corresponds to the background, or another value if not. The objects can be formed by different colors, so in the end I need to segment groups of adjacent pixels, whatever their color is. I also have to use 8-connectivity, in order to count the green object of the example image as one object rather than 4 different ones. And the objects should be counted from top to bottom, and from left to right.
Is there a simple way of doing this in MATLAB? I know the bwlabel function, but it works for binary images only, so I'd like to adapt it to my case.
Once you know the background color, you can easily convert your image into a binary mask of the same size:
bw = img ~= bg_color;  % MATLAB's not-equal operator is ~=, not !=
Once you have a binary mask you can call bwlabel with the 8-connectivity argument, as you suggested yourself.
Note: you might want to convert your color image from RGB representation to an indexed image using rgb2ind before processing.
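A minimal sketch of the whole pipeline (the input file name and the background pick are hypothetical placeholders):
[idx, map] = rgb2ind(imread('objects.png'), 16); % hypothetical input; indexed image with 16 colors
bg_color = idx(1,1);                             % e.g. take the background from a corner pixel
bw = idx ~= bg_color;                            % binary mask: true where a pixel differs from background
[L, num] = bwlabel(bw, 8);                       % labeled matrix with 8-connectivity; 0 = background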

Using dicomwrite with color images

I am trying to write a sequence of color images to a DICOM file in MATLAB. Each image is of type uint16. The sequence is stored in a 4D matrix named output of size 200x360x3x360 (number of rows x number of columns x number of channels x number of images). When I execute dicomwrite(output,'outputfile.dcm'), it gives an error saying the data bit depth is 8, but I've ensured that each image is 16-bit. Not sure what's going wrong.
The documentation for dicomwrite says it can write color images as well. In fact, dicomread can read color DICOM images such that the matrix which stores the read data has size 200x360x3x360, so I guess it should be possible to write color images using dicomwrite too. Any help in this regard is appreciated. There is a related post, but it doesn't talk about a color image sequence.
The comment by JohnnyQ is correct.
In the DICOM standard, section A.8.5.4 lists the Multi-frame True Color SC Image IOD Content Constraints (partial quote):
In the Image Pixel Module, the following constraints apply:
Samples per Pixel (0028,0002) shall be 3
Bits Allocated (0028,0100) shall be 8
Bits Stored (0028,0101) shall be 8
High Bit (0028,0102) shall be 7
Pixel Representation (0028,0103) shall be 0
It seems MATLAB will not do the conversion for you, so you should down-convert each 16-bit color channel to 8-bit before writing the DICOM file.
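A minimal sketch of that conversion, assuming output is the 200x360x3x360 uint16 array from the question (im2uint8 rescales the full uint16 range down to uint8):
output8 = im2uint8(output);            % rescale each 16-bit sample to 8 bits
dicomwrite(output8, 'outputfile.dcm'); % Bits Allocated/Stored are now 8, as required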

Dicom: Matlab versus ImageJ grey level

I am processing a group of DICOM images using both ImageJ and Matlab.
In order to do the processing, I need to find spots that have grey levels between 110 and 120 in an 8-bit-depth version of the image.
The thing is: the images that MATLAB and ImageJ show me are different, despite using the same source file.
I assume that one of them is performing some sort of conversion in the grey levels of it when reading or before displaying. But which one of them?
And in that case, how can I calibrate so that they display the same image?
The following image shows a comparison of the image as read in each case.
In the case of ImageJ, I just opened the application and then opened the DICOM image.
In the second case, I used the following MATLAB script:
[image] = dicomread('I1400001');
figure (1)
imshow(image,[]);
title('Original DICOM image');
So which one is changing the original image, and if that's the case, how can I modify it so that both versions look the same?
It appears that by default ImageJ uses the Window Center and Window Width tags in the DICOM header to perform window and level contrast adjustment on the raw pixel data before displaying it, whereas the MATLAB code is using the full range of data for the display. Taken from the ImageJ User's Guide:
16 Display Range of DICOM Images: With DICOM images, ImageJ sets the initial display range based on the Window Center (0028,1050) and Window Width (0028,1051) tags. Click Reset on the W&L or B&C window and the display range will be set to the minimum and maximum pixel values.
So, setting ImageJ to use the full range of pixel values should give you an image to match the one displayed in MATLAB. Alternatively, you could use dicominfo in MATLAB to get those two tag values from the header, then apply window/leveling to the data before displaying it. Your code will probably look something like this (using the standard DICOM window/level formula):
img = dicomread('I1400001');
imgInfo = dicominfo('I1400001');
c = double(imgInfo.WindowCenter);
w = double(imgInfo.WindowWidth);
imgScaled = 255.*((double(img)-(c-0.5))/(w-1)+0.5); % Rescale the data
imgScaled = uint8(min(max(imgScaled, 0), 255)); % Clip the edges
Note that 1) double is used to convert to double precision to avoid integer arithmetic, 2) the data is assumed to be unsigned 8-bit integers (which is what the result is converted back to), and 3) I didn't use the variable name image because there is already a function with that name. ;)
A normalized CT image (e.g. after the modality LUT transformation) has intensity values ranging from -1024 to over 2000 Hounsfield units (HU). So, an image processing filter should work within this image data range. On the other hand, an RGB display driver can only display 256 shades of gray. To overcome this limitation, most typical medical viewers apply window leveling to create a view of the image where the anatomy of interest has the proper contrast on the RGB display (mapping the image data of interest to 256 or fewer shades of gray). One way to define the window level settings is to use the Window Center (0028,1050) and Window Width (0028,1051) tags. Also, a single CT image can have multiple window level values, and each pair is basically a view of the anatomy of interest. So using view data for image processing, instead of the actual image data, may not produce consistent results.
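For the modality LUT step mentioned above, a minimal sketch, assuming the header carries the RescaleSlope (0028,1053) and RescaleIntercept (0028,1052) tags, as CT images typically do:
info = dicominfo('I1400001');
raw  = double(dicomread('I1400001'));
hu   = raw*info.RescaleSlope + info.RescaleIntercept; % map stored values to Hounsfield units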

Matlab extractHOG function

After taking the HOG of a whole image and the HOG of a 32x32 section of it, the features at the same location in the whole-image HOG and the 32x32 HOG are not the same. Is there a way to set the parameters so that I can make the two HOGs equal?
To make the HOG feature length the same for images of different sizes, you need to resize them to the same size first. Resize them to 32x32, in your example.
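A minimal sketch, assuming the Computer Vision Toolbox's extractHOGFeatures (the input file is a placeholder):
img   = imread('cameraman.tif');                          % placeholder image
patch = img(1:32, 1:32);                                  % a 32x32 section
whole = imresize(img, [32 32]);                           % whole image resized to 32x32
hogPatch = extractHOGFeatures(patch, 'CellSize', [8 8]);  % same parameters for both calls
hogWhole = extractHOGFeatures(whole, 'CellSize', [8 8]);
% The two feature vectors now have the same length; their values will still
% differ, since the resized image and the cropped patch contain different pixels.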

MATLAB: How do I resize (connected) components in a 3D binary image sequence without changing the dimensions of the sequence?

I'd like to resize the components contained in a 3D binary image sequence without changing any of the dimensions of the sequence itself.
I'm not sure if I need to do it on a component-by-component basis, if yes, then how do I create a transform such that the resized components are re-positioned 'correctly' in the image sequence? By 'correctly', I mean with the same centre of mass as the original unprocessed components.
(If that last paragraph doesn't make sense then please ignore)
A 2D example: suppose I wanted to enlarge the white blobs in the following [295x445] image by 10%.
How would you do this without making the image itself larger?
You could use the imdilate function to dilate the regions of interest. The examples in its documentation show how to use this function.
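A minimal sketch (the input file is a placeholder; note that dilation grows blobs by a fixed radius rather than by an exact percentage):
bw = imread('blobs.png') > 0;  % hypothetical 295x445 binary image
se = strel('disk', 5);         % structuring element; the radius controls the growth
big = imdilate(bw, se);        % blobs enlarge while the image stays 295x445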