Using dicomwrite with color images - MATLAB

I am trying to write a sequence of color images to a DICOM file in MATLAB. Each image is of type uint16. The sequence is stored in a 4-D matrix named output of size 200x360x3x360 (number of rows x number of columns x number of channels x number of images). When I execute dicomwrite(output,'outputfile.dcm'), it gives an error saying the data bit depth is 8, but I've ensured that each image is 16-bit, so I'm not sure what's going wrong.
The documentation for dicomwrite says it can write color images as well. In fact, dicomread can read color DICOM images such that the matrix storing the read data has size 200x360x3x360, so I would guess it should be possible to write color images with dicomwrite too. Any help in this regard is appreciated. There is a related post, but it doesn't talk about color image sequences.

The comment by JohnnyQ is correct.
From this page, down in section A.8.5.4, the DICOM standard lists the Multi-frame True Color SC Image IOD Content Constraints (partial quote):
In the Image Pixel Module, the following constraints apply:
Samples per Pixel (0028,0002) shall be 3
Bits Allocated (0028,0100) shall be 8
Bits Stored (0028,0101) shall be 8
High Bit (0028,0102) shall be 7
Pixel Representation (0028,0103) shall be 0
It seems MATLAB will not do the conversion for you, so you should down-convert each 16-bit color channel to 8-bit before writing the DICOM file.
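A minimal sketch of that down-conversion (assuming output is the 200x360x3x360 uint16 array from the question; dropping the low byte of each sample is the simplest scaling):
% Down-convert each 16-bit sample to 8 bits by keeping the high byte,
% then write the result as a multi-frame true-color DICOM file.
output8 = uint8(bitshift(output, -8));  % uint16 [0,65535] -> uint8 [0,255]
dicomwrite(output8, 'outputfile.dcm');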

Decoding Keyence LJ-X8000 Bitmap-Height Image

I have a Keyence line laser system LJ-X8000 that I use to scan the surface of different objects.
The controller saves the height information as a bitmap, with each pixel representing one height value. After a lot of tinkering, I found out that Keyence is not using the actual colors, but rather is using the 24-bit RGB triplets as a form of binary storage. However, no combination of these bytes seemed to work for me. Are there any common storage methods for 24-bit integers?
To decode those values, I did a scan covering the whole measurement range of the scanner, including some out-of-range values at the beginning and the end. If you look at the distribution of the values in each color plane, you can see that the first and third planes only use values up to 8 and 16, which means only 3 and 4 bits respectively. This is also visible in the image itself, as it mainly shows a green color.
I concluded that Keyence uses the full byte of the green color plane, 3 bits of the first plane, and 4 bits of the last plane to store the height information. Keyence seems to have chosen some unusual 15-bit integer format to store their data.
With a little bit-shifting, and knowing that the scanner has a valid range of [-2.2, 2.2], I was able to build the following simple little (MATLAB) script to calculate the height information for each pixel:
scanIm16 = uint16(scanIm);                       % cast first so the shifts don't overflow uint8
heightValBin = bitshift(scanIm16(:,:,2), 7) ...  % green: full byte -> bits 7..14
             + bitshift(scanIm16(:,:,1), 4) ...  % red: 3 bits -> bits 4..6
             + scanIm16(:,:,3);                  % blue: 4 bits -> bits 0..3
scanBinValScaled = interp1([0, 2^15], [-2.2, 2.2], double(heightValBin));
Keyence offers software to convert those .bmp files into .csv files, but without an API to automate the process. As I will have to deal with a lot of these files, I needed to automate this process.
The values calculated from the RGB triplets are actually even more precise than the exported CSV, as the CSV only shows 4 digits after the decimal point.
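For the batch conversion itself, a sketch along these lines should work (decodeKeyenceHeight is a hypothetical wrapper around the script above, and the folder name scans is an assumption):
% Batch-convert every Keyence height bitmap in a folder to CSV.
files = dir(fullfile('scans', '*.bmp'));
for k = 1:numel(files)
    scanIm  = imread(fullfile(files(k).folder, files(k).name));
    heights = decodeKeyenceHeight(scanIm);  % hypothetical wrapper around the decoding above
    writematrix(heights, fullfile(files(k).folder, ...
        replace(files(k).name, '.bmp', '.csv')));
end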

Find the size of 1 pixel in my CMOS camera

I have a small problem with finding the pixel size of an image. I need to find the size of nano- and micro-particles in my black-and-white image. I used regionprops to get the area, and from that the diameter, so now I know the value in pixels. How do I convert it to the micro- or nanometer scale? Do I take into account the sensor pixel size (6.5 µm x 6.5 µm) of my camera?
I use MATLAB for image processing.
Thank you
There is a function called imfinfo which will return a struct. In this struct you may find three fields (it depends on the coder that was used for the image format) called XResolution, YResolution and ResolutionUnit. Using these 3 fields you can easily get the pixel size; for example, if XResolution=10, YResolution=10 and ResolutionUnit='meter', then each pixel is 10 cm x 10 cm = 100 cm² (a bit unrealistic, I know :))
I hope this helps and that your image file contains the XResolution and YResolution information in its header.
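A minimal sketch of that lookup (the file name particles.tif is a placeholder, and not every format or coder writes these fields):
% Read resolution metadata from the image header, if present.
info = imfinfo('particles.tif');  % placeholder file name
if isfield(info, 'XResolution') && ~isempty(info.XResolution)
    pxWidth  = 1 / info.XResolution;   % one pixel, in units of ResolutionUnit
    pxHeight = 1 / info.YResolution;
    fprintf('Pixel size: %g x %g (%s)\n', pxWidth, pxHeight, info.ResolutionUnit);
end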

How does Phil Sallee's jpeg toolbox divide images in the coef_arrays?

How does the toolbox divide images into blocks in the coef_arrays?
I have a 225x225 image.
But the coef_arrays gave me three 232x232 double arrays.
With a 160x100 image, I get one 104x168 double array and two 56x88 double arrays.
Why am I getting array dimensions larger than the image size?
Am I not supposed to get a total of 225x225 or 160x100, no matter how many blocks the image is divided into?
Say, dividing 225x225 into 20x20 blocks would give 11 20x20 arrays and 1 5x5 array in each dimension.
JPEG compresses in blocks of 8x8. If you have an image whose size is not a multiple of 8, the encoder has to expand the image to a multiple of 8.
So 225x225 gets expanded into 232x232 with one array for each color component.
For the 160x100 image, you are apparently using 2:1 subsampling of the Cb and Cr components.
I am not sure why you are getting the sizes you are getting: 160x100 could go to 160x104 (not 168x104), and the 80x50 chroma planes could go to 80x56.
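A quick way to sanity-check the padded sizes in MATLAB is just to round each dimension up to the block size (8, or 16 in a 2:1-subsampled direction):
% Round image dimensions up to the JPEG block size.
padTo = @(n, b) ceil(n / b) * b;   % round n up to a multiple of b
padTo(225, 8)   % -> 232, matching the 232x232 coefficient arrays
padTo(100, 8)   % -> 104
padTo(50, 8)    % -> 56, the padded chroma height under 2:1 subsampling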

DICOM: MATLAB versus ImageJ grey level

I am processing a group of DICOM images using both ImageJ and MATLAB.
In order to do the processing, I need to find spots that have grey levels between 110 and 120 in an 8-bit-depth version of the image.
The thing is: the images that MATLAB and ImageJ show me are different, using the same source file.
I assume that one of them is performing some sort of conversion of the grey levels when reading or before displaying. But which one of them?
And in that case, how can I calibrate so that they display the same image?
The following image shows a comparison of the images as read.
In the case of ImageJ, I just opened the application and opened the DICOM image.
In the second case, I used the following MATLAB script:
[image] = dicomread('I1400001');
figure (1)
imshow(image,[]);
title('Original DICOM image');
So which one is changing the original image, and in that case, how can I modify things so that both versions look the same?
It appears that by default ImageJ uses the Window Center and Window Width tags in the DICOM header to perform window and level contrast adjustment on the raw pixel data before displaying it, whereas the MATLAB code is using the full range of data for the display. Taken from the ImageJ User's Guide:
16 Display Range of DICOM Images
With DICOM images, ImageJ sets the initial display range based on the Window Center (0028, 1050) and Window Width (0028, 1051) tags. Click Reset on the W&L or B&C window and the display range will be set to the minimum and maximum pixel values.
So, setting ImageJ to use the full range of pixel values should give you an image to match the one displayed in MATLAB. Alternatively, you could use dicominfo in MATLAB to get those two tag values from the header, then apply window/leveling to the data before displaying it. Your code will probably look something like this (using the formula from the first link above):
img = dicomread('I1400001');
imgInfo = dicominfo('I1400001');
c = double(imgInfo.WindowCenter);
w = double(imgInfo.WindowWidth);
imgScaled = 255.*((double(img)-(c-0.5))/(w-1)+0.5); % Rescale the data
imgScaled = uint8(min(max(imgScaled, 0), 255)); % Clip the edges
Note that 1) double is used to convert to double precision to avoid integer arithmetic, 2) the data is assumed to be unsigned 8-bit integers (which is what the result is converted back to), and 3) I didn't use the variable name image because there is already a function with that name. ;)
A normalized CT image (e.g. after the Modality LUT transformation) will have intensity values ranging from -1024 to over 2000 in Hounsfield units (HU), so an image processing filter should work within this image data range. On the other hand, an RGB display driver can only display 256 shades of gray. To overcome this limitation, most typical medical viewers apply window leveling to create a view of the image where the anatomy of interest has the proper contrast (mapping the image data of interest to 256 or fewer shades of gray). One of the ways to define the window level settings is to use the Window Center (0028,1050) and Window Width (0028,1051) tags. Also, a single CT image can have multiple window level values, and each pair is basically a view of the anatomy of interest. So using view data for image processing, instead of the actual image data, may not produce consistent results.
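For completeness, a hedged sketch of applying the Modality LUT in MATLAB to get Hounsfield units before any window/level step (this assumes the header actually carries the standard Rescale tags, which not every file does):
% Apply the Modality LUT (rescale slope/intercept) to obtain HU values.
info = dicominfo('I1400001');
raw  = double(dicomread('I1400001'));
hu   = raw * double(info.RescaleSlope) + double(info.RescaleIntercept);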

Trying to understand how a 1-bit BMP image is drawn

As can be seen in this example, each channel (R, G, B) in a BMP file takes an input. A 24-bit BMP image has 8 bits for R, 8 bits for G, and 8 bits for B. I saved an image in MS Paint as monochrome (black and white); its properties say the image's depth is 1 bit. The question is: which channel gets this 1 bit: R, G or B? Is it not mandatory that all three channels get some value? I am not able to understand how MS Paint has drawn this BMP image using 1 bit.
Thanks in advance for your replies.
There are multiple ways to store a bitmap. In this case, the important distinction is RGB versus indexed.
In an RGB bitmap, every pixel is associated with three separate values, one for red, another for green, and another for blue. Depending on the "bitness" (bit depth) and the specific pixel format, the different colour channels can have different amounts of bits allocated for them - the simplest case is typical true-color with 8 bits for each of the channels, and another 8 bits (optional) for the alpha channel. However, some pixel formats allocate the bits differently - the idea is that the human eye has different sensitivity to each of those channels, and you can save space and improve visual quality by allocating the bits in a smarter way. For example, one of the more popular pixel formats is BGR-565 - that is, 16 bits total, 5 bits for blue, 6 bits for green and 5 bits for red.
In an indexed bitmap, the value stored with each of the pixels is an index (hence "indexed bitmap") into a palette (a colour table). The palette is usually a simple table of colours, using RGB "pixel" formats to assign each index some specific colour. For example, index 0 might mean black, 1 might mean turquoise, etc.
In this case, the bit depth doesn't exactly map to colour quality - you're not trying to map the whole colour space, you're focusing on some subset of the possible colours instead. For example, if you have 256 shades of grey (say, from black to white), a true-colour bitmap would need at least three bytes per pixel (and each of those three bytes would have the same value), while you could use an indexed bitmap with a palette of all the grey colours, requiring only one byte per pixel (plus the cost of the palette - 256 * 3 bytes). There are a lot of benefits to using indexed bitmaps, and a lot of tricks to improve the visual quality further without using more bits per pixel, but that would be way beyond the scope of this question.
This also means that you only need as many possible values as you want to show. If you only need 16 different colours, you only need four bits per pixel. If you only need a monochromatic bitmap (that is, either a pixel is "on", or it's "off"), you only need one bit per pixel - and that's exactly your case. If you know the amount of distinct colours you need, you can easily get the required bit depth by taking a base-2 logarithm (e.g. log2(256) = 8).
So let's say you have an image that only uses two colours - black and white. You'll build a palette with two colours, black and white. And for each of the pixels in the bitmap, you either save 0 if it's black, or 1 if it's white.
Now, when you want to draw a bitmap like this, you simply read the palette (0 -> RGB(0, 0, 0), 1 -> RGB(255, 255, 255) in this case), and then you read one pixel after another. If the bit is zero, paint a black pixel. If it's one, paint a white pixel. Done :)
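You can reproduce this from MATLAB as well; a small sketch, assuming imwrite's BMP coder stores logical input as a 1-bit image with a two-entry colour table:
% Write a logical (two-colour) image; index 0 -> black, index 1 -> white.
bw = rand(64, 64) > 0.5;   % "on"/"off" pixels
imwrite(bw, 'mono.bmp');
info = imfinfo('mono.bmp');
info.BitDepth              % -> 1, matching the MS Paint monochrome file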
No, it depends on the type of data you chose to save as. Because you chose to save as monochrome, the RGB mapping is not used here; the mapping used would be one byte per pixel, ranging from white to black.
Each type has its own mapping: saving as 24-bit will give you RGB mapping; saving as 256 colors will map a byte to each pixel, where each value represents a color (you can find the table on the internet); as for monochrome, you'll have the same as a 256-color bitmap, but the color table will only have the white and black colors.
Sorry for the mistake: the way I explained monochrome is actually how grayscale is stored. Monochrome uses one bit per pixel to indicate if the pixel is black or white, depending on the value of each bit; no mapping table is used.