Extracting raw DICOM data from a DICOM file - matlab

I am working on a 3D DICOM file. After it is read (using MATLAB, for example) I see that it contains some text information apart from the actual scan image. I mean text which is visible in the image when I do implay(), not the header text in the DICOM file. Is there any way by which I can load only the raw data without the text? The text is hindering my processing.
EDIT: I cannot share the image I'm working on due to it being proprietary, but I found the following image after googling:
http://www.microsoft.com/casestudies/resources/Images/4000010832/image7.jpeg
Notice how the text on the left side partially overlaps the image? There is a similar effect in the image I'm working on. I need just the conical scan image for processing.

As noted, you need to provide more information, as there are a number of ways the overlay can be added: if it's burned into the image, you're generally out of luck; if it's in the overlay plane module (the 60xx tag group), you can probably just remove those prior to passing into Matlab; if it's stored in the unused high bit (an old but common method), you'll have to use the overlay bit position (60xx,0102) to clear out the data in the pixel data.
For the last one, something like the MATLAB equivalent of this Java code:
int position = object.getInt( Tag.OverlayBitPosition, 0 );
if( position == 0 ) return;

// Remove the overlay data stored in the specified high bit.
int bit = 1 << position;
int[] pixels = object.getInts( Tag.PixelData );

int count = 0;
for( int pix : pixels )
{
    int overlay = pix & bit;
    pixels[ count++ ] = pix - overlay;
}
object.putInts( Tag.PixelData, VR.OW, pixels );
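In MATLAB, a rough equivalent might look like the sketch below. It assumes a single overlay group (6000), that dicominfo exposes the (60xx,0102) tag as OverlayBitPosition, and that the pixel data is an unsigned integer type; 'scan.dcm' is just a placeholder file name.
info = dicominfo('scan.dcm');
img  = dicomread('scan.dcm');
if isfield(info, 'OverlayBitPosition') && info.OverlayBitPosition > 0
    % DICOM bit positions are 0-based, MATLAB's bitset is 1-based
    img = bitset(img, double(info.OverlayBitPosition) + 1, 0);
end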

If you are referring to the text in the blue area at the top of the image, those contents are burned into the image itself.
The only solution to remove that is to apply a mask to this area of the image.
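In MATLAB that can be as simple as zeroing out the affected region before any processing; the row/column range below is purely hypothetical and has to be adapted to where the text actually sits in your image.
img = dicomread('scan.dcm');   % placeholder file name
img(1:60, 1:256) = 0;          % blank a hypothetical text band in the top-left corner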
Be careful, because doing this modifies the original DICOM image. Such modifications are not allowed in some scenarios.

Related

PIL simple image paste - image changing color

I'm trying to paste an image onto another, using:
from PIL import Image as Img

original = Img.open('original.gif')
tile_img = Img.open('tile_image.jpg')
area = 0, 0, 300, 300
original.paste(tile_img, area)
original.show()
This works except the pasted image changes color to grey.
Image before:
Image after:
Is there a simple way to retain the same pasted image color? I've tried reading the other questions and the documentation, but I can't find any explanation of how to do this.
Many thanks
I believe all GIF images are palettised - that is, rather than containing an RGB triplet at each location, they contain an index into a palette of RGB triplets. This saves space and improves download speed - at the expense of only allowing 256 unique colours per image.
If you want to treat a GIF (or palettised PNG file) as RGB, you need to ensure you convert it to RGB on opening, otherwise you will be working with palette indices rather than RGB triplets.
Try changing the first line to:
original = Img.open('original.gif').convert('RGB')

Remove some top/bottom rows and left/right columns of a JPG image border using MATLAB

I have RGB museum JPG images. Most of them have image footnotes on one or more sides, and I'd like to remove them. I have been doing that manually using paint software. Now I have applied the following MATLAB code to remove the image footnotes automatically. I get a good result for some images, but for others it does not remove any border. Please, can anyone help me update this code so that it works for all images?
rgbIm = im2double(imread('A3.JPG'));
hsv = rgb2hsv(rgbIm);
m = hsv(:,:,2);
foreground = m > 0.06; % value of background
foreground = bwareaopen(foreground, 1000); % or whatever.
labeledImage = bwlabel(foreground);
measurements = regionprops(labeledImage, 'BoundingBox');
ww = measurements.BoundingBox;
croppedImage = imcrop(rgbIm, ww);
In order to remove the boundaries you could use imclearborder, which finds labelled components that touch the image border and clears them. Caution: if the ROI itself touches the border, it may be removed as well. To avoid that, you can apply imerode with a suitable strel (a line or disc) before clearing the borders. How accurate the method is, and how well it generalizes to all images, depends entirely on the threshold that separates the foreground from the background.
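A rough sketch of that idea, building on the code in the question (the 0.06 threshold is still just a guess, and the disc size will need tuning):
rgbIm = im2double(imread('A3.JPG'));
hsv  = rgb2hsv(rgbIm);
mask = hsv(:,:,2) > 0.06;                 % foreground by saturation
mask = imerode(mask, strel('disk', 5));   % shrink so the ROI no longer touches the edge
mask = imclearborder(mask);               % remove components still connected to the border
mask = imdilate(mask, strel('disk', 5));  % restore the eroded margin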
A more generic method would be to extract the properties of the footnotes. For instance, if they are just text, you can remove them fairly easily using edge detection and a morphological opening with a line structuring element along the columns (a basic property used for text detection).
Hope it helps.
I could give you a clear idea or method if you upload the image.

Dicom: Matlab versus ImageJ grey level

I am processing a group of DICOM images using both ImageJ and Matlab.
In order to do the processing, I need to find spots that have grey levels between 110 and 120 in an 8 bit-depth version of the image.
The thing is: the images that MATLAB and ImageJ show me are different, even though they come from the same source file.
I assume that one of them is performing some sort of conversion in the grey levels of it when reading or before displaying. But which one of them?
And in this case, how can I calibrate them so that they display the same image?
The following image shows a comparison of the image read.
In the case of ImageJ, I just opened the application and opened the DICOM image.
In the second case, I used the following MATLAB script:
[image] = dicomread('I1400001');
figure (1)
imshow(image,[]);
title('Original DICOM image');
So which one is changing the original image, and in that case, how can I modify it so that both versions look the same?
It appears that by default ImageJ uses the Window Center and Window Width tags in the DICOM header to perform window and level contrast adjustment on the raw pixel data before displaying it, whereas the MATLAB code is using the full range of data for the display. Taken from the ImageJ User's Guide:
16 Display Range of DICOM Images
With DICOM images, ImageJ sets the initial display range based on the Window Center (0028, 1050) and Window Width (0028, 1051) tags. Click Reset on the W&L or B&C window and the display range will be set to the minimum and maximum pixel values.
So, setting ImageJ to use the full range of pixel values should give you an image to match the one displayed in MATLAB. Alternatively, you could use dicominfo in MATLAB to get those two tag values from the header, then apply window/leveling to the data before displaying it. Your code will probably look something like this (using the formula from the first link above):
img = dicomread('I1400001');
imgInfo = dicominfo('I1400001');
c = double(imgInfo.WindowCenter);
w = double(imgInfo.WindowWidth);
imgScaled = 255.*((double(img)-(c-0.5))/(w-1)+0.5); % Rescale the data
imgScaled = uint8(min(max(imgScaled, 0), 255)); % Clip the edges
Note that 1) double is used to convert to double precision to avoid integer arithmetic, 2) the data is assumed to be unsigned 8-bit integers (which is what the result is converted back to), and 3) I didn't use the variable name image because there is already a function with that name. ;)
A normalized CT image (e.g. after the modality LUT transformation) will have intensity values ranging from -1024 to over 2000 in Hounsfield units (HU). So an image-processing filter should work within this image data range. On the other hand, an RGB display driver can only display 256 shades of grey. To overcome this limitation, most typical medical viewers apply window leveling to create a view of the image where the anatomy of interest has the proper contrast on the RGB display (mapping the image data of interest to 256 or fewer shades of grey). One way to define the window level settings is to use the Window Center (0028,1050) and Window Width (0028,1051) tags. Also, a single CT image can have multiple window level values, and each pair is basically a view of the anatomy of interest. So using view data for image processing, instead of the actual image data, may not produce consistent results.
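If you would rather threshold in actual HU than in the displayed grey levels, a minimal sketch (assuming the file carries the usual rescale tags, which dicominfo exposes as RescaleSlope and RescaleIntercept) is:
info = dicominfo('I1400001');
raw  = double(dicomread('I1400001'));
hu   = raw .* double(info.RescaleSlope) + double(info.RescaleIntercept);  % modality LUT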

Perspective correction of UIImage from Points

I'm working on a app where I'll let the user take a picture e.g of a business card or photograph.
The user will then mark the four corners of the object (which they took a picture of) - like it is seen in a lot of document/image/business card scanning apps:
My question is: how do I crop and fix the perspective according to these four points? I've been searching for days and have looked at several image processing libraries without any luck.
Any one who can point me in the right direction?
From iOS 8 onwards there is a Core Image filter called CIPerspectiveCorrection. All you need to do is pass the image and the four points.
There is also one more filter, CIPerspectiveTransform, available from iOS 6 onwards, which can be used in a similar way (it skews the image).
If this image were loaded in as a texture, it'd be extremely simple to skew it using OpenGL. You'd literally just draw a full-screen quad and use the yellow correction points as the UV coordinate at each point.
I'm not sure if you've tried the Opencv library yet, but it has a very nice way to deskew an image. I've got here a small snippet that takes an array of corners, your four corners for example, and a final size to map it into.
You can read the man page for warpPerspective on the OpenCV site.
cv::Mat deskew(cv::Mat& capturedFrame, cv::Point2f source_points[], cv::Size finalSize)
{
    cv::Point2f dest_points[4];

    // Output of the deskew operation has the same colour space as the source
    // frame, but is scaled to finalSize; this is to reduce blur effects from
    // a scaling component.
    cv::Mat deskewedMat = cv::Mat(finalSize, capturedFrame.type());

    // Deskew to the full corners of the output image
    dest_points[0] = cv::Point2f(0, finalSize.height);                 // lower left
    dest_points[1] = cv::Point2f(0, 0);                                // upper left
    dest_points[2] = cv::Point2f(finalSize.width, 0);                  // upper right
    dest_points[3] = cv::Point2f(finalSize.width, finalSize.height);   // lower right

    // Build the quadrangle "de-skew" transform matrix values
    cv::Mat transform = cv::getPerspectiveTransform( source_points, dest_points );

    // Apply the deskew transform
    cv::warpPerspective( capturedFrame, deskewedMat, transform, finalSize, cv::INTER_CUBIC );

    return deskewedMat;
}
I don't know the exact solution for your case, but there is an approach for a trapezoid: http://www.comp.nus.edu.sg/~tants/tsm/TSM_recipe.html - the idea is to build up the transformation matrix step by step. In theory, you can add a transformation that converts your shape into a trapezoid.
There are also many questions like this one: https://math.stackexchange.com/questions/13404/mapping-irregular-quadrilateral-to-a-rectangle , but I didn't check the solutions.

How to work with images (PNGs) of size 2-4 MB

I am working with images of 2 to 4 MB at a resolution of 1200x1600, performing scaling, translation and rotation operations. I want to add another image on top of that and save the result to the photo album. My app is crashing after I successfully edit one image and save it to Photos. I think it is happening because of the image sizes. I want to maintain 90% of the resolution of the images.
I am releasing some images when I get a memory warning, but it still crashes, as I am working with two images of about 3 MB each, plus a 1200x1600 context, and getting an image from the context at the same time.
Is there any way to compress images and work with it?
I doubt it. Even compressing and decompressing an image without doing anything to it loses information. I suspect that any algorithms to manipulate compressed images would be hopelessly lossy.
Having said that, it may be technically possible. For instance, rotating a Fourier transform also rotates the original image. But practical image compression isn't usually as simple as just computing a Fourier transform.
Alternatively, you could write piecemeal algorithms that chop the image up into bite-sized pieces, transform the pieces and reassemble them afterwards. You might also provide a real-time view of the process by applying the same transform to a smaller version of the full image.
The key will be never to fully decode the entire image into memory at full size.
If you need to display the image, there's no reason to do that at full size -- the display on the iPhone is too small to take advantage of that. For image objects that are for display, decode the image in scaled down form.
For processing, you will need to write custom code that works on a stream of pixels rather than an in-memory array. I don't know if this is available on the iPhone already, but you can write it yourself by writing to the libpng library API directly.
For example, your code right now probably looks something like this (pseudocode):
img = ReadImageFromFile("image.png")
img2 = RotateImage(img, 90)
SaveImage(img2, "image2.png")
The key thing to understand is that in this case, img is not the data in the PNG file (2 MB), but the fully uncompressed image (1200 x 1600 pixels x 3 bytes per pixel ≈ 5.8 MB). RotateImage (or whatever it's called) returns another image of about this same size. If you are scaling up, it's even worse.
You want code that looks more like this (but there might not be any APIs for you to do it -- you might have to write it yourself):
imgPixelGetter = PixelDecoderFromFile("image.png")
imgPixelSaver = OpenImageForAppending("image2.png")

w = imgPixelGetter.Width
h = imgPixelGetter.Height

// set up a 90 degree rotate
imgPixelSaver.Width = h
imgPixelSaver.Height = w

// read each vertical scanline of pixels
for (x = 0; x < w; ++x) {
    pixelRect = imgPixelGetter.ReadRect(x, 0, 1, h) // x, y, w, h
    pixelRect.Rotate(90) // it's now got a width of h and a height of 1
    imgPixelSaver.AppendScanLine(pixelRect)
}
In this algorithm, you never had the entire image in memory at once -- you read it out piece by piece and saved it. You can write similar algorithms for scaling and cropping.
The tradeoff is that it will be slower than just decoding it into memory -- it depends on the image format and the code that's doing the ReadRect(). Unfortunately, PNG is not designed for this kind of access to the pixels.