When I write the image, it appears black - MATLAB

I have a program that returns a grayscale image. But, when I try to write the image, it appears totally black. Why is that? How can I write the image and get the expected result?
Thanks.

First, check the type of your data. You can cast the data with, for example, double() or uint16() (check the help on type conversion).
Here is an example of how to rescale your values to the intensity range of uint16, an unsigned integer type with 65536 possible values. The cast of course reduces the precision of the intensity values.
new_img = uint16((new_img ./ max(new_img(:))) * 65535);
Afterwards you should be able to write the data to your file.
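Putting the two steps together, a minimal sketch (assuming new_img is a double-valued grayscale matrix; the output filename is just a placeholder):

```matlab
% Rescale a double-valued grayscale image to the full uint16 range
% and write it out. 'new_img' and 'out.tif' are hypothetical names.
scaled = new_img ./ max(new_img(:));   % normalize to [0, 1]
img16  = uint16(scaled * 65535);       % map to [0, 65535]
imwrite(img16, 'out.tif');             % TIFF supports 16-bit data
```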

Make sure that your grayscale image is of the right class. Also check the values in the generated image: if they are simply too low, everything will appear black. If you can provide more specific information, a more elaborate answer may be possible.

If you're working with a binary image (before it is converted to gray) and you convert it to grayscale, you suddenly change the pixel range from [0, 1] to [0, 255]. So the value 1, which is pure white in a binary image, is almost black in a grayscale image.
Try this:
img = imread('image_name.jpg');
imshow(img*50)
That will tell you whether the image really is black or whether its pixel values are just too low.

Related

How to save an inverse DCT image

Here is the code:
imshow(idct2(CDCT),[0 255])
i=idct2(CDCT),[0 255];
imwrite(i,'fa.tif');
When I display the image, it works fine. But only a white image with a few black lines is saved (an incorrect image). Please tell me what I am doing wrong. :)
If the image data that you are writing to a file using imwrite is of type double or single (which yours is), then all values are expected to be between 0 and 1. Your values are mostly greater than 1 since your data is all between 0 and 255, which is why the image appears mostly white. You can easily normalize your data using mat2gray prior to calling imwrite.
imwrite(mat2gray(i), 'fa.tif');
Otherwise, if you pass uint8 values to imwrite, the values are expected to be within the range of 0 to 255 (as your data is). So you can simply cast your input data to uint8 prior to saving:
imwrite(uint8(i), 'fa.tif');

How to resize an image that is too big while keeping its original values

I have a grayscale image of size <2559x3105 uint16>. When I try to open this image, I get a warning that it is too big. I have tried the imresize() function to make it smaller, <512x512 uint8>. When I plot the original image and the resized image, the intensity is decreased after resizing. I want to resize the original image without changing its pixel values. Is there any solution?
If you would like your final image to be uint8, I think you would need to first convert the uint16 image to uint8 using im2uint8 -
uint8_image = im2uint8(uint16_image);
Then you may apply imresize on uint8_image.
But, if you don't want your final image to be of uint8 type, you can directly use imresize and it would keep the datatype, i.e. the resized image would be of uint16 type.
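For instance (uint16_image as above; the class check is just illustrative):

```matlab
% imresize preserves the input class, so the result stays uint16.
small16 = imresize(uint16_image, [512 512]);
class(small16)   % 'uint16'
```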
Read the docs and use the nearest neighbor method. That is,
resized = imresize(original, scale, 'nearest')
This will not use interpolated values. The downside is of course that edges might be jagged.
It sounds like your 16-bit image uses linear codes while the resulting 8-bit image needs to be gamma corrected. If this is the case you can use imadjust with a gamma parameter of 1/2.2 to produce the brighter image.
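A one-line sketch of that adjustment (assuming img8 is the converted uint8 image; the empty brackets keep the default input and output ranges):

```matlab
% Apply gamma correction (gamma < 1 brightens) to a uint8 image.
% 'img8' is a hypothetical variable name for the converted image.
brighter = imadjust(img8, [], [], 1/2.2);
```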
Do you get the warning when you display it with imshow? Does it say something like "Image too large to fit on the screen, resizing to xx%"? If so, you can simply ignore the warning. Otherwise, you can set the 'InitialMagnification' parameter of imshow to resize the figure, but not the image itself.

MATLAB: display RGB values of a fits image

I want to read a .fits image of wide field sky and display the RGB values contained in a star. Can you please suggest a method to do so?
I have used fitsread to read in the image, but I am not able to show the RGB values for specific locations (stars).
In order to do this, you'll need a proper RGB FITS file. The only .fits viewer I know of, ds9, does not support saving RGB FITS files, but rather saves them as three separate (R, G, B) FITS images. You can use "getpix" from wcstools (http://tdc-www.harvard.edu/wcstools/) or scisoft (http://www.eso.org/sci/software/scisoft/) on the individual frames. Note that "getpix" returns the pixel value at a given image (x,y) location. ds9 does not report the physical image location, but rather the WCS coordinates, so you may have to convert to image coordinates before calling getpix.
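If you prefer to stay in MATLAB, a rough sketch along these lines (the filenames and the x/y pixel location are hypothetical, and the three frames are assumed to have identical dimensions):

```matlab
% Read the three separate single-band FITS frames and sample one pixel.
R = fitsread('star_r.fits');
G = fitsread('star_g.fits');
B = fitsread('star_b.fits');
x = 120; y = 340;                  % image coordinates of the star
rgb = [R(y, x), G(y, x), B(y, x)]  % note: row index = y, column index = x
```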

How to remove background marking in dicom images

It sounds silly, but it really annoys me. I will describe the problem.
Some DICOM images that come from digital mammography have information about the breast side burned into the images themselves, like RCC, LCC and so on.
Is there any way to remove them, apart from doing it manually?
Is there a field returned by the dicominfo function (a MATLAB function) that relates to this?
Or do I have to write my own algorithm to remove them?
Thank you all.
I doubt there is an automatic way.
If the text is white, check this out:
The example finds the maximum and minimum values of all pixels in the image. The pixels that form the white text characters are set to the maximum pixel value.
max(I(:))
ans =
4080
min(I(:))
ans =
32
To remove these text characters, the example sets all pixels with the maximum value to the minimum value.
Imodified = I;
Imodified(Imodified == 4080) = 32;
Source: http://www.mathworks.de/help/toolbox/images/f13-29508.html
EDIT
Use this technique with extreme caution. See comments.
If you can cope with losing some information, you can also try a sequence of opening and closing operations (morphological image processing) with reconstruction. This will remove information that is smaller than the structuring element.
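A minimal sketch of opening-by-reconstruction (assuming I is the grayscale mammogram; the disk radius is a guess you would tune to the label size):

```matlab
% Remove bright structures smaller than the structuring element while
% preserving the shape of larger ones (opening-by-reconstruction).
se      = strel('disk', 25);         % radius 25 px: hypothetical label size
marker  = imerode(I, se);            % erase small bright objects
cleaned = imreconstruct(marker, I);  % restore the large structures exactly
```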
If you can post/send an example image, maybe I can help you.

Iphonesdk boundries checking for coloring

I'm creating an app where the user already has an image (with different objects) without colors. I have to detect the object and then fill it with the respective color when the user touches that object. How should I do this? Can anyone help me?
I would say that that is non-trivial. I can only give hints since I have not done such an app yet.
First, you need to convert the image into a CGImageRef, for example by calling [uiimage_object CGImage].
Next, you need to convert the CGImageRef into an array of pixel colors. You can follow the tutorial at http://www.fiveminutes.eu/iphone-image-processing/ for sample code. But for your app you need to convert the array into two dimensions based on the image width and height.
Then, use the coordinates of the user's touch to access the exact pixel color value in the array. Next, read off the color values of the surrounding pixels and determine whether each is similar to the touched pixel (you might need to read some Wikipedia articles etc. on color comparison). If the color is similar, change it to the one you want. Recurse until the surrounding color is different (i.e. you hit the boundary).
When you are finished modifying the pixel color array, you need to convert the array back into a CGImageRef using the CGImageCreate function. Then you convert it back to a UIImage using [UIImage imageWithCGImage:imageref].
Now you are on your own to implement the steps into code. It would be unreasonable if you expect me to code all that for you, wouldn't it?