Gaussian kernel isn't showing up [duplicate] - matlab

I have imported an image, converted it to double precision, and performed some filtering on it.
When I plot the result with imshow, the double image is too dark. But when I use imshowpair to plot the original and the final image, both images are correctly displayed.
I have tried converting with uint8 and im2uint8, and multiplying by 255 before calling those functions, but the only way to obtain the correct image is by using imshowpair.
What can I do?

It sounds like a problem where the majority of your intensity / colour data lies outside the dynamic range that imshow expects when displaying double data.
I also see that you're using im2double, but im2double simply converts the image to double precision; if the image is already double, nothing happens. The issue is more likely in the way you are filtering the image. Are you doing some sort of edge detection? The reason you're getting dark images is probably that the majority of your intensities are negative, or are hovering around 0. When displaying double-type images, imshow assumes that the dynamic range of intensities is [0,1].
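To illustrate (a quick sketch of my own using cameraman.tif and a Laplacian filter, not necessarily your exact pipeline):
% read a sample image and convert to double in [0,1]
im = im2double(imread('cameraman.tif'));
% an edge-detecting filter such as the Laplacian produces negative
% responses and values hovering around 0
f = imfilter(im, fspecial('laplacian'));
% imshow assumes doubles lie in [0,1], so this displays almost entirely black
imshow(f);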
Therefore, one way to resolve your problem is to do:
imshow(im,[]);
This rescales the display range so that the smallest value is mapped to 0 and the largest to 1.
If you'd like a more permanent solution, consider creating a new output variable that does this for you:
out = (im - min(im(:))) / (max(im(:)) - min(im(:)));
This will perform the same shifting that imshow does when displaying data for you. You can now just do:
imshow(out);
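As a side note, if you have the Image Processing Toolbox, mat2gray performs exactly this min-max normalization for you:
out = mat2gray(im); % same as (im - min(im(:))) / (max(im(:)) - min(im(:)))
imshow(out);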

Related

Plot true color Sentinel-2A imagery in Matlab

Through a combination of non-MATLAB/non-native tools (GDAL) as well as native tools (geoimread), I can ingest Sentinel-2A data either as individual bands or as an RGB image, having employed gdal_merge. I'm stuck at a point where using
imshow(I, [])
produces a black image, with apparently no signal. The range of intensity values in the image is 271-4349. I know that there is a good signal in the image because when I do:
bit_depth = 2^15;
I = swapbytes(I);
[I_indexed, color_map] = rgb2ind(I, bit_depth);
I_double = im2double(I_indexed, 'indexed');
ax1 = figure;
colormap(ax1, color_map);
image(I_double)
i.e. index the image, collect a colormap, set the colormap and then call the image function, I get a likeness of the region I'm exploring (albeit very strangely colored).
I'm currently considering whether I should try:
Find a low-level description of Sentinel-2A data, implement the scaling/correction
Use a toolbox, possibly this one.
Possibly adjust output settings in one of the earlier steps involving GDAL
Comments or suggestions are greatly appreciated.
A basic scaling scheme is:
% convert image to double
I_double = im2double(I);
% scaling
max_intensity = max(I_double(:));
min_intensity = min(I_double(:));
range_intensity = max_intensity - min_intensity;
% map onto the full uint16 range, [0, 65535]
I_scaled = (2^16 - 1) .* ((I_double - min_intensity) ./ range_intensity);
% display
imshow(uint16(I_scaled))
noting the importance of casting to uint16 from double for imshow.
A couple points...
You mention that I is an RGB image (i.e. N-by-M-by-3 data). If this is the case, the [] argument to imshow will have no effect. That only applies automatic scaling of the display for grayscale images.
Given the range of intensity values you list (271 to 4349), I'm guessing you are dealing with a uint16 data type. Since this data type has a maximum value of 65535, your image data only covers about the lower 16th of this range. This is why your image looks practically black. It also explains why you can see the signal with your given code: you apply swapbytes to I before displaying it with image, which in this case will shift values into the higher intensity ranges (e.g. swapbytes(uint16(4349)) gives a value of 64784).
In order to better visualize your data, you'll need to scale it. As a simple test, you'll probably be able to see something appear by just scaling it by 8 (to cover a little more than half of your dynamic range):
imshow(8.*I);
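If you want a full-range stretch for the RGB case (where the [] syntax won't help), one option is to rescale each channel independently. A minimal sketch, assuming I is a uint16 RGB image:
I_d = double(I);
I_stretched = zeros(size(I_d));
for k = 1 : 3
    chan = I_d(:,:,k);
    % stretch this channel to the full uint16 range
    I_stretched(:,:,k) = (chan - min(chan(:))) ./ (max(chan(:)) - min(chan(:))) .* (2^16 - 1);
end
imshow(uint16(I_stretched));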

Reading grayscale image in matlab [duplicate]

I have a grayscale image, "lena.bmp". I want to read this image in MATLAB using the imread() function.
When I use the code below to read and show the image, my image is dark (black).
img = imread('lena.bmp');
imshow(img);
But when I use the code below, I have no problem viewing it.
[img, map] = imread('lena.bmp');
imshow(img,map);
It seems that my first snippet does not read the image in grayscale mode (like what the rgb2gray function generates).
My image is as follows:
What can I do to solve this problem?
Your image is an "indexed" image. That means it contains integer values which act as "labels" more than anything, and each of those labels is mapped to a colour (i.e. an RGB triplet). Your map variable represents that mapping; row 5 holds the RGB triplet that corresponds to 'label' 5, for instance.
To see what I mean, run unique(img) and you'll see that the values of your img array are in fact quite regular. The command rgbplot can demonstrate the actual colourmap graphically: run rgbplot(map) on your map variable to see the mapping for each of the red, green, and blue colours.
Now, save and read the image below on your computer as img2 and compare the array values.
This image was generated by converting from the "indexed" image you linked to, to a "grayscale" one using photoediting software (the GIMP). The difference is that
in a grayscale image, the pixel values represent actual intensities, rather than integer 'labels'. imread reads grayscale images as uint8 by default, meaning it assigns intensity values to pixels ranging from 0 (black) to 255 (white). Since these values happen to be integers you could still cheat and treat them as 'labels' and force a colour-mapping on them. But if you assign a 'linear map' (i.e. value 1 = intensity 1, value 2 = intensity 2, etc.) then your image will look as you would expect.
You'll see that the values from unique(img2) are quite different. If you imshow(img2) you'll see this displays as you'd expect. If you don't specify a colormap for imshow, it will assume that the map is a linear mapping from the lowest to the highest value in the image array, which explains why your indexed image looked weird, since its values were never supposed to correspond to intensities.
Also try imagesc(img2) which will show this but using the "current" colormap. imagesc causes the colormap to be "scaled", so that the lowest colour goes to the lowest value in the image, and similarly for the highest.
The default colormap is jet (parula in newer MATLAB releases), so you should see a psychedelic looking image, but you should be able to make out Lena clearly. If you try colormap gray you should see the gray version again. Also try colormap hot. Now, to make sense of the colormaps, try the rgbplot command on them (e.g. rgbplot(gray), rgbplot(hot), etc.).
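A quick sketch of that exploration, assuming img2 is the grayscale version described above:
imagesc(img2);   % scaled display using the current colormap
colormap gray;   % intensities now map to gray levels
colormap hot;    % try another map
rgbplot(gray);   % inspect how each map defines its red, green and blue curves
rgbplot(hot);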
So, going back to imshow, imshow basically allows you to display an indexed image, and specify what colormap you want to use to display it. If you don't specify the colormap, it will just use a linear interpolation from the lowest value to the highest as your map. Therefore imshow(img) will show the image pretty much in the same way as imagesc(img) with a gray colormap. And since the values in your first img represent evenly spaced 'labels' rather than actual intensities, you'll get a rubbish picture out.
EDIT: If you want to convert your indexed image to a grayscale image, MATLAB provides the ind2gray function, e.g.:
[img, map] = imread('lena.bmp');
img_gray = ind2gray(img, map);
This is probably what you need if you mean to process pixel values as intensities.
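After the conversion, imshow(img_gray) will display correctly with no colormap argument, since the pixel values now represent true intensities.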

Problems in implementing the integral image in MATLAB

I tried to implement the integral image in MATLAB by the following:
im = imread('image.jpg');
ii_im = cumsum(cumsum(double(im)')');
im is the original image and ii_im is the integral image.
The problem here is that the values in ii_im flow out of the 0 to 255 range.
When using imshow(ii_im), I always get a very bright image which I am not sure is the correct result. Am I correct here?
You're implementing the integral image calculations right, but I don't understand why you would want to visualize it - especially since the sums will go beyond any normal integer range. This is expected as you are performing a summation of intensities bounded by larger and larger rectangular neighbourhoods as you move to the bottom right of the image. It is inevitable that you will get large numbers towards the bottom right. Also, you will obviously get a white image when trying to show this image because most of the values will go beyond 255, which is visualized as white.
If I can add something, one small optimization I have is to get rid of the transposing and use cumsum to specify the dimension you want to work on. Specifically, you can do this:
ii_im = cumsum(cumsum(double(im), 1), 2);
It doesn't matter what direction you specify first (2 then 1, or 1 then 2). The summation of all pixels within each bounded area, as long as you specify all directions to operate on, should be the same.
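If you want to convince yourself of that, here is a throwaway check (my own sketch, not part of the original code):
A = double(imread('cameraman.tif'));
% both orders give identical results for integer-valued data
isequal(cumsum(cumsum(A, 1), 2), cumsum(cumsum(A, 2), 1))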
Back to your question for display, if you really, really, really really... I mean really want to, you can normalize the contrast by doing:
imshow(ii_im, []);
However, what you should expect is a gradient image which starts to be dark from the top, then becomes brighter when you get to the bottom right of the image. Remember, each point in the integral image calculates the total summation of pixel intensities bounded by the top left corner of the image to this point, thus forming a rectangle of intensities you need to sum over. Therefore, as we move further down and to the right of the integral image, the total summation should increase.
With the cameraman.tif image, this is the original image, as well as its integral image visualized using the above command:
Either way, there is absolutely no reason why you would want to visualize it. You would use this directly with whatever application requires it (adaptive thresholding, Viola-Jones detector, etc.)
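As an aside, the typical use is that the sum over any rectangle costs only four lookups. A minimal sketch of this, where I pad a zero row and column on the top and left so the corner arithmetic also works at the image borders:
im = imread('cameraman.tif');
ii_im = cumsum(cumsum(double(im), 1), 2);
ii_pad = padarray(ii_im, [1 1], 0, 'pre'); % zero row/column on top and left
% sum of pixels in the rectangle spanning rows r1:r2 and columns c1:c2
r1 = 50; c1 = 60; r2 = 100; c2 = 120;
box_sum = ii_pad(r2+1, c2+1) - ii_pad(r1, c2+1) - ii_pad(r2+1, c1) + ii_pad(r1, c1);
% sanity check against direct summation
box_sum == sum(sum(double(im(r1:r2, c1:c2))))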
Another option could be applying a log operation for each value in the integral image. Something like:
imshow(log(1 + ii_im), []);
However, this will compress most of the pixels into a similar range of display intensities, which is probably not useful. This is what I get with cameraman.tif:
The moral of this story is that you need some sort of contrast normalization so that you can fit all of the values in your integral image within the confines of the data type that is used to display the image on the screen using imshow.

stretching histogram of image in matlab

I'm trying to replicate the behaviour of imshow(img,[]) using the following formula
for each pixel:
(img(x,y)-min(img))/(max(img)-min(img))*255
But I get a different result.
How can I stretch the histogram without using imshow(img,[])?
Thanks.
code:
IAS=input('please enter image address','s');
Iimg=imread(IAS);
stimg=(Iimg-min(Iimg(:)))/(max(Iimg(:))-min(Iimg(:)))*255;
subplot(1,3,1)
imshow(stimg);
title('strechself');
subplot(1,3,2)
imshow(Iimg);
title('original image');
subplot(1,3,3)
imshow(Iimg,[])
title('imshow(img,[])');
It's probably due to incorrect use of max and min.
You are calling min(img), which gives you a row vector containing the minimum of each column, not of the whole image. If you want the overall minimum of the whole image, you should call min(img(:)).
Therefore, change your line to:
img = (img - min(img(:))) / (max(img(:)) - min(img(:))) * 255;
Note that it's just one line. In MATLAB you don't need to access each pixel (img(x,y)) and change it independently as in other languages; you can operate on the whole array directly.
Additionally, if img is not uint8, I suggest you convert it (because you are using the 0-255 scale):
img=uint8(img);
EDIT: Having a look at your results, your original image is very probably uint8; therefore, before the line that stretches the image, you should add the following line:
img=double(img);
This way the divisions keep their fractional parts. Otherwise you are performing integer arithmetic, where e.g. 34/255 = 0.
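Putting it all together, your script might become (a sketch following the points above, keeping your variable names):
IAS = input('please enter image address', 's');
Iimg = imread(IAS);
dimg = double(Iimg); % work in double so the division keeps its fractional part
stimg = uint8((dimg - min(dimg(:))) / (max(dimg(:)) - min(dimg(:))) * 255);
subplot(1,3,1), imshow(stimg), title('stretch self');
subplot(1,3,2), imshow(Iimg), title('original image');
subplot(1,3,3), imshow(Iimg, []), title('imshow(img,[])');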

Matlab Grayscale Normalization

I am new to matlab and to image processing, and I am having some issues normalizing but I am not sure why.
In my code I store the image as a black and white image in lim3, then:
minvalue = min(min(min(lim3)));
maxvalue = max(max(max(lim3)));
normimg = (lim3-minvalue)*255/(maxvalue-minvalue);
Unfortunately, this gives a new image that is exactly the same as lim3, and I am not sure why. Ideally, I don't want to use the histeq function, so if someone could explain how to fix this code to get it to work, I would appreciate it.
All of the people above in the comments have raised very good points, but if you want a tl;dr answer, here are the most important points:
If your minimum and maximum values are 0 and 255 respectively for all colour channels, this means that the minimum and maximum colour values are black and white respectively. This is the same situation if your image is a single-channel / grayscale image. As such, if you try to normalize your output image, it will look the same, because you would be multiplying and dividing by the same scale. However, just as a minor note, your above code will work if your image is grayscale. I would also get rid of the superfluous nested min/max calls.
You need to make sure that your image is cast to double before you do this scaling, as you will most likely generate floating point numbers. Should your scale be < 1, such values would inadvertently be truncated to 0. In general, you will lose precision when you're trying to normalize the intensities, as the type of the image is most likely uint8. You also need to remember to cast back to uint8 when you're done, as that was the original type of the image. You can do this casting yourself, or you can use im2double, as this essentially does what you want under the hood, but normalizes the image's intensities to the range of [0,1].
As such, if you really really really really... really... want to use your code above, you'd have to do something like this:
lim3 = double(lim3); %// Cast to double
minvalue = min(lim3(:)); %// Note the change here
maxvalue = max(lim3(:)); %// Got rid of superfluous nested min/max calls
normimg = uint8((lim3-minvalue)*255/(maxvalue-minvalue)); %// Cast back to uint8
This code will work if the image you are reading in is grayscale.
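Alternatively, if the Image Processing Toolbox is available, mat2gray combined with im2uint8 achieves the same stretch in one line:
normimg = im2uint8(mat2gray(double(lim3)));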
Bonus - For Colour Images
However, if you want to apply the above to colour images, I don't recommend this approach. The reason is that, with a global minimum and maximum taken over all colour planes, you will only see a difference if those extremes are not already 0 and 255 respectively. What I would recommend is normalizing each colour plane separately, so that each plane is pushed to the full output range rather than being bound to the extremes of just one colour plane.
As such, I would recommend you do something like this:
lim3 = double(lim3); %// Cast to double
normimg = uint8(zeros(size(lim3))); %// Allocate output image
for idx = 1 : 3
    chan = lim3(:,:,idx);
    minvalue = min(chan(:));
    maxvalue = max(chan(:));
    normimg(:,:,idx) = uint8((chan-minvalue)*255/(maxvalue-minvalue)); %// Cast back to uint8
end
The above code accesses each colour plane individually, normalizes the plane, then puts the result in the output image normimg. I would recommend you use the above approach instead if you want to see any contrast differences for colour images.
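As a quick check, you could display the original and normalized images side by side with imshowpair (lim3 was cast to double above, hence the uint8 cast):
imshowpair(uint8(lim3), normimg, 'montage');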