The following code snippet results in a double image.
f = imread('C:\Users\Administrator\Desktop\2.tif');
h = double(f);
figure;
imshow(h);
whereas this other code snippet results in a uint8 image.
f = imread('C:\Users\Administrator\Desktop\2.tif');
figure;
imshow(f);
When these two figures are displayed with imshow, the results look different. What is the reason behind this difference?
Images of type double are assumed to have values between 0 and 1 and uint8 images are assumed to have values between 0 and 255. Since your double data contains values between 0 and 255 (since you simply cast it as a double and don't perform any scaling), it will appear as mostly white since most values are greater than 1.
You can use the second input to imshow to indicate that you would like to ignore this assumption and automatically scale the display to the dynamic range of the data:
imshow(h, [])
Or you can normalize the double version using mat2gray prior to displaying the image:
h = mat2gray(h);
imshow(h)
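As a side note, a minimal sketch of a third option (using the same path as in the question): im2double both casts and rescales, so the result displays correctly without any extra arguments.

```matlab
f = imread('C:\Users\Administrator\Desktop\2.tif');
h = im2double(f);  % uint8 [0, 255] -> double [0, 1], unlike a raw double() cast
figure;
imshow(h);         % now displays the same as imshow(f)
```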
I = imread("lena.jpg");
%imshow(I);
K = I;
C = conv2(I, K);
imshow(C);
I am expecting something like the output indicated in this link, but my Octave output is blank.
What could be the possible reason, and how can I obtain the expected output?
imshow() expects values between 0 and 255. After your convolution, all your values are way above 255, so when you use imshow(C), MATLAB/Octave does a type conversion using uint8(): all your values saturate to 255 and the result is a white image (0 = black, 255 = white).
You should also take several things into account:
add the option 'same' to your convolution to preserve the original size of your image: conv2(I,K,'same')
If you only apply the convolution like that, you will get a strong border effect, because the central values of your image are multiplied more times than the values at the border. You should divide by a compensation matrix:
border_compensation = conv2(ones(size(K)),ones(size(K)),'same')
C = conv2(I,K,'same')./border_compensation
Normalize the final result (don't take point 2 into account if you really want the kind of output that you pointed out in your question):
C = uint8(C/max(C(:))*255)
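Putting the pieces together, a sketch of the full pipeline (assuming lena.jpg from the question, and K = I as in the question, so size(K) equals size(I) and the compensation matrix matches C elementwise):

```matlab
% Work in double so the convolution isn't clipped by the uint8 type
I = double(imread('lena.jpg'));
K = I;
% 'same' keeps the original image size; the compensation matrix
% counters the border effect described above
border_compensation = conv2(ones(size(I)), ones(size(K)), 'same');
C = conv2(I, K, 'same') ./ border_compensation;
% Normalize back to [0, 255] so imshow displays it correctly
C = uint8(C / max(C(:)) * 255);
imshow(C);
```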
I have to save the image, but when I try to keep the dimensions the same, the pixel values change. Is there any way to keep both intact?
C=imread('download.jpg');
C=rgb2gray(C);
%convert to DCT
[r1 c1]=size(C);
CDCT=floor(dct2(C));
dct=floor(dct2(C));
[r c]= size(dCipherText);
bye=c; %length of message bits
for i=r1:r1
for j=c1:-1:c1-28
.....%some operation on CDCT
end
end
imshow(idct2(CDCT),[0 255])
i=idct2(CDCT);
set(gcf,'PaperUnits','inches','PaperPosition',[0 0 c1 r1])
print -djpeg fa.jpg -r1
Don't use print to save the picture.
Use:
imwrite(i,'download_dct.jpg')
print will use the paper dimensions etc defined on your figure, rather than the image data itself. imwrite uses the data in i. You don't need imshow if you just want to re-save the image.
--
Update - sorry, I see now that by 'scaling' you mean not the scaling of the picture but of the pixel values, and converting back from scalars to a colour. imshow only "scales" things on the screen, not in your actual data, so you will need to do that manually/numerically. Something like this would work, assuming i is real.
% ensure i ranges from 1 to 255
i = 1 + 254*(i - min(i(:))) / (max(i(:)) - min(i(:)));
% convert indices to RGB colour values (m x n x 3 array)
i = ind2rgb(i,jet(256));
not tested!
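Untested like the above, but the update's steps can be sketched end to end (assuming CDCT from the question, and jet(256) as one choice of colormap):

```matlab
i = idct2(CDCT);                                         % real-valued image data
% Rescale pixel values to valid colormap indices in [1, 255]
i = 1 + 254*(i - min(i(:))) / (max(i(:)) - min(i(:)));
% Map indices to RGB colour values (m x n x 3 array); round to integer indices
rgb = ind2rgb(round(i), jet(256));
% imwrite preserves the pixel dimensions of the data, unlike print
imwrite(rgb, 'download_dct.jpg');
```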
I have an image which is generated from 4 images. Each image has a different type, as in this code:
nrow=256;
ncol=256;
%% Image with double type
I1=randi(256,[nrow ncol]);
%% Image with float type in range
r2 = randn(nrow*ncol,1);
I2=reshape(r2,[nrow ncol]);
I3=I2.*20;
%% Binary image
I4=randi([0 1],[nrow ncol]);
%% make row images
I_int=[I1;I2;I3;I4];
imshow(I_int,[]);
However, imshow cannot show the above I_int image properly. It only shows the images I3 and I2, while I1 and I4 are black. How can I use imshow to show the combined image with its detail? Thanks, all.
First of all, the datatypes of your variables are not different (I'm a little confused why you think they are). It is always a good idea to use class to check this.
cellfun(@class, {I1, I2, I3, I4}, 'uni', 0)
'double' 'double' 'double' 'double'
The difference in display intensity is because the dynamic range of each of your subimages is very different.
I1 between 1 and 256
I2 roughly between -4 and 4 (standard normal values)
I3 roughly between -80 and 80 (I2 scaled by 20)
I4 either 0 or 1
As a result, when you combine them and display them using imshow (with the second input specified as []), imshow sets the axes clims to fit the min and max of your data, so black is the overall minimum and white is the overall maximum. Because of this, I2 and I4 will appear nearly flat, since all of their pixels occupy only a tiny fraction of that range.
To fix this, you could normalize all the data (using mat2gray) prior to concatenation and display.
I_int = cat(1, mat2gray(I1), mat2gray(I2), mat2gray(I3), mat2gray(I4));
Alternatively, you could display each of these images in its own axes, where each will get its own clims matching its dynamic range.
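A minimal sketch of that alternative, assuming I1 through I4 from the question:

```matlab
% Each subplot gets its own axes, so imshow(..., []) scales each
% subimage to its own dynamic range independently
figure;
subplot(4,1,1); imshow(I1, []); title('I1');
subplot(4,1,2); imshow(I2, []); title('I2');
subplot(4,1,3); imshow(I3, []); title('I3');
subplot(4,1,4); imshow(I4, []); title('I4');
```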
So I need to take the derivative of an image in the x-direction for this assignment, with the goal of getting some form of gradient. My thought is to use the diff command on each row of the image and then apply a Gaussian filter. I haven't started the second part because the first is giving me trouble. In attempting to get the x-derivative I have:
origImage = imread('TightRope.png');
for h = 1:3 %%h represents color channel
for i = size(origImage,1)
newImage(i,:,h) = diff(origImage(i,:,h)); %%take derivative of row and translate to new row
end
end
The issue is that somewhere along the way I get the error 'Subscripted assignment dimension mismatch.'
Error in Untitled2 (line 14)
newImage(i,:,h) = diff(origImage(i,:,h));
Does anyone have any ideas on why that might be happening and if my approach is correct for getting the gradient/gaussian derivative?
Why not use fspecial along with imfilter instead?
figure;
I = imread('cameraman.tif');
subplot 131; imshow(I); title('original')
h = fspecial('prewitt');
derivative = imfilter(I, h', 'replicate');
subplot 132; imshow(derivative); title('derivative')
hsize = 5;
sigma = 1;
h = fspecial('gaussian', hsize, sigma) ;
gaussian = imfilter(derivative, h', 'replicate');
subplot 133; imshow(gaussian); title('derivative + gaussian')
The result is the following:
If your goal is to use diff to generate the derivative rather than to create a loop, you can just tell diff to give you the derivative in the x-direction (along dimension 2):
newImage = diff(double(origImage), 1, 2);
The 1 is for the first derivative and 2 is for the derivative along the second dimension. See diff.
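A quick illustration of the resulting size change, using the onion.png image that ships with the Image Processing Toolbox:

```matlab
origImage = imread('onion.png');            % an m x n x 3 RGB image
newImage  = diff(double(origImage), 1, 2);  % first derivative along dimension 2
size(origImage)  % m x n x 3
size(newImage)   % m x (n-1) x 3 -- one column shorter per channel
```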
As @rayryeng mentions in his answer, it's important to cast the image as double.
Given an N-element vector, diff returns an (N-1)-element vector, so the reason you are getting a dimension mismatch is that you are trying to assign the output of diff into the wrong number of slots. Concretely, suppose N is the total number of columns: you are using diff on a 1 x N vector, which returns a 1 x (N-1) vector, and you are trying to assign that output as a single row of the output image, which is expected to be 1 x N. The missing element causes the mismatch. diff works by taking pairs of adjacent elements and subtracting them to produce new elements, which is why there is one element fewer in the final output.
If you want to get your code working, one way is to pad each row of the image or signal vector with an additional zero (for example) as input into diff. Something like this could work. Take note that I'll be converting your image to double to allow the derivative to take on negative values:
origImage = imread('...'); %// Place path to image here and read in
origImage = im2double(origImage); %// Change - Convert to double precision
newImage = zeros(size(origImage)); %// Change - Create blank new image and populate each row per channel manually
for h = 1:3 %%h represents color channel
for ii = 1:size(origImage,1) %// Change - fixed for loop iteration
newImage(ii,:,h) = diff([0 origImage(ii,:,h)]); %// Change
end
end
Take note that your for loop was incorrect since it didn't go over every row... just the last row.
When I use the onion.png image that's part of the image processing toolbox:
...and when I run this code, I get this image using imshow(newImage,[]);:
Take note that the difference filter was applied to each channel individually, and I rescaled the intensities per channel so that the smallest value maps to 0 and the largest to 1. You can interpret this image as follows: any area with a non-black colour has some non-zero differences, so there is some activity going on there, while any dark/black area has no activity. Note that we applied a horizontal filter; if you wanted to do this vertically, you'd simply repeat the procedure but apply it column-wise instead of row-wise.
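As a sketch, the column-wise (vertical) variant of the corrected code above might look like this, padding with a zero row instead of a zero column (onion.png again used as the example image):

```matlab
origImage = im2double(imread('onion.png'));
newImage  = zeros(size(origImage));
for h = 1:3
    % Prepend a zero row so the output keeps the original height,
    % then differentiate along dimension 1 (down the columns)
    newImage(:,:,h) = diff([zeros(1, size(origImage, 2)); origImage(:,:,h)], 1, 1);
end
imshow(newImage, []);  % rescale intensities to [0, 1] for display
```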