I'm doing some research on image processing using MATLAB and I've created grayscale intensity images in two different ways using rgb2gray and rgb2hsv like so:
read_image = imread(handles.myImage);
bc_gambar2 = imresize(read_image,[280 540]);
g = rgb2gray(bc_gambar2); % First intensity image
g2 = rgb2hsv(bc_gambar2);
g = g2(:,:,3); % Second intensity image
The result seems better using rgb2hsv and indexing than using rgb2gray. Can anybody tell me what the difference is and why it's happening?
Here's a sample image I'm using (if needed):
The calculation used by rgb2hsv to compute the value (i.e. lightness) channel is different from the one used by rgb2gray to compute the grayscale intensity. They are described by the second and fourth bullet points here, respectively. Briefly:
The computation for the value channel (rgb2hsv) is:
g = max(bc_gambar2, [], 3);
The computation for the grayscale intensity (rgb2gray) is:
g = 0.299.*bc_gambar2(:, :, 1) + ...
0.587.*bc_gambar2(:, :, 2) + ...
0.114.*bc_gambar2(:, :, 3);
More information about different color spaces can be found here.
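For instance, here is a minimal sketch for comparing the two results side by side, assuming bc_gambar2 is the resized RGB image from the question (imshowpair and im2uint8 are from the Image Processing Toolbox):
g_gray  = rgb2gray(bc_gambar2);       % weighted sum of R, G, B
hsv     = rgb2hsv(bc_gambar2);
g_value = im2uint8(hsv(:,:,3));       % value channel = max(R, G, B), rescaled to uint8
figure;
imshowpair(g_gray, g_value, 'montage');
title('rgb2gray (left) vs. HSV value channel (right)');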
I was trying to convert a set of images to 2D grayscale images in order to do SURF feature extraction. Some of the images in the set are already 2D grayscale, so when I try to run the loop below, this error message appears:
MAP must be a m x 3 array.
Here is my code:
for ii = 1:numImages
    img = readimage(imdsTrain,ii);
    RGB = rgb2gray(img)
    points = detectSURFFeatures(RGB);
    [surf1, valid_points] = extractFeatures(RGB, points);
    figure; imshow(RGB); hold on;
    plot(valid_points.selectStrongest(10),'showOrientation',true);
    title('Ct-scan image with Surf feature')
end
Is there any solution that can solve this problem?
MATLAB's rgb2gray will fail if you give it a grayscale image (an image with only one color channel).
If you are sure that your images are either m x n x 3 or m x n x 1, you can check for the m x n x 3 case before calling rgb2gray. If an image is already grayscale, no conversion is needed (and none can be done via rgb2gray).
Something like this:
img = readimage(imdsTrain,ii);
[rows, columns, colors] = size(img);
if colors > 1
    RGB = rgb2gray(img);
else
    RGB = img; % already grayscale, use it as-is
end
Now, the error MAP must be a m x 3 array suggests that whatever you supplied to rgb2gray got interpreted as a colormap. So I would check whether all your images really have three dimensions, as you are assuming.
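For completeness, here is a minimal sketch of the full loop with this check folded in (assuming, as in the question, that imdsTrain is an imageDatastore and numImages is its image count; variable names such as grayImg are just illustrative):
for ii = 1:numImages
    img = readimage(imdsTrain,ii);
    if size(img,3) > 1
        grayImg = rgb2gray(img);   % RGB image: convert to grayscale
    else
        grayImg = img;             % already grayscale: use as-is
    end
    points = detectSURFFeatures(grayImg);
    [features, validPoints] = extractFeatures(grayImg, points);
    figure; imshow(grayImg); hold on;
    plot(validPoints.selectStrongest(10),'showOrientation',true);
    title('CT-scan image with SURF features')
end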
I need a little help with matrix dimensions in MATLAB.
I have two images imported with the imread function:
im1 = imread('1.jpg');
im2 = imread('2.jpg');
im1 is the reference image, while im2 is the noisy image.
In the workspace window, MATLAB shows the im1 dimensions as 768x1024x3,
while im2 is displayed as 768x1024.
They are both RGB; there are no grayscale images.
In fact, the second image is a compressed image (a compression algorithm was performed on it), while the first image is a natural JPEG image, untouched.
For calculating MSE/PSNR for both images, the matrix dimensions must be the same.
I will need to transform the im2 dimensions to be 3D like the first image (768x1024x3).
I tried these functions (squeeze, reshape) with no success.
You were on the right track with repmat. Here's the correct syntax:
im2 = repmat(im2, [1 1 3]);
This says you want 1 replicate along the first dimension, 1 replicate along the second dimension, and 3 replicates along the third dimension.
Are you sure that both are RGB images? im2 has only one channel, so it looks grayscale, but it could also be an indexed (colormap) image. In that case, try
[im2, map] = imread('im2.jpg');
and see whether anything appears in the map variable. If the image is indeed an indexed image, the map variable should be of size 256 x 3.
What donda has suggested is repeating the grayscale channel 3 times to make it of size 768x1024x3. Another possibility is that the noisy image was created by converting the RGB image to grayscale or by taking the green channel of the RGB image; verify the source of the image in that case (see the quick check below).
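A quick way to check this, as a rough sketch (assuming im1 is the RGB reference and im2 the single-channel image; the smaller of the two differences points to the likelier source):
g  = rgb2gray(im1);                                 % grayscale version of the reference
gc = im1(:,:,2);                                    % green channel of the reference
dGray  = mean(abs(double(im2(:)) - double(g(:))))   % smaller value suggests im2 came from rgb2gray(im1)
dGreen = mean(abs(double(im2(:)) - double(gc(:))))  % smaller value suggests im2 is the green channel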
About the PSNR computation: I have a feeling that there is some problem with your code. I have given my code below; use it and see if it works. Get back to me if you face any problem.
function Psnr_DB = psnr(I, I_out)
% PSNR between reference image I and test image I_out, assuming 8-bit data
I = double(I);
I_out = double(I_out);
total_error = 0;
for iterz = 1:size(I,3)
    for iterx = 1:size(I,1)
        for itery = 1:size(I,2)
            total_error = total_error + (I(iterx,itery,iterz) - I_out(iterx,itery,iterz))^2;
        end
    end
end
MSE = total_error/numel(I);
Psnr = (255^2)/MSE;
Psnr_DB = 10*log10(Psnr) %#ok<NOPRT>
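A quick usage sketch with the images from the question (note that this function shadows the built-in psnr shipped with newer Image Processing Toolbox releases, so you may want to rename it):
im1 = imread('1.jpg');          % reference image, 768x1024x3
im2 = imread('2.jpg');          % noisy image, 768x1024
im2 = repmat(im2, [1 1 3]);     % replicate the single channel so the dimensions match
Psnr_DB = psnr(im1, im2)        % PSNR in dB, using the function above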
I need to write a function which will match the histogram of image2 to the image that will be remapped; let's call it image1. But I am not allowed to use histeq. Could you please help me with the code?
PS: I am also wondering how I would do this operation if I were allowed to use histeq. What should I do after extracting the red, green and blue channels? (Couldn't I use histeq(R2, R1)?)
image1 = imread('color1.jpeg');
image2 = imread('color2.jpeg');
R1 = image1(:, :, 1);
G1 = image1(:, :, 2);
B1 = image1(:, :, 3);
R2 = image2(:, :, 1);
G2 = image2(:, :, 2);
B2 = image2(:, :, 3);
Regards,
Amadeus
I don't think the question is specific enough. One way to solve this is to convert the three channels to a grayscale image (rgb2gray), compute the two histograms (hist) and then find a desired mapping between the histograms and apply it to each channel of the original image.
The conversion to grayscale is not necessary; you can perform this algorithm on each channel and then join the channels together later.
Check this question, which uses histeq.
The histogram matching algorithm consists of 3 stages:
1. Compute the normalized CDF of the first image, T(r).
2. Compute the normalized CDF of the second image, G(z).
3. Calculate G^-1(T(r)) and transform each intensity value of the first image to the desired one.
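A minimal per-channel sketch of those three stages, assuming image1 and image2 are uint8 RGB images as in the code above (imhist and intlut are from the Image Processing Toolbox; the inverse G^-1 is approximated with a nearest-CDF lookup):
matched = zeros(size(image1), 'uint8');
for ch = 1:3
    src = image1(:,:,ch);                        % channel to be remapped
    ref = image2(:,:,ch);                        % channel whose histogram we want to match
    srcCdf = cumsum(imhist(src)) / numel(src);   % T(r): normalized CDF of the source
    refCdf = cumsum(imhist(ref)) / numel(ref);   % G(z): normalized CDF of the reference
    lut = zeros(256, 1, 'uint8');
    for r = 1:256
        [~, z] = min(abs(refCdf - srcCdf(r)));   % approximate G^-1(T(r))
        lut(r) = z - 1;
    end
    matched(:,:,ch) = intlut(src, lut);          % apply the intensity mapping
end
figure; imshow(matched);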
I'll try to be precise and short.
I have a volume (128x128x128) and a mask (same size, with [0|1|2] values).
I want to turn the 3D volume matrix into a 3D image with RGB, storing in each channel (red, green, blue) the points marked in the mask.
The idea is to use a 2D representation by taking a slice of that 3D cube, instead of computing it over and over, to make things much faster (very important in my project); so the 3D volume + RGB would effectively be a store of 128 2D images.
The question is, what steps and how do I have to make all this:
- Create a volume 128x128x128x3 ?
- Define a new colormap (original is gray) ?
- Join each channel ?
- How do I use imagesc/whatever to show one slice of that cube with the points colored as marked in the mask (e.g. imageRGB(:,:,64))?
That's just my guess, but I don't even know how to do it properly... I'm a bit lost, and I hope you can help me. Here is a piece of code that may be wrong but may help you see what I'm after:
% Create the 4D matrix
ovImg = zeros(size(volImg,1),size(volImg,2),size(volImg,3),3); % 128x128x128x3
% Store in each channel the points marked as groups
ovImg(:,:,:,1) = volImg .* (mask==1);
ovImg(:,:,:,2) = volImg .* (mask==2);
ovImg(:,:,:,3) = volImg .* (mask==3);
many many thanks!!
UPDATE:
I'm having some trouble with transparency and the colormap; this is what I did.
% Normalize the volume first
imaNorm = volImg - min(volImg(:));
maxval = max(imaNorm(:));
% Create the 4D matrix
ovImg = zeros(size(volImg,1),size(volImg,2),size(volImg,3),3);
% Store in each channel the points marked as groups
ovImg(:,:,:,1) = imaNorm.*(mask==1);
ovImg(:,:,:,2) = imaNorm.*(mask==2);
ovImg(:,:,:,3) = imaNorm.*(mask==3);
% Colormap-offset version
[X,Y,Z] = meshgrid(1:128,1:128,1:128);
ovImg = imaNorm + mask * maxval;
N = ceil(maxval);
c = [linspace(0,1,N)' zeros(N,2)];
my_colormap = [c(:,[1 2 3]) ; c(:,[3 1 2]) ; c(:,[2 3 1])];
figure;
imshow(squeeze(ovImg(:,:,64)),my_colormap);
figure;
imagesc(squeeze(mask(:,:,64)));
Result (overlaid image / mask):
Any ideas? Thanks again, everybody
FINAL UPDATE:
With the other approach that Gunther Struyf suggested, I got exactly what I wanted.
Thanks, mate, I really appreciate it; I hope this helps other people too.
You can use imshow with a colormap to 'fake' an RGB image from a grayscale image (which is what you have). For the scaling, I would not multiply the value but add an offset to it, so each mask value maps to a different range of the colormap.
For plotting a slice of the 3d matrix, you can just index it and then squeeze it to remove the resulting singleton dimension:
Example:
[X,Y,Z]=meshgrid(1:128,1:128,1:128);
volImg =5*sin(X/3)+13*cos(Y/5)+8*sin(Z/10);
volImg=volImg-min(volImg(:));
mask = repmat(floor(linspace(0,3-2*eps,128))',[1 128 128]);
maxval=max(volImg(:));
ovImg=volImg+mask*maxval;
imshow(squeeze(ovImg(:,:,1)),jet(ceil(max(ovImg(:)))));
Unmasked, original image (imshow(squeeze(volImg(:,:,1)),jet(ceil(maxval))))
Resulting with mask (code block above):
For different colormaps, see here, or create your own colormap. E.g. your mask has three values, so let's match those with R, G and B:
N = ceil(maxval);
c = [linspace(0,1,N)' zeros(N,2)];
my_colormap = [c(:,[1 2 3]) ; c(:,[3 1 2]) ; c(:,[2 3 1])];
figure
imshow(squeeze(ovImg(:,:,1)),my_colormap);
which gives:
Other approach:
Now that I understand your question, I see you got it quite right from the beginning; you only need to rescale the variable to a value between 0 and 1, since from imshow:
Color intensity can be specified on the interval 0.0 to 1.0.
which you can do using:
minval=min(volImg(:));
maxval=max(volImg(:));
volImg=(volImg-minval)/(maxval-minval);
Next up is your code:
ovImg = zeros([size(volImg),3]);
ovImg(:,:,:,1) = volImg .* (mask==1);
ovImg(:,:,:,2) = volImg .* (mask==2);
ovImg(:,:,:,3) = volImg .* (mask==3);
You just have to plot it now:
imshow(squeeze(ovImg(:,:,64,:)))
How can one implement the fisheye lens effect illustrated in this image:
One can use Google's logo to try it:
BTW, what is the term for this effect?
I believe this is typically referred to as either a "fisheye lens" effect or a "barrel transformation". Here are two links to demos that I found:
Sample code for how you can apply fisheye distortions to images using the 'custom' option for the function maketform from the Image Processing Toolbox.
An image processing demo which performs a barrel transformation using the function tformarray.
Example
In this example, I started with the function radial.m from the first link above and modified the way it relates points between the input and output spaces to create a nice circular image. The new function fisheye_inverse is given below, and it should be placed in a folder on your MATLAB path so you can use it later in this example:
function U = fisheye_inverse(X, T)
% Inverse mapping for the custom spatial transformation:
% maps output-space points X back to input-space points U
imageSize = T.tdata(1:2);
exponent = T.tdata(3);
origin = (imageSize+1)./2;
scale = imageSize./2;

% Normalized polar coordinates relative to the image center
x = (X(:, 1)-origin(1))/scale(1);
y = (X(:, 2)-origin(2))/scale(2);
R = sqrt(x.^2+y.^2);
theta = atan2(y, x);

% Radial deformation, with the corners rescaled to keep the result circular
cornerScale = min(abs(1./sin(theta)), abs(1./cos(theta)));
cornerScale(R < 1) = 1;
R = cornerScale.*R.^exponent;

% Back to Cartesian pixel coordinates
x = scale(1).*R.*cos(theta)+origin(1);
y = scale(2).*R.*sin(theta)+origin(2);
U = [x y];
end
The fisheye distortion looks best when applied to square images, so you will want to make your images square by either cropping them or padding them with some color. Since the transformation of the image will not look right for indexed images, you will also want to convert any indexed images to RGB images using ind2rgb. Grayscale or binary images will also work fine. Here's how to do this for your sample Google logo:
[X, map] = imread('logo1w.png'); % Read the indexed image
rgbImage = ind2rgb(X, map); % Convert to an RGB image
[r, c, d] = size(rgbImage); % Get the image dimensions
nPad = (c-r)/2; % The number of padding rows
rgbImage = cat(1, ones(nPad, c, 3), rgbImage, ones(nPad, c, 3)); % Pad with white
Now we can create the transform with maketform and apply it with imtransform (or imwarp as recommended in newer versions):
options = [c c 3]; % An array containing the columns, rows, and exponent
tf = maketform('custom', 2, 2, [], ... % Make the transformation structure
               @fisheye_inverse, options);
newImage = imtransform(rgbImage, tf); % Transform the image
imshow(newImage); % Display the image
And here's the image you should see:
You can adjust the degree of distortion by changing the third value in the options array, which is the exponential power used in the radial deformation of the image points.
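For instance, a small sketch that reuses rgbImage, c, and fisheye_inverse from above with a few illustrative exponent values (the particular numbers are just assumptions):
for expVal = [1.5 3 6]   % smaller exponents give milder distortion, larger ones stronger
    tf = maketform('custom', 2, 2, [], @fisheye_inverse, [c c expVal]);
    figure;
    imshow(imtransform(rgbImage, tf));
    title(sprintf('Fisheye exponent = %g', expVal));
end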
I think you are referring to the fisheye lens effect. Here is some code for imitating fisheye in MATLAB.
Just for the record:
This effect is a type of radial distortion called "barrel distortion".
For more information please see:
http://en.wikipedia.org/wiki/Distortion_(optics)
Here is a different method to apply an effect similar to barrel distortion using texture mapping (adapted from MATLAB Documentation):
[I,map] = imread('logo.gif');
[h,w] = size(I);
sphere;
hS = findobj('Type','surface');
hemisphere = [ones(h,w),I,ones(h,w)];
set(hS,'CData',flipud(hemisphere),...
    'FaceColor','texturemap',...
    'EdgeColor','none')
colormap(map)
colordef black
axis equal
grid off
set(gca,'xtick',[],'ztick',[],'ytick',[],'box','on')
view([90 0])
This will give you the circular frame you are looking for but the aliasing artifacts might be too much to deal with.