I have two variables:
Image, which contains the original image.
FilteredImage, which is the filtered image.
Both are RGB images. I know how to calculate the bending for 2-D images:
Image = imread('C:\Users\klass\Pictures\man.jpeg');
NoiseImage = imnoise(Image, 'gaussian');
FilteredImage = NoiseImage;
for c = 1 : 3
    FilteredImage(:, :, c) = medfilt2(NoiseImage(:, :, c), [3, 3]);
end
Bending = norm(im2double(Image - FilteredImage))/norm(im2double(FilteredImage)) * 100;
When I try to apply this formula to my images I get this error:
Error using norm
Input must be 2-D.
I tried to pass the 3-D images to the norm() function, which fails with the error above. The workaround is to convert each image to 2-D with the rgb2gray() function.
Therefore I evaluate the bending with the formula:
Bending = norm(im2double(rgb2gray(Image) - rgb2gray(FilteredImage))) / norm(im2double(rgb2gray(Image))) * 100;
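If you would rather keep all three channels instead of converting to grayscale, one option (a sketch of my own, not from the original post) is to flatten the 3-D arrays into vectors before calling norm, which sidesteps the 2-D restriction. Converting to double before subtracting also avoids uint8 clipping of negative differences:
% Sketch: the same percentage computed over the full RGB data by
% vectorizing the 3-D arrays before calling norm.
A = im2double(Image);
B = im2double(FilteredImage);
D = A - B;                               % difference in double, no uint8 clipping
Bending = norm(D(:)) / norm(A(:)) * 100; % use norm(B(:)) to normalize by the filtered image instead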
I am trying to create my own nearest-neighbour interpolation algorithm in MATLAB to enlarge an image from 556×612 to 1668×1836.
This is homework!!!
I have attempted this already, but I run into a problem where most (not all) of the values inside M get transformed to 255 (white), and I cannot get my head around why. Any help would be appreciated! The picture is a picture of a zebra.
%Take in image and convert to greyscale
I = imread('Zebra.jpg');
Igray = rgb2gray(I);
% Step-3: Resize the image to enlarged 1668x1836 by interpolation
% Step-3(a) : Using nearest neighbour
%First we will need to work out the dimension of the image
[j , k] = size(Igray);
%Now we need to set the size of the image we want
NewX = 1836;
NewY = 1668;
% Work out ratio of old to new
ScaleX = NewX./(j-1);
ScaleY = NewY./(k-1);
%Image Buffer
M = zeros(NewX, NewY);
%Create output image
for count1 = 1:NewX
    for count2 = 1:NewY
        M(count1,count2) = Igray(1+round(count1./ScaleX),1+round(count2./ScaleY));
    end
end
%Show Images
imshow(M);
title('Scaled Image NN');
Try imshow(M,[]). You created M without specifying a type, which makes it double. double images are expected to be in [0, 1], so by default imshow displays everything with a value higher than 1 as white.
Alternatively, create M as uint8, like the original image:
M = zeros(NewX, NewY,'uint8');
Even better would be:
M = zeros(NewX, NewY,class(Igray));
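Putting the fix together with the loop from the question (just a sketch, using the same variable names as above):
% Preallocate with the same class as the input so imshow treats the 0-255 range correctly.
M = zeros(NewX, NewY, class(Igray));
for count1 = 1:NewX
    for count2 = 1:NewY
        M(count1, count2) = Igray(1 + round(count1./ScaleX), 1 + round(count2./ScaleY));
    end
end
imshow(M);        % no rescaling needed for uint8
% If M stays double instead, display it with imshow(M, [])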
I would like to achieve image blending with smooth transitions using alpha blending and a smoothed mask. I used a Gaussian filter for the mask, and now I'm trying to combine the other two images. I'm using the smoothed mask as a weight.
x_i and y_i .... color information for pixel i
alpha_i ... value of the mask at pixel i
formula: z_i = alpha_i*x_i + (1 - alpha_i)*y_i
My attempt:
mask = imread('mask.png');
foreground = imread('fg.jpg');
background = imread('bg.jpg');
[r,c,~]=size(mask);
A = zeros(size(foreground),'like', foreground);
fspe = fspecial('gaussian', 100);
smoothMask = imfilter(double(mask), fspe, 'same');
for i=1:r
    for j=1:c
        for d=1:3
            alpha = mask(i,j,d);
            A(i,j,d) = alpha*foreground(i,j,d)+(1-alpha)*background(i,j,d);
        end
    end
end
imshow(A);
In the end I get the background but the foreground is white. Please help.
Your A is of type double and in the range [0..255]. When calling imshow on images of type double, the expected scale is [0..1], which is why all pixels are shown saturated.
Fix:
imshow(A,[])
Or
imshow(A/255)
A word about vectorization: in MATLAB it is quite redundant to explicitly loop over all rows, columns, and channels just to multiply and sum images (3-D arrays). It can be done in a single line:
A = mask.*foreground + (1-mask).*background;
Isn't it lovely?
Note the difference between the * operator and the .* operator: it's the difference between matrix multiplication and element-wise multiplication.
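For completeness, here is a fuller sketch of my own (assuming mask, foreground, and background are images of the same size, as in the question): work in double, smooth the mask, and use it directly as the weight, so no rescaling is needed for display:
% Sketch: vectorized alpha blending with a smoothed mask, all in double [0, 1].
mask       = im2double(imread('mask.png'));   % file names as in the question
foreground = im2double(imread('fg.jpg'));
background = im2double(imread('bg.jpg'));

g = fspecial('gaussian', 100);                % smoothing kernel for the mask
alpha = imfilter(mask, g, 'replicate');       % smoothed weights in [0, 1]
% If the mask has a single channel, replicate it across the three color channels:
% alpha = repmat(alpha, [1 1 3]);

A = alpha .* foreground + (1 - alpha) .* background;
imshow(A);                                    % double in [0, 1], no rescaling needed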
I'll try to be precise and short.
I have a volume (128x128x128) and a mask (same size with [0|1|2] values)
I want to make the 3D volume matrix a 3D image with RGB, and store in each channel (red,green,blue) the points marked in the mask.
This is so I can use a 2-D representation by taking a slice of that 3-D cube instead of recomputing it over and over, which makes things much faster (very important in my project); so the 3-D volume + RGB would effectively be a store of 128 2-D images.
The question is, what steps do I have to take to do all this:
- Create a volume 128x128x128x3 ?
- Define a new colormap (original is gray) ?
- Join each channel ?
- How do I use imagesc/whatever to show one slice of that cube with the points colored as marked in the mask (e.g. imageRGB(:,:,64))?
That's just my guess, but I don't even know how to do it properly... I'm a bit lost. I hope you can help me. Here is a piece of code that may be wrong but may help you out:
% Create the matrix 4D
ovImg = zeros(size(volImg,1),size(volImg,2),size(volImg,3),3); % 128x128x128x3
% Store in each channel the points marked as groups
ovImg(:,:,:,1) = volImg .* (mask==1);
ovImg(:,:,:,2) = volImg .* (mask==2);
ovImg(:,:,:,3) = volImg .* (mask==3);
many many thanks!!
UPDATE:
I'm having some trouble with transparency and the colormap; this is what I did:
% Create the matrix 4D
ovImg = zeros(size(volImg,1),size(volImg,2),size(volImg,3),3);
% Store in each channel the points marked as groups
ovImg(:,:,:,1) = imaNorm.*(mask==1);
ovImg(:,:,:,2) = imaNorm.*(mask==2);
ovImg(:,:,:,3) = imaNorm.*(mask==3);
[X,Y,Z] = meshgrid(1:128,1:128,1:128);
imaNorm = volImg - min(volImg(:));
maxval = max(imaNorm(:));
ovImg = imaNorm + mask * maxval;
N= ceil(maxval);
c = [linspace(0,1,N)' zeros(N,2)];
my_colormap = [c(:,[1 2 3]) ; c(:,[3 1 2]) ; c(:,[2 3 1])];
figure;
imshow(squeeze(ovImg(:,:,64)),my_colormap);
figure;
imagesc(squeeze(mask(:,:,64)));
Result (overlaid image / mask)
Any ideas? Thanks again, everybody
FINAL UPDATE:
With the other approach that Gunther Struyf suggested, I got exactly what I wanted.
Thanks mate, I really appreciate it; I hope this helps other people too.
You can use imshow with a colormap to 'fake' an RGB image from a grayscale image (which is what you have). For the scaling I would not multiply the values but add an offset, so each mask value maps to a different range of the colormap.
For plotting a slice of the 3-D matrix, you can just index it and then squeeze it to remove the resulting singleton dimension:
Example:
[X,Y,Z]=meshgrid(1:128,1:128,1:128);
volImg =5*sin(X/3)+13*cos(Y/5)+8*sin(Z/10);
volImg=volImg-min(volImg(:));
mask = repmat(floor(linspace(0,3-2*eps,128))',[1 128 128]);
maxval=max(volImg(:));
ovImg=volImg+mask*maxval;
imshow(squeeze(ovImg(:,:,1)),jet(ceil(max(ovImg(:)))));
Unmasked, original image (imshow(squeeze(volImg(:,:,1)),jet(ceil(maxval))))
Resulting with mask (code block above):
For different colormaps, see here, or create your own colormap. E.g. your mask has three values, so let's match those with R, G, and B:
N = ceil(maxval);
c = [linspace(0,1,N)' zeros(N,2)];
my_colormap = [c(:,[1 2 3]) ; c(:,[3 1 2]) ; c(:,[2 3 1])];
figure
imshow(squeeze(ovImg(:,:,1)),my_colormap);
which gives:
Other approach:
Now that I understand your question, I see you had it quite right from the beginning; you only need to rescale the variable to values between 0 and 1, since, from the imshow documentation:
Color intensity can be specified on the interval 0.0 to 1.0.
which you can do using:
minval=min(volImg(:));
maxval=max(volImg(:));
volImg=(volImg-minval)/(maxval-minval);
Next up is your code:
ovImg = zeros([size(volImg),3]);
ovImg(:,:,:,1) = volImg .* (mask==1);
ovImg(:,:,:,2) = volImg .* (mask==2);
ovImg(:,:,:,3) = volImg .* (mask==3);
You just have to plot it now:
imshow(squeeze(ovImg(:,:,64,:)))
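As a small aside (my own note, not part of the answer): the rescaling can also be written with mat2gray, which maps the minimum of the array to 0 and the maximum to 1, and any of the 128 slices can then be shown straight from the precomputed 4-D array:
volImg = mat2gray(volImg);            % equivalent normalization to [0, 1]
k = 64;                               % any slice index from 1 to 128
imshow(squeeze(ovImg(:, :, k, :)));   % no recomputation needed per slice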
I'm currently working with MATLAB to do some image processing. I've been set a task to basically recreate the convolution function for applying filters. I managed to get the code working okay and everything seemed to be fine.
The next part was for me to do the following:
Write your own m-function for unsharp masking of a given image to produce a new output image.
Your function should apply the following steps:
Apply smoothing to produce a blurred version of the original image,
Subtract the blurred image from the original image to produce an edge image,
Add the edge image to the original image to produce a sharpened image.
Again I've got code mocked up to do this, but I run into a few problems. When carrying out the convolution, my image is cropped down by one pixel; this means that when I go to carry out the subtraction for the unsharp masking, the images are not the same size and the subtraction cannot take place.
To overcome this I want to create a blank matrix in the convolution function that is the same size as the input image; the new image will then go on top of this matrix, so in effect the new image has a one-pixel border around it to bring it back to its original size. When I try to implement this, all I get as output is the blank matrix I just created. Why is this happening, and would you be able to help me fix it?
My code is as follows.
Convolution
function [ imgout ] = convolution( img, filter )
%UNTITLED Summary of this function goes here
% Detailed explanation goes here
[height, width] = size(img); % height, width: number of im rows, etc.
[filter_height, filter_width] = size(filter);
for height_bound = 1:height - filter_height + 1 % Loop over output elements
    for width_bound = 1:width - filter_width + 1
        imgout = zeros(height_bound, width_bound); % Makes an empty matrix the correct size of the image.
        sum = 0;
        for fh = 1:filter_height % Loop over mask elements
            for fw = 1:filter_width
                sum = sum + img(height_bound - fh + filter_height, width_bound - fw + filter_width) * filter(fh, fw);
            end
        end
        imgout(height_bound, width_bound) = sum; % Store the result
    end
end
imshow(imgout)
end
Unsharpen
function sharpen_image = img_sharpen(img)
blur_image = medfilt2(img);
convolution(img, filter);
edge_image = img - blur_image;
sharpen_image = img + edge_image;
end
Yes. Concatenation, e.g.:
A = [1 2 3; 4 5 6]; % Matrix
B = [7; 8]; % Column vector
C = [A B]; % Concatenate
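As for the blank output: a sketch of what I believe the question is after (my reading, not part of the answer above) is to preallocate the full-size zero matrix once, before the loops, and write the convolution results into its interior. Recreating it inside the loop, as in the posted code, wipes the previous results on every iteration, which is why only the blank matrix comes out.
% Sketch: zero-padded output the same size as the input (for a 3x3 filter,
% this leaves a one-pixel border of zeros around the valid region).
function imgout = convolution(img, filter)
    [height, width] = size(img);
    [filter_height, filter_width] = size(filter);
    imgout = zeros(height, width);                   % full-size, created ONCE
    rowOff = floor(filter_height/2);                 % half-width of the border
    colOff = floor(filter_width/2);
    for height_bound = 1:height - filter_height + 1
        for width_bound = 1:width - filter_width + 1
            acc = 0;                                 % avoid shadowing the built-in sum
            for fh = 1:filter_height
                for fw = 1:filter_width
                    % same window indexing as the posted code
                    acc = acc + double(img(height_bound - fh + filter_height, width_bound - fw + filter_width)) * filter(fh, fw);
                end
            end
            imgout(height_bound + rowOff, width_bound + colOff) = acc; % shifted into the interior
        end
    end
    imshow(imgout, []);                              % display scaled, since imgout is double
end
With the output the same size as the input, the subtraction and addition in img_sharpen then work without any cropping.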
How can one implement the fisheye lens effect illustrated in that image?
One can use Google's logo to try it:
BTW, what's the term for it?
I believe this is typically referred to as either a "fisheye lens" effect or a "barrel transformation". Here are two links to demos that I found:
Sample code for how you can apply fisheye distortions to images using the 'custom' option for the function maketform from the Image Processing Toolbox.
An image processing demo which performs a barrel transformation using the function tformarray.
Example
In this example, I started with the function radial.m from the first link above and modified the way it relates points between the input and output spaces to create a nice circular image. The new function fisheye_inverse is given below, and it should be placed in a folder on your MATLAB path so you can use it later in this example:
function U = fisheye_inverse(X, T)
imageSize = T.tdata(1:2);
exponent = T.tdata(3);
origin = (imageSize+1)./2;
scale = imageSize./2;
x = (X(:, 1)-origin(1))/scale(1);
y = (X(:, 2)-origin(2))/scale(2);
R = sqrt(x.^2+y.^2);
theta = atan2(y, x);
cornerScale = min(abs(1./sin(theta)), abs(1./cos(theta)));
cornerScale(R < 1) = 1;
R = cornerScale.*R.^exponent;
x = scale(1).*R.*cos(theta)+origin(1);
y = scale(2).*R.*sin(theta)+origin(2);
U = [x y];
end
The fisheye distortion looks best when applied to square images, so you will want to make your images square by either cropping them or padding them with some color. Since the transformation of the image will not look right for indexed images, you will also want to convert any indexed images to RGB images using ind2rgb. Grayscale or binary images will also work fine. Here's how to do this for your sample Google logo:
[X, map] = imread('logo1w.png'); % Read the indexed image
rgbImage = ind2rgb(X, map); % Convert to an RGB image
[r, c, d] = size(rgbImage); % Get the image dimensions
nPad = (c-r)/2; % The number of padding rows
rgbImage = cat(1, ones(nPad, c, 3), rgbImage, ones(nPad, c, 3)); % Pad with white
Now we can create the transform with maketform and apply it with imtransform (or imwarp as recommended in newer versions):
options = [c c 3]; % An array containing the columns, rows, and exponent
tf = maketform('custom', 2, 2, [], ... % Make the transformation structure
@fisheye_inverse, options);
newImage = imtransform(rgbImage, tf); % Transform the image
imshow(newImage); % Display the image
And here's the image you should see:
You can adjust the degree of distortion by changing the third value in the options array, which is the exponential power used in the radial deformation of the image points.
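For example (my own illustration, not from the original answer), lowering the exponent gives a milder distortion:
options = [c c 1.5];                                            % exponent 1.5 instead of 3
tf = maketform('custom', 2, 2, [], @fisheye_inverse, options);  % rebuild the transform
imshow(imtransform(rgbImage, tf));                              % apply and display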
I think you are referring to the fisheye lens effect. Here is some code for imitating fisheye in MATLAB.
Just for the record:
This effect is a type of radial distortion called "barrel distortion".
For more information please see:
http://en.wikipedia.org/wiki/Distortion_(optics)
Here is a different method to apply an effect similar to barrel distortion using texture mapping (adapted from MATLAB Documentation):
[I,map] = imread('logo.gif');
[h,w] = size(I);
sphere;
hS = findobj('Type','surface');
hemisphere = [ones(h,w),I,ones(h,w)];
set(hS,'CData',flipud(hemisphere),...
'FaceColor','texturemap',...
'EdgeColor','none')
colormap(map)
colordef black
axis equal
grid off
set(gca,'xtick',[],'ztick',[],'ytick',[],'box','on')
view([90 0])
This will give you the circular frame you are looking for but the aliasing artifacts might be too much to deal with.