Resizing image increases noise? - matlab

I want to know: when we resize an image from 30x30 pixels to 150x150 pixels using Matlab, does resizing add extra noise to the image? And what about the opposite case (downscaling)?

Resizing down may increase the SNR (if, for example, each destination pixel is the sum of 2x2 source pixels). Resizing up increases blur, but is not supposed to decrease the SNR.
You can make a simple test:
Load a "clean" image, create a noisy image by adding random Gaussian noise, and measure the SNR.
Example:
I = im2double(imread('cameraman.tif')); %I is the "clean" image.
J = imnoise(I, 'gaussian'); %Add zero-mean Gaussian noise (default variance 0.01)
N = J - I; %Noise image
r = snr(I, N);
Result: r = 14.84
Resize down:
I2 = imresize(I, 0.5);
J2 = imresize(J, 0.5);
N2 = J2 - I2;
r2 = snr(I2, N2);
Result: r2 = 22.41 (the SNR improves by roughly a factor of sqrt(2), close to the theoretical improvement).
Resize up:
I3 = imresize(I2, 2);
J3 = imresize(J2, 2);
N3 = J3 - I3;
r3 = snr(I3, N3);
Result: r3 = 23.66 (the SNR stays about the same)
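To illustrate the 2x2 case mentioned above, here is a minimal sketch (my own addition, not part of the measured results): averaging 2x2 blocks of independent noise reduces the noise variance by about a factor of 4, i.e. roughly 6 dB.
blockAvg = @(A) (A(1:2:end-1,1:2:end-1) + A(2:2:end,1:2:end-1) + ...
                 A(1:2:end-1,2:2:end)   + A(2:2:end,2:2:end)) / 4;
I4 = blockAvg(I);       % "clean" image, averaged over 2x2 blocks
J4 = blockAvg(J);       % noisy image, averaged over 2x2 blocks
r4 = snr(I4, J4 - I4)   % expect a noticeably higher value than r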

Related

eigenvalue decomposition of structure tensor in matlab

I have a synthetic image. I want to do an eigenvalue decomposition of its local structure tensor (LST) for some edge detection purposes. I used the eigenvalues l1, l2 and eigenvectors e1, e2 of the LST to generate an adaptive ellipse for each pixel of the image. Unfortunately I get unequal eigenvalues l1, l2, and therefore unequal semi-axis lengths of the ellipse, for homogeneous regions of my figure:
However I get good response for a simple test image:
I don't know what is wrong in my code:
function [H,e1,e2,l1,l2] = LST_eig(I,sigma1,rw)
% LST_eig - compute the local structure tensor and its eigenvalue decomposition
%
% H = LST_eig(I,sigma1,rw);
%
% sigma1 is the pre-smoothing width (in pixels).
% rw is the filter bandwidth radius for tensor smoothing (in pixels).
%
n = size(I,1);
m = size(I,2);
if nargin<2
    sigma1 = 0.5;
end
if nargin<3
    rw = 0.001;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% pre-smoothing
J = imgaussfilt(I,sigma1);
% compute gradient using the Scharr operator
Sch = [-3 0 3;-10 0 10;-3 0 3];
%h = fspecial('sobel');
gx = imfilter(J,Sch,'replicate');
gy = imfilter(J,Sch','replicate');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% compute tensor components
gx2 = gx.^2;
gy2 = gy.^2;
gxy = gx.*gy;
% smooth the tensor components
gx2_sm = imgaussfilt(gx2,rw); %rw/sqrt(2*log(2))
gy2_sm = imgaussfilt(gy2,rw);
gxy_sm = imgaussfilt(gxy,rw);
H = zeros(n,m,2,2);
H(:,:,1,1) = gx2_sm;
H(:,:,2,2) = gy2_sm;
H(:,:,1,2) = gxy_sm;
H(:,:,2,1) = gxy_sm;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% eigen decomposition
l1 = zeros(n,m);
l2 = zeros(n,m);
e1 = zeros(n,m,2);
e2 = zeros(n,m,2);
for i = 1:n
    for j = 1:m
        Hmat = zeros(2);
        Hmat(:,:) = H(i,j,:,:);
        [V,D] = eigs(Hmat);
        D = abs(D);
        l1(i,j) = D(1,1); % eigenvalues
        l2(i,j) = D(2,2);
        e1(i,j,:) = V(:,1); % eigenvectors
        e2(i,j,:) = V(:,2);
    end
end
Any help is appreciated.
This is my ellipse drawing code:
% determining ellipse parameters from the eigenvalue decomposition of the LST
M = input('Enter the maximum allowed semi-major axes length: ');
I = input('Enter the input data: ');
row = size(I,1);
col = size(I,2);
a = zeros(row,col);
b = zeros(row,col);
cos_phi = zeros(row,col);
sin_phi = zeros(row,col);
for m = 1:row
    for n = 1:col
        a(m,n) = (l2(m,n)+eps)/(l1(m,n)+l2(m,n)+2*eps)*M;
        b(m,n) = (l1(m,n)+eps)/(l1(m,n)+l2(m,n)+2*eps)*M;
        cos_phi1 = e1(m,n,1);
        sin_phi1 = e1(m,n,2);
        len = hypot(cos_phi1,sin_phi1);
        cos_phi(m,n) = cos_phi1/len;
        sin_phi(m,n) = sin_phi1/len;
    end
end
%% plot elliptic structuring elements using the parametric equation and superimpose them on the image
figure; imagesc(I); colorbar; hold on
t = linspace(0,2*pi,50);
for i = 10:10:row-10
    for j = 10:10:col-10
        x0 = j;
        y0 = i;
        x = a(i,j)/2*cos(t)*cos_phi(i,j)-b(i,j)/2*sin(t)*sin_phi(i,j)+x0;
        y = a(i,j)/2*cos(t)*sin_phi(i,j)+b(i,j)/2*sin(t)*cos_phi(i,j)+y0;
        plot(x,y,'r','linewidth',1);
        hold on
    end
end
This is my new result with the Gaussian derivative kernel:
This is the new plot with axis equal:
I created a test image similar to yours (probably less complicated) as follows:
pos = yy([400,500]) + 100 * sin(xx(400)/400*2*pi);
img = gaussianlineclip(pos+50,7) + gaussianlineclip(pos-50,7);
I = double(stretch(img));
(This requires DIPimage to run)
Then ran your LST_eig on it (sigma1=1 and rw=3) and your code to draw ellipses (no change to either, except adding axis equal), and got this result:
I suspect some non-uniformity in some of the blue areas of your image, which cause very small gradients to appear. The problem with the definition of the ellipses as you use them is that, for sufficiently oriented patterns, you'll get a line even if that pattern is imperceptible. You can get around this by defining your ellipse axes lengths as follows:
a = repmat(M,size(l2)); % longest axis is always the same
b = M ./ (l2+1); % shortest axis is shorter the more important the largest eigenvalue is
The smallest eigenvalue l1 is high in regions with strong gradients but no clear direction. The above does not take this into account. One option could be to make a depend on both energy and anisotropy measures, and b depend only on energy:
T = 1000; % some threshold
r = M ./ max(l1+l2-T,1); % circle radius, smaller for higher energy
d = (l2-l1) ./ (l1+l2+eps); % anisotropy measure in range [0,1]
a = M*d + r.*(1-d); % use `M` length for high anisotropy, use `r` length for high isotropy (circle)
b = r; % use `r` width always
This way, the whole ellipse shrinks if there are strong gradients but no clear direction, whereas it stays large and circular when there are only weak or no gradients. The threshold T depends on image intensities, adjust as needed.
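For instance, one possible way to pick T automatically is to tie it to the distribution of l1+l2 over the image (this heuristic is my own suggestion, adjust or discard as needed):
E = l1 + l2;        % per-pixel "energy"
T = median(E(:));   % e.g. the median energy; raise it for a stricter threshold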
You should probably also consider taking the square root of the eigenvalues, as they correspond to the variance.
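For example, right after the eigendecomposition (a sketch; whether it improves your ellipses is something to verify):
l1 = sqrt(l1);   % convert variance-like eigenvalues to standard-deviation-like values
l2 = sqrt(l2);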
Some suggestions:
You can write
a = (l2+eps)./(l1+l2+2*eps) * M;
b = (l1+eps)./(l1+l2+2*eps) * M;
cos_phi = e1(:,:,1);
sin_phi = e1(:,:,2);
without a loop. Note that e1 is normalized by definition, there is no need to normalize it again.
Use Gaussian gradients instead of Gaussian smoothing followed by Sobel or Scharr filters. See here for some MATLAB implementation details.
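For example, a minimal sketch of separable Gaussian-derivative filtering (the kernel construction below is my own, not taken from a particular toolbox; the gradient sign convention does not matter for the structure tensor, since only products of gradients are used):
sigma = 1;                    % derivative scale
r = ceil(3*sigma);
t = -r:r;
g  = exp(-t.^2/(2*sigma^2));  % 1-D Gaussian
g  = g/sum(g);
dg = -(t/sigma^2).*g;         % derivative of the Gaussian
gx = imfilter(imfilter(I, dg, 'replicate'), g', 'replicate'); % derivative along x, smooth along y
gy = imfilter(imfilter(I, g, 'replicate'), dg', 'replicate'); % derivative along y, smooth along x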
Use eig, not eigs, when you need all eigenvalues. Especially for such a small matrix, there is no advantage to using eigs. eig seems to produce more consistent results. There is no need to take the absolute value of the eigenvalues (D = abs(D)), as they are non-negative by definition.
Your default value of rw = 0.001 is way too small, a sigma of that size has no effect on the image. The goal of this smoothing is to average gradients in a local neighborhood. I used rw=3 with good results.
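For example, with the values mentioned above (the settings I used, not universally optimal ones):
[H,e1,e2,l1,l2] = LST_eig(I, 1, 3);   % sigma1 = 1, rw = 3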
Use DIPimage. There is a structuretensor function, Gaussian gradients, and a lot more useful stuff. The 3.0 version (still in development) is a major rewrite that improves significantly on dealing with vector- and matrix-valued images. I can write all of your LST_eig as follows:
I = dip_image(I);
g = gradient(I, sigma1);
H = gaussf(g*g.', rw);
[e,l] = eig(H);
% Equivalences with your outputs:
l1 = l{2};
l2 = l{1};
e1 = e{2,:};
e2 = e{1,:};

Image rotation by Matlab without using imrotate

I am trying to rotate an image in Matlab without using the imrotate function. I actually managed it by using a transformation matrix, but it is not good enough: the rotated image is "sliding". Let me show you with pictures.
This is the image which I want to rotate:
But when I rotate it, for example by 45 degrees, it becomes this:
I am asking why this is happening. Here is my code; are there any mathematical or programming mistakes in it?
image = torso;
%image padding
[Rows, Cols] = size(image);
Diagonal = sqrt(Rows^2 + Cols^2);
RowPad = ceil(Diagonal - Rows) + 2;
ColPad = ceil(Diagonal - Cols) + 2;
imagepad = zeros(Rows+RowPad, Cols+ColPad);
imagepad(ceil(RowPad/2):(ceil(RowPad/2)+Rows-1),ceil(ColPad/2):(ceil(ColPad/2)+Cols-1)) = image;
degree = 45;
%midpoints
midx = ceil((size(imagepad,1)+1)/2);
midy = ceil((size(imagepad,2)+1)/2);
imagerot = zeros(size(imagepad));
%rotation
for i = 1:size(imagepad,1)
    for j = 1:size(imagepad,2)
        x = (i-midx)*cos(degree)-(j-midy)*sin(degree);
        y = (i-midx)*sin(degree)+(j-midy)*cos(degree);
        x = round(x)+midx;
        y = round(y)+midy;
        if (x>=1 && y>=1)
            imagerot(x,y) = imagepad(i,j); % k degrees rotated image
        end
    end
end
figure,imagesc(imagerot);
colormap(gray(256));
The reason you have holes in your image is because you are computing the location in imagerot of each pixel in imagepad. You need to do the computation the other way around. That is, for each pixel in imagerot interpolate in imagepad. To do this, you just need to apply the inverse transform, which in the case of a rotation matrix is just the transpose of the matrix (just change the sign on each sin and translate the other way).
Loop over pixels in imagerot:
rads = degree*pi/180;             % convert the rotation angle from degrees to radians
imagerot = zeros(size(imagepad)); % midx and midy same for both
for i = 1:size(imagerot,1)
    for j = 1:size(imagerot,2)
        x =  (i-midx)*cos(rads)+(j-midy)*sin(rads);
        y = -(i-midx)*sin(rads)+(j-midy)*cos(rads);
        x = round(x)+midx;
        y = round(y)+midy;
        if (x>=1 && y>=1 && x<=size(imagepad,2) && y<=size(imagepad,1))
            imagerot(i,j) = imagepad(x,y); % k degrees rotated image
        end
    end
end
Also note that your midx and midy need to be calculated with size(imagepad,2) and size(imagepad,1) respectively, since the first dimension refers to the number of rows (height) and the second to width.
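For example, the corrected center computation would be:
midx = ceil((size(imagepad,2)+1)/2);   % x corresponds to columns (width)
midy = ceil((size(imagepad,1)+1)/2);   % y corresponds to rows (height)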
NOTE: The same approach applies when you decide to adopt an interpolation scheme other than nearest neighbor, as in Rody's example with linear interpolation.
EDIT: I'm assuming you are using a loop for demonstrative purposes, but in practice there is no need for loops. Here's an example of nearest neighbor interpolation (what you are using), keeping the same size image, but you can modify this to produce a larger image that includes the whole source image:
imagepad = imread('peppers.png');
[nrows, ncols, nslices] = size(imagepad);
midx = ceil((ncols+1)/2);
midy = ceil((nrows+1)/2);
Mr = [cos(pi/4) sin(pi/4); -sin(pi/4) cos(pi/4)]; % e.g. 45 degree rotation
% rotate about center
[X, Y] = meshgrid(1:ncols,1:nrows);
XYt = [X(:)-midx Y(:)-midy]*Mr;
XYt = bsxfun(@plus,XYt,[midx midy]);
xout = round(XYt(:,1)); yout = round(XYt(:,2)); % nearest neighbor!
outbound = yout<1 | yout>nrows | xout<1 | xout>ncols;
zout = repmat(cat(3,1,2,3),nrows,ncols,1); zout = zout(:);
xout(xout<1) = 1; xout(xout>ncols) = ncols;
yout(yout<1) = 1; yout(yout>nrows) = nrows;
xout = repmat(xout,[3 1]); yout = repmat(yout,[3 1]);
imagerot = imagepad(sub2ind(size(imagepad),yout,xout,zout(:))); % lookup
imagerot = reshape(imagerot,size(imagepad));
imagerot(repmat(outbound,[1 1 3])) = 0; % set background value to [0 0 0] (black)
To modify the above to linear interpolation, compute the 4 neighboring pixels to each coordinate in XYt and perform a weighted sum using the fractional components product as the weights. I'll leave that as an exercise, since it would only serve to bloat my answer further beyond the scope of your question. :)
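For reference, here is a hedged sketch of that bilinear variant for a single channel, continuing from the XYt computed above (the variable names below are my own, not from the original code; out-of-bounds pixels are left black):
xq = XYt(:,1);  yq = XYt(:,2);          % source coordinates (one per output pixel)
x0 = floor(xq); y0 = floor(yq);         % top-left neighbor
fx = xq - x0;   fy = yq - y0;           % fractional offsets, used as weights
valid = x0 >= 1 & y0 >= 1 & x0 <= ncols-1 & y0 <= nrows-1;
Ic  = double(imagepad(:,:,1));          % one channel, for brevity
idx = @(r,c) sub2ind([nrows ncols], r, c);
v = zeros(nrows*ncols, 1);
v(valid) = (1-fx(valid)).*(1-fy(valid)).*Ic(idx(y0(valid),   x0(valid)  )) + ...
              fx(valid) .*(1-fy(valid)).*Ic(idx(y0(valid),   x0(valid)+1)) + ...
           (1-fx(valid)).*   fy(valid) .*Ic(idx(y0(valid)+1, x0(valid)  )) + ...
              fx(valid) .*   fy(valid) .*Ic(idx(y0(valid)+1, x0(valid)+1));
imagerot_lin = uint8(reshape(v, nrows, ncols));   % bilinearly rotated channel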
The method you are using (rotate by sampling) is the fastest and simplest, but also the least accurate.
Rotation by area mapping, as given below (this is a good reference), is much better at preserving color.
Note, however, that this will only work on greyscale/RGB images, NOT on colormapped images like the one you seem to be using.
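If your image is colormapped (indexed), one option is to convert it to RGB or grayscale first, for example (the file name below is just a placeholder):
[Xind, map] = imread('my_indexed_image.png');   % placeholder indexed image
rgb  = im2uint8(ind2rgb(Xind, map));            % RGB version
gray = im2uint8(rgb2gray(ind2rgb(Xind, map)));  % grayscale version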
image = imread('peppers.png');
figure(1), clf, hold on
subplot(1,2,1)
imshow(image);
degree = 45;
switch mod(degree, 360)
    % Special cases
    case 0
        imagerot = image;
    case 90
        imagerot = rot90(image);
    case 180
        imagerot = image(end:-1:1, end:-1:1);
    case 270
        imagerot = rot90(image(end:-1:1, end:-1:1));
    % General rotations
    otherwise
        % Convert to radians and create transformation matrix
        a = degree*pi/180;
        R = [+cos(a) +sin(a); -sin(a) +cos(a)];
        % Figure out the size of the transformed image
        [m,n,p] = size(image);
        dest = round( [1 1; 1 n; m 1; m n]*R );
        dest = bsxfun(@minus, dest, min(dest)) + 1;
        imagerot = zeros([max(dest) p], class(image));
        % Map all pixels of the transformed image to the original image
        for ii = 1:size(imagerot,1)
            for jj = 1:size(imagerot,2)
                source = ([ii jj]-dest(1,:))*R.';
                if all(source >= 1) && all(source <= [m n])
                    % Get all 4 surrounding pixels
                    C = ceil(source);
                    F = floor(source);
                    % Compute the relative areas
                    A = [...
                        ((C(2)-source(2))*(C(1)-source(1))),...
                        ((source(2)-F(2))*(source(1)-F(1)));
                        ((C(2)-source(2))*(source(1)-F(1))),...
                        ((source(2)-F(2))*(C(1)-source(1)))];
                    % Extract colors and re-scale them relative to area
                    cols = bsxfun(@times, A, double(image(F(1):C(1),F(2):C(2),:)));
                    % Assign
                    imagerot(ii,jj,:) = sum(sum(cols),2);
                end
            end
        end
end
subplot(1,2,2)
imshow(imagerot);
Output:
This rotates a colored image by a user-given angle, without any cropping, in Matlab.
The output of this program is similar to the output of the built-in command imrotate. The program dynamically creates the background according to the angle given by the user. Using the rotation matrix and origin shifting, we get the relation between the coordinates of the initial and final images; using that relation, we then map the intensity values for each pixel.
img = imread('img.jpg');
[rowsi,colsi,z] = size(img);
angle = 45;
rads = 2*pi*angle/360;
% calculate array dimensions such that the rotated image fits in them exactly.
% we use the absolute value so that we get a positive value in any case, i.e. any quadrant.
rowsf = ceil(rowsi*abs(cos(rads))+colsi*abs(sin(rads)));
colsf = ceil(rowsi*abs(sin(rads))+colsi*abs(cos(rads)));
% define an array with the calculated dimensions and fill it with zeros, i.e. black
C = uint8(zeros([rowsf colsf 3]));
% calculate the centers of the original and final images
xo = ceil(rowsi/2);
yo = ceil(colsi/2);
midx = ceil((size(C,1))/2);
midy = ceil((size(C,2))/2);
% in this loop we calculate the corresponding coordinates in the original image A
% for each pixel of C; its intensity is assigned after checking
% whether it lies within the bounds of A (the original image)
for i = 1:size(C,1)
    for j = 1:size(C,2)
        x =  (i-midx)*cos(rads)+(j-midy)*sin(rads);
        y = -(i-midx)*sin(rads)+(j-midy)*cos(rads);
        x = round(x)+xo;
        y = round(y)+yo;
        if (x>=1 && y>=1 && x<=size(img,1) && y<=size(img,2))
            C(i,j,:) = img(x,y,:);
        end
    end
end
imshow(C);
Check this out.
This is the fastest way you can do it:
img = imread('Koala.jpg');
theta = pi/10;
rmat = [
    cos(theta)  sin(theta) 0
   -sin(theta)  cos(theta) 0
    0           0          1];
mx = size(img,2);
my = size(img,1);
corners = [
    0  0  1
    mx 0  1
    0  my 1
    mx my 1];
new_c = corners*rmat;
T = maketform('affine', rmat); %# affine transform built from the rotation matrix
img2 = imtransform(img, T, ...
    'XData',[min(new_c(:,1)) max(new_c(:,1))],...
    'YData',[min(new_c(:,2)) max(new_c(:,2))]);
subplot(121), imshow(img);
subplot(122), imshow(img2);

Using imnoise to add gaussian noise to an image

How do I add white Gaussian noise with SNR=5dB to an image using imnoise?
I know that the syntax is:
J = imnoise(I,type,parameters)
and:
SNR = 10*log10( var(image) / var(error image) )
How do I use this SNR value to add noise to the image?
Let's start by seeing how the SNR relates to the noise. Your error image is the difference between the original image and the noisy image, meaning that the error image is the noise itself. Therefore, the SNR is actually:
SNR = 10*log10( var(image) / var(noise) )
For a given image and SNR = 5 dB, the variance of the noise would be:
var(noise) = var(image) / 10^(SNR/10) = var(image) / 10^0.5 = var(image) / sqrt(10)
Now let's translate all of this into MATLAB code. To add white Gaussian noise to an image (denote it I) using the imnoise command, the syntax is:
I_noisy = imnoise(I, 'gaussian', m, v)
where m is the mean noise and v is its variance. It is also important to note that imnoise assumes that the intensities in image I range from 0 to 1.
In our case, we'll add zero-mean noise and its variance is v = var(I(:))/sqrt(10). The complete code is:
%// Adjust intensities in image I to range from 0 to 1
I = I - min(I(:));
I = I / max(I(:));
%// Add noise to image
v = var(I(:)) / sqrt(10);
I_noisy = imnoise(I, 'gaussian', 0, v);
Clarification: we use var(I(:)) to compute the variance over all samples in image I (instead of var(I), which computes the variance along each column).
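A quick illustration of the difference (my own example):
A = magic(4);
var(A)      % 1x4 vector: the variance of each column
var(A(:))   % scalar: the variance over all 16 elements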
Hope this helps!
Example
I = imread('eight.tif');
I = double(I);
%// Adjust intensities in image I to range from 0 to 1
I = I - min(I(:));
I = I / max(I(:));
%// Add noise to image
v = var(I(:)) / sqrt(10);
I_noisy = imnoise(I, 'gaussian', 0, v);
%// Show images
figure
subplot(1, 2, 1), imshow(I), title('Original image')
subplot(1, 2, 2), imshow(I_noisy), title('Noisy image, SNR=5db')
Here's the result:
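As a sanity check (my own addition, not part of the original answer), you can estimate the SNR actually achieved; the clipping inside imnoise makes it approximate:
noise = I_noisy - I;
SNR_est = 10*log10(var(I(:)) / var(noise(:)))   % should be close to 5 dB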

Decrease Target Contrast without losing image details

I have an original image (on the left) and I want to decrease the object's (the horizontal shape thing's) contrast to simulate the reference image (on the right). I have tried various smoothing methods, but all of them affect the overall image contrast (for example, the background contrast). Is there any way I can build a model and then subtract it from the original image?
In other words, is there any way to add opacity to the object in the image on the left to simulate the image on the right? The object mask is provided and it is the same size as the original image.
Any suggestions are appreciated.
You can reduce the contrast by scaling down the masked area of the input image:
I = imread('https://i.stack.imgur.com/QML1p.png'); %I = imread('1.png'); %Read image
M = imbinarize(imread('https://i.stack.imgur.com/nBqeS.png')); %imbinarize(imread('2.png')); %Read mask, and convert to binary image.
J = I;
J(M) = J(M)*.5; %Multiply the "horizontal shape thing" by 0.5 - reducing contrast.
figure;imshow(J);
Result:
The above solution essentially sets the opacity of the "shape thing".
I tried to make it a bit smoother, but the result is still quite different.
I = imread('https://i.stack.imgur.com/QML1p.png');
%I = imread('1.png'); %Read image
M = imbinarize(imread('https://i.stack.imgur.com/nBqeS.png'));
%M = imbinarize(imread('2.png')); %Read mask, and convert to binary image.
M = imdilate(M, strel('disk', 20)); %Make mask thicker.
%Create blurred "shape thing"
T = zeros(size(I));
T(M) = double(I(M)); %"shape thing" with some margins, and black surrounding.
T = T/255; %Change pixel range from [0, 255] to [0, 1].
T = imgaussfilt(double(T), 5); %Blur (2-D Gaussian filtering).
%Create blurred mask:
M = imgaussfilt(double(M), 20); %Blur the mask (2-D Gaussian filtering).
M = max(M, 0.5);
J = T.*M + im2double(I).*(1-M); %Weighted average.
J = imgaussfilt(J, 1);
J = im2uint8(J);
figure;imshow(J);
Result:
Try to play with it a little...
Another try:
Reduce the relative contrast between "shape thing" and background by averaging J and 0.5
Add Gaussian noise (so the result is closer to the reference).
Code:
I = imread('https://i.stack.imgur.com/QML1p.png');
M = imbinarize(imread('https://i.stack.imgur.com/nBqeS.png'));
M = imdilate(M, strel('disk', 20)); %Make mask thicker.
%Create blurred "shape thing"
T = zeros(size(I));
T(M) = double(I(M)); %"shape thing" with some margins, and black surrounding.
T = T/255; %Change pixel range from [0, 255] to [0, 1].
T = imgaussfilt(double(T), 5); %Blur (2-D Gaussian filtering).
T = T*1.5; %Scale T to change the opacity.
%Create blurred mask:
M = imgaussfilt(double(M), 20); %Blur the mask (2-D Gaussian filtering).
M = max(M, 0.5);
J = T.*M + im2double(I).*(1-M); %Weighted average.
J = imgaussfilt(J, 1);
%Reduce the relative contrast between "shape thing" and background by averaging J and 0.5
J = (J + 0.5)/2;
J = min(J, 1);
%Add Gaussian noise (so the result is closer to the reference).
J = imnoise(J, 'gaussian', 0, 0.005);
J = im2uint8(J);
figure;imshow(J);
Result: