How to denoise a noisy image in MATLAB using Tikhonov regularization?

I am trying to write code for denoising a noisy image in MATLAB using Tikhonov regularization. For this purpose, I have written the following code:
refimg = im2double(mat2gray((imread('image1.png'))))*255; % original image
rng(0);
Noisyimg = refimg + randn(size(refimg))*20; % noisy image
%% Accelerated Gradient Method
x = cell(1,Nbiter);
x{1} = Noisyimg; x{2} = Noisyimg;
y = zeros(size(refimg));
for iter = 3:Nbiter
    y = x{iter-1} + ((iter-2)/(iter+1))*(x{iter-1}-x{iter-2});
    [ux,uy] = grad(y,options);
    GradFy = mu.*(y-Noisyimg)-K.*div(ux,uy,options);
    xn = y - t*GradFy;
    x{iter} = min(max(xn,0),255);
end
in which K, mu, and t are positive parameters that must be assigned. I have run the code with many different values for these parameters, but I cannot get a denoised image; the output is still very noisy. For this reason, I guess there is something wrong with the code. I would really appreciate it if you could help me find the problem. Below I have inserted the image that I am trying to denoise, as well as the energy functional that I am trying to minimize and the algorithm that I have used. Here, f is the noisy image and u is the image to be reconstructed. Many thanks.
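For reference, here is a minimal, self-contained sketch of the same accelerated-gradient scheme for a grayscale image, with simple forward-difference gradient and matching divergence operators and example parameter values (the helper functions and the values of mu, K, t, and Nbiter below are assumptions, not the original grad/div/options):
f  = Noisyimg;                     % noisy input from above (grayscale)
mu = 1;                            % data-fidelity weight (assumed)
K  = 10;                           % regularization weight (assumed)
t  = 1/(mu + 8*K);                 % step size; mu + 8*K bounds the Lipschitz constant
Nbiter = 300;
gradx = @(u) [diff(u,1,2), zeros(size(u,1),1)];   % forward difference along columns
grady = @(u) [diff(u,1,1); zeros(1,size(u,2))];   % forward difference along rows
divop = @(px,py) [px(:,1), diff(px,1,2)] + [py(1,:); diff(py,1,1)]; % negative adjoint of grad
xprev = f; xcurr = f;
for iter = 3:Nbiter
    y      = xcurr + ((iter-2)/(iter+1))*(xcurr - xprev);
    GradFy = mu*(y - f) - K*divop(gradx(y), grady(y));
    xprev  = xcurr;
    xcurr  = min(max(y - t*GradFy, 0), 255);
end
figure, imshow(xcurr, [0 255]);
With t at or below 1/(mu + 8*K) the iteration should converge; if the output still looks noisy, increasing K strengthens the smoothing.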

Related

Matlab adding rgb to binary image side by side

I'm supposed to add another image next to my threshold image, with its original color, like so: expected image
But I'm unsure how to do it, having only achieved the binary threshold image in MATLAB. How do I show images side by side?
my result
clear all;
close all;
clc;
% read image
palm = imread('palmDown (2).jpg');
%split into RGB
redPalm = palm(:,:,1);
greenPalm = palm(:,:,2);
bluePalm = palm(:,:,3);
redLevel = -0.1;
greenLevel = -0.1;
blueLevel = 0.06;
redThresh = imbinarize(redPalm, redLevel);
greenThresh = imbinarize(greenPalm, greenLevel);
blueThresh = imbinarize(bluePalm, blueLevel);
colorSum = (redThresh&greenThresh&blueThresh);
colorSum2 = imcomplement(colorSum);
thumbFilled = imfill(colorSum2, 'holes');
figure;
imshow(thumbFilled); title('Sum of all');
There are many ways to colorize the thresholded image. One simple way is by multiplication:
palm = im2double(palm); % it’s easier to work with doubles in MATLAB
palm2 = palm .* thumbFilled;
imshow([palm, palm2])
The multiplication uses implicit singleton expansion (R2016b and newer). If you have an older version of MATLAB it won't work; you'll have to use bsxfun instead.
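For example, with bsxfun the same masking could be written as below (assuming thumbFilled is a logical mask with the same height and width as palm):
% Element-wise multiply each colour channel by the mask (works before R2016b)
palm2 = bsxfun(@times, palm, double(thumbFilled));
imshow([palm, palm2])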

Is the answer for the following MATLAB script correct?

I have a task regarding the MATLAB Image Processing Toolbox. The task is the following:
My solution to these steps is:
I = imread('Ball.jpg');
I1 = imnoise(I, 'salt&pepper', 0.2);
G = rgb2gray(I1);
C = fspecial('Laplacian',h);
imwrite(C, 'clean.jpg');
subplot(1,2,1);
imshow(I1,[]);
subplot(1,2,2);
imshow(C,[]);
I think you made quite a few mistakes.
First, the image you read is already noisy, since the task doesn't specifically say to "add noise to the image". This makes your second step, imnoise, redundant.
Second, by using fspecial you are only creating a filter kernel. In this case it is a Laplacian filter with a given alpha (between 0 and 1). That alone doesn't filter your image; you have to use the function imfilter to actually process the image.
I = imread('Ball.jpg');
G = rgb2gray(I);
h = fspecial('Laplacian',0.7); % 0.7 is the alpha, try out which value suits your case the most
C = imfilter(G,h);
imwrite(C, 'clean.jpg');
subplot(1,2,1);
imshow(I,[]);
subplot(1,2,2);
imshow(C,[]);
Note that the Laplacian filter may not be the most suitable one for you. There are lots of filter types listed in the MATLAB documentation that you can use; consider a Gaussian filter, for example.
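For instance, a Gaussian variant of the same pipeline might look like this (the kernel size and sigma below are just guesses to tune):
h = fspecial('gaussian', [5 5], 1);  % 5x5 Gaussian kernel with sigma = 1
C = imfilter(G, h);                  % smooths the image instead of enhancing edges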
Your solution is incomplete; for example, you don't apply your filter to your noisy picture. Here is an example that might work:
%% Load image (I.)
I = imread('Ball.jpg');
%% Convert image into grayscale (II.)
G = rgb2gray(I);
%% Add noise (if 'Ball.jpg' isn't already noisy)
I1 = imnoise(G, 'salt & pepper', 0.2); % NB : imnoise needs the image to be grayscale
%% Create the filter (III.)
C = fspecial('Laplacian');
%% Apply the filter (III.)
IClean = filter2(C,I1);
%% Write the picture in new file (IV.)
imwrite(IClean, 'clean.jpg');
%% Display images (V.)
subplot(1,2,1), imshow(I1,[]);
subplot(1,2,2), imshow(IClean,[]);
Depending on the result, you can validate your idea of an "appropriate spatial domain filter" in question III.
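As a side note, for salt & pepper noise specifically a median filter is usually a better fit than a Laplacian (which enhances edges rather than removing impulse noise); a minimal sketch reusing the file names above:
I  = imread('Ball.jpg');
G  = rgb2gray(I);
I1 = imnoise(G, 'salt & pepper', 0.2);   % skip this if the image is already noisy
IClean = medfilt2(I1, [3 3]);            % 3x3 median filter suppresses impulse noise
imwrite(IClean, 'clean.jpg');
subplot(1,2,1), imshow(I1,[]);
subplot(1,2,2), imshow(IClean,[]);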

Matlab Rectify image with reference of corner points

I want to rectify an image with perspective distortion. I have the corner points, and I also have an algorithm that performs what I need, but it executes really slowly. It uses the 'imtransform' and 'maketform' functions, and MATLAB has faster functions for these operations, so I tried to replace them, but I couldn't get it right. Any help will be appreciated.
Here are the images to make this question clearer:
Input image with known coordinates (x, y):
and the desired output:
This process takes about 2 seconds to execute; I need to replace it with the newer MATLAB functions, but I couldn't manage it.
The old algorithm was:
% X has the clockwise X coordinates
% Y has the clockwise Y coordinates
A=zeros(8,8);
A(1,:)=[X(1),Y(1),1,0,0,0,-1*X(1)*x(1),-1*Y(1)*x(1)];
A(2,:)=[0,0,0,X(1),Y(1),1,-1*X(1)*y(1),-1*Y(1)*y(1)];
A(3,:)=[X(2),Y(2),1,0,0,0,-1*X(2)*x(2),-1*Y(2)*x(2)];
A(4,:)=[0,0,0,X(2),Y(2),1,-1*X(2)*y(2),-1*Y(2)*y(2)];
A(5,:)=[X(3),Y(3),1,0,0,0,-1*X(3)*x(3),-1*Y(3)*x(3)];
A(6,:)=[0,0,0,X(3),Y(3),1,-1*X(3)*y(3),-1*Y(3)*y(3)];
A(7,:)=[X(4),Y(4),1,0,0,0,-1*X(4)*x(4),-1*Y(4)*x(4)];
A(8,:)=[0,0,0,X(4),Y(4),1,-1*X(4)*y(4),-1*Y(4)*y(4)];
v=[x(1);y(1);x(2);y(2);x(3);y(3);x(4);y(4)];
u=A\v;
% our transfer function
U=reshape([u;1],3,3)';
w=U*[X';Y';ones(1,4)];
w=w./(ones(3,1)*w(3,:));
T=maketform('projective',U');
% apply the transform to rectify (flatten) the image
P2=imtransform(I,T,'XData',[1 n],'YData',[1 m]);
If it helps, here is how I generated the A matrix and the U matrix:
Out Link
Using the built-in MATLAB functions (fitgeotrans, imref2d, and imwarp), the following code runs in 0.06 seconds on my laptop:
% read the image
im = imread('paper.jpg');
tic
% set the moving points := the original image control points
x = [1380;2183;1282;422];
y = [727;1166;2351;1678];
movingPoints = [x,y];
% set the fixed points := the desired image control points
xfix = [1;1000;1000;1];
yfix = [1;1;1000;1000];
fixedPoints = [xfix,yfix];
% generate geometric transform
tform = fitgeotrans(movingPoints,fixedPoints,'projective');
% generate reference object (full desired image size)
R = imref2d([1000 1000]);
% warp image
outputImage = imwarp(im,tform,'OutputView',R);
toc
% show image
imshow(outputImage);
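If you also need the warped locations of the original control points (the analogue of the w = U*[X';Y';ones(1,4)] step in the old code), the same tform can be queried with transformPointsForward; a small sketch using the variables above:
% Map the original control points through the fitted projective transform
[uMapped, vMapped] = transformPointsForward(tform, x, y);
disp([uMapped, vMapped]);   % these should land close to fixedPoints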

How to smooth the edges of an image obtained from the imagesc function

I have an RGB image obtained by saving the output of the imagesc function, as shown below. How can I refine/smooth the edges present in the image?
The image has sharp edges that I need to smooth, and I'm not able to find a solution that works for an RGB image. Instead of the staircase effect seen in the image, I'd like to even out the edges. Please help; thanks in advance.
Maybe imresize will help you:
% here im just generating an image similar to yours
A = zeros(20);
for ii = -2:2
A = A + (ii + 3)*diag(ones(20-abs(ii),1),ii);
end
A([1:5 16:20],:) = 0;A(:,[1:5 16:20]) = 0;
subplot(121);
imagesc(A);
title('original')
% resizing image with bi-linear interpolation
B = imresize(A,100,'bilinear');
subplot(122);
imagesc(B);
title('resized')
EDIT
Here I do resize + filtering + rounding:
% generates image
A = zeros(20);
for ii = -2:2
A = A + (ii + 3)*diag(ones(20-abs(ii),1),ii);
end
A([1:5 16:20],:) = 0;A(:,[1:5 16:20]) = 0;
subplot(121);
imagesc(A);
title('original')
% resizing
B = imresize(A,20,'nearest');
% filtering & rounding
C = ceil(imgaussfilt(B,8));
subplot(122);
imagesc(C);
title('resized')
Solution
Use imfilter and fspecial to perform a convolution of your image with a Gaussian:
I = imread('im.png');
H = fspecial('gaussian',5,5);
I2 = imfilter(I,H);
Change the kernel size and sigma arguments of fspecial (which determine how strongly the image is blurred) to make the image smoother or sharper.
Result
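If you have R2015a or newer, imgaussfilt gives the same kind of Gaussian smoothing in a single call (the sigma value below is just an assumption to tune):
I  = imread('im.png');
I2 = imgaussfilt(I, 2);   % sigma = 2; a larger sigma gives stronger smoothing
imshow(I2);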
If you are just looking for straighter edges, like an elevation map you can try contourf.
cmap = colormap();
[col,row] = meshgrid(1:size(img,2), 1:size(img,1));
v = linspace(min(img(:)),max(img(:)),size(cmap,1));
contourf(col,row,img,v,'edgecolor','none');
axis('ij');
This produces the following result using a test function that I generated.

How to implement integral image on sliding window detection?

I am doing a project to detect people in a crowd using HOG-LBP, and I want to make it work in real time. I've read in some references that an integral image/histogram can speed up sliding window detection. I want to ask how to use an integral image in my sliding window detection.
Here is the code for the integral image in MATLAB:
A = (cumsum(cumsum(double(img)),2));
and here is my sliding window detection code:
im = strcat ('C:\Documents\Crowd_PETS09\S1\L1\Time_13-57\View_001\frame_0150.jpg');
im = imread (im);
figure (1), imshow(im);
win_size= [32, 32];
[lastRightCol lastRightRow d] = size(im);
counter = 1;
%% Scan the window by using sliding window object detection
% this for loop scan the entire image and extract features for each sliding window
% Loop on scales (based on size of the window)
for s=1
    disp(strcat('s is',num2str(s)));
    X=win_size(1)*s;
    Y=win_size(2)*s;
    for y = 1:X/4:lastRightCol-Y
        for x = 1:Y/4:lastRightRow-X
            %get four points for boxes
            p1 = [x,y];
            p2 = [x+(X-1), y+(Y-1)];
            po = [p1; p2] ;
            % cropped image based on the four points
            crop_px = [po(1,1) po(2,1)];
            crop_py = [po(1,2) po(2,2)];
            topLeftRow = ceil(min(crop_px));
            topLeftCol = ceil(min(crop_py));
            bottomRightRow = ceil(max(crop_px));
            bottomRightCol = ceil(max(crop_py));
            cropedImage = im(topLeftCol:bottomRightCol,topLeftRow:bottomRightRow,:);
            %Get the feature vector from croped image
            HOGfeatureVector{counter}= getHOG(double(cropedImage));
            LBPfeatureVector{counter}= getLBP(cropedImage);
            LBPfeatureVector{counter}= LBPfeatureVector{counter}';
            boxPoint{counter} = [x,y,X,Y];
            counter = counter+1;
            x = x+2;
        end
    end
end
Where should I put the integral image code?
I would really appreciate it if someone could help me figure it out.
Thank you.
The integral image is best suited for Haar-like features; using it for HOG or LBP would be tricky. I would suggest first getting your algorithm working, and then thinking about optimizing it.
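For context, the speed-up an integral image gives comes from being able to read off any rectangular sum in four lookups, which is exactly what Haar-like features need; a minimal sketch (r1, r2, c1, c2 are hypothetical box bounds):
% Integral image with a zero row/column prepended so every box uses the same formula
ii = zeros(size(img,1)+1, size(img,2)+1);
ii(2:end,2:end) = cumsum(cumsum(double(img),1),2);
% Sum of the pixels in rows r1:r2 and columns c1:c2 via four lookups
boxSum = ii(r2+1,c2+1) - ii(r1,c2+1) - ii(r2+1,c1) + ii(r1,c1);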
By the way, the Computer Vision System Toolbox includes the extractHOGFeatures function, which would be helpful. Here's an example of training a HOG-SVM classifier to recognize hand-written digits. Also, there is a vision.PeopleDetector object, which uses a HOG-SVM classifier to detect people. You could either use it directly for your project, or use it to evaluate the performance of your own algorithm.
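For example, the pre-trained detector can be tried directly on one of your frames; a small sketch (the frame name is taken from your code):
im = imread('frame_0150.jpg');                 % any frame from the sequence
detector = vision.PeopleDetector;              % pre-trained HOG-SVM people detector
[bboxes, scores] = step(detector, im);         % detect people in the frame
out = insertObjectAnnotation(im, 'rectangle', bboxes, scores);
figure, imshow(out);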