first derivative by gradient of image by kernel - matlab

Let's say that for each pixel the gradient is ∇g = [∂f/∂x, ∂f/∂y]. Then the first derivative should be measured by two operators like 1/2*[1,0,1;0,0,0;-1,0,-1] and 1/2*[-1,0,1;0,0,0;-1,0,-1],
then:
[i,j]=gradient(im);
filt1=[1,0,1;0,0,0;-1,0,-1];
filt2=[-1,0,1;0,0,0;-1,0,-1];
ii=(1./2).*(conv2(filt1,i));
jj=(1./2).*(conv2(filt2,j));
G_x=conv2(ii,im);
G_y=conv2(jj,im);
Is this correct, or should I first multiply the operators by 1/2 and then convolve them?

Since scalar multiplication commutes with convolution (convolution is linear), the order of the multiplication should not play any role.
On the other hand, your filters don't look to me like they perform a differentiation. The classical filters for discrete differentiation are the Sobel kernels, which look like this:
[1,0,-1
2,0,-2
1,0,-1]
and
[1,2,1
0,0,0
-1,-2,-1]

For the purposes of optimizing the computation, it helps to apply the scaling of 1/2 directly to the filter kernels:
filt1 = filt1/2;
Otherwise, if done afterwards, N^2 additional multiplications have to be done on the NxN image pixels, instead of just 9 multiplications on a 3x3 kernel.
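As a quick check, here's a minimal sketch (with random data standing in for the image) showing that both orders agree to floating-point precision:
im = rand(64);                       % stand-in image
filt1 = [1 0 -1; 2 0 -2; 1 0 -1];    % Sobel kernel from above
a = conv2(im, filt1/2, 'same');      % scale folded into the kernel
b = conv2(im, filt1, 'same') / 2;    % scale applied afterwards
max(abs(a(:) - b(:)))                % ~0, at floating-point precision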
Beyond that, I agree with McMa: your computations don't look anything like differentiation. In fact, you already apply gradient() in the very first line, so I don't understand what more you need.

I'd be inclined to use the imgradient and imgradientxy functions in MATLAB. If you want directional gradients, use imgradientxy; if you want the gradient magnitude and direction components, use imgradient.
You can choose to have the derivatives computed using Sobel, Prewitt, or Roberts gradient kernels, or using central or intermediate differences.
Here's an example:
[Gx,Gy] = imgradientxy(im,'Sobel');
If you want to continue using conv2 instead, you can get the gradient kernels using the fspecial function:
kernelx = fspecial('sobel');
kernely = kernelx';
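For example, continuing with those kernels (a minimal sketch; I'm assuming im is a grayscale image, casting to double to avoid integer saturation and using 'same' to keep the output size):
Gx = conv2(double(im), kernelx, 'same');
Gy = conv2(double(im), kernely, 'same');
Gmag = sqrt(Gx.^2 + Gy.^2);    % gradient magnitude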

Related

Is MATLAB's function gradient necessary?

I stumbled over this while visualizing changes in data. In my opinion, MATLAB's gradient function has a weakness for certain inputs. Let's say I have a chessboard-like matrix and compute the gradient:
matrix = repmat([1 5;5 1], 10, 10)
[Fx,Fy] = gradient(matrix);
In matrix we have a lot of changes in both directions. But because of the behaviour of gradient, Fx and Fy will contain only zeros, except at the borders.
Is this behaviour intended? Isn't it then always better to use diff() and to get equal sizes of input and output matrix with padding? In other words, when is it useful to use gradient() instead of diff()?
The documentation tells us that gradient uses central differences to compute the gradient at interior points, which explains the behaviour with your chessboard pattern.
This function is intended for computing the gradient of a function that was evaluated on a regular grid. For the gradient to make sense, we assume that the function is differentiable. But a central difference (or any finite difference scheme) only makes sense if the function is sampled sufficiently densely, otherwise you will have issues like these.
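You can see this directly with your example: the central difference spans two samples and the pattern has period 2, so the interior differences cancel exactly, while the one-sided diff() does not:
matrix = repmat([1 5; 5 1], 10, 10);
[Fx, ~] = gradient(matrix);   % interior: (f(x+1) - f(x-1))/2 = 0
Dx = diff(matrix, 1, 2);      % one-sided: f(x+1) - f(x) = +/-4
Fx(5, 5)                      % 0
Dx(5, 5)                      % 4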

fftshift before calculating fourier transform: Matlab

I am looking at some FFT code in a matlab project and the FFT and inverse FFT is computed this way:
% Here image is a 2D image.
image_fft = fftshift(image,1);
image_fft = fftshift(image_fft,2);
image_fft = fft(image_fft,[],1);
image_fft = fft(image_fft,[],2);
image_fft = fftshift(image_fft,1);
image_fft = fftshift(image_fft,2);
% Some processing and then same sequence of fftshift, ifft and fftshift to move to
% time domain
I tried to find some information online, but I'm still wondering why the fftshift needs to be done before computing the FFT.
Another question I have is whether this is something really MATLAB-specific. For example, I am planning to port this code to C++ and use KISS FFT. Do I need to be wary of this?
The reason why people like to swap prior to the DFT is that it makes the center pixel of the image the one with zero phase shift. It often makes algorithms that depend on phase easier to think about and implement. It is not MATLAB-specific, and if you want to port an exact version of the code to another language, you'll need to perform the quadrant swap beforehand too.
EDIT:
Let me give an example that I hope will clear things up. Let's say that our image is the sum of a bunch of sinc functions with varying locations throughout the image. In the frequency domain, each of these sinc functions is a rect function with the same amplitude but with a different linear phase component that determines the location of the sinc in the image domain. By swapping the image prior to taking the DFT, we make the linear phase component of the frequency-domain representation of the center pixel zero. Moreover, the linear phase components of the other sinc functions will now be a function of their distance from the center pixel. If we didn't swap the image beforehand, the linear phase components of the rect functions would be a function of their distance from the pixel in the top-left of the image. This would be non-intuitive and would involve the same kind of phase-wrapping considerations that one sees when equating frequencies in the range (pi, 2pi) rad/sample with the more intuitive (-pi, 0) rad/sample.
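Here is a minimal 1-D sketch of the same idea: pre-shifting makes the phase reference the center sample instead of the first one.
x = zeros(1, 8);
x(5) = 1;                 % impulse at the center sample (index N/2+1)
X1 = fft(x);              % linear phase: angle alternates 0, pi, 0, pi, ...
X2 = fft(fftshift(x));    % impulse moved to index 1 -> zero phase everywhere
angle(X1)
angle(X2)                 % all (numerically) zero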
For images, it's better to use fft2. It's MATLAB's convention to arrange 2-D FFTs with the DC component in the corners, presumably because of the row/column conventions. fftshift allows a more intuitive display of the FFT with the DC in the center.
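For display, something like this (a minimal sketch, reusing the question's image variable):
F = fftshift(fft2(double(image)));    % 2-D FFT, DC moved to the center
imagesc(log(1 + abs(F)));             % log scale makes the spectrum visible
axis image; colormap gray;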
I don't fully understand what the piece of code you copied is about, but here is an example of the FFT and inverse FFT of an image using MATLAB, and a more detailed tutorial here.

Weighted Lucas Kanade - Gaussian Function MATLAB

I implemented the basic Lucas-Kanade optical flow algorithm in MATLAB.
I used the algorithm from Wikipedia.
Since I want to improve this basic optical flow algorithm, I tried adding a weighting function which makes certain pixels in the neighbourhood more important than others (see also Wikipedia).
I basically calculated the following for the center pixel and every pixel in its neighbourhood.
for the center pixel and every neighbourhood pixel:
sigma = 10;
weight(s) = (1/(2*pi*sigma^2)) * exp(-((first-x)^2+(second-y)^2)/(2*sigma^2))
x,y is the center pixel; it always stays the same.
first,second is the current neighbourhood pixel.
Since I am using a 5x5 neighbourhood, (first-x) or (second-y) will always be one of 0, 1, -1, 2, -2.
I then apply the weight values in each part of the sum.
Problem: with sigma = 10 I don't get a better result for the optical flow than without the weighting function.
With smaller sigmas it's not better either. In fact, there is no difference between the output vectors with or without the Gaussian function.
Is there a way to improve this Gaussian function to actually make the vectors more accurate than without weighting?
Thank you sooo much.
I'm not sure how you apply the values, but it should usually make at least a small difference. One thing to note: with sigma = 10, a 5x5 window sees an essentially flat Gaussian (even the corner pixels get exp(-8/200) ≈ 0.96 of the center weight), so the weights are nearly uniform and can barely change the result. If even a sigma around 1 changes nothing at all, the weights are probably not entering the sums correctly.
For a better optical flow you could :
pre-smooth the images with a Gaussian
use a spatiotemporal Lucas-Kanade method
or use a more advanced algorithm
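For reference, here is a minimal sketch (my own illustration, with random stand-ins for the derivative windows Ix, Iy, It) of how the Gaussian window enters the weighted least-squares form from the Wikipedia article; note that the 1/(2*pi*sigma^2) factor cancels out of the solution:
sigma = 1;                                    % small enough that the weights vary
[dx, dy] = meshgrid(-2:2, -2:2);              % 5x5 window offsets
w = exp(-(dx.^2 + dy.^2) / (2 * sigma^2));    % Gaussian weights
W = diag(w(:));                               % one weight per window pixel
Ix = randn(5); Iy = randn(5); It = randn(5);  % stand-ins for derivative windows
A = [Ix(:), Iy(:)];
b = -It(:);
v = (A' * W * A) \ (A' * W * b)               % weighted flow estimate [vx; vy]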

Matlab image centroid simulation

I was given this task; I am a noob and need some pointers to get started with centroid calculation in MATLAB:
Instead of an image, I was first asked to simulate a 2-dimensional Gaussian distribution, add random noise, and plot the intensities. Now the position of the centroid changes due to the noise, and I need to bring it back to its original position by:
- clipping at a level that gets rid of the noise (noise reduction by clipping or smoothing),
- a sliding average (a low-pass / averaging filter over 3-5 samples),
- calculating the means, or
- using a convolution filter kernel, which does matrix operations on the 2-D image.
Since you are a noob, even if we wrote down the answer verbatim you probably wouldn't understand how it works. So instead I'll do what you asked and give you pointers, and you'll have to read the related documentation:
a) to produce a 2-d Gaussian use meshgrid or ndgrid
b) to add noise to the image look into rand, randn, or randi, depending on what exactly you need.
c) to plot the intensities use imagesc
d) to find the centroid there are several ways; try searching SO further and you'll find many discussions. You can also check the MathWorks File Exchange for different implementations.
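Putting the pointers together, here is a minimal sketch (all parameter values are arbitrary choices for illustration):
[X, Y] = meshgrid(1:64, 1:64);                     % (a) regular grid
g = exp(-((X - 32).^2 + (Y - 32).^2) / (2 * 4^2)); % 2-D Gaussian at (32, 32)
gn = g + 0.05 * randn(size(g));                    % (b) additive random noise
imagesc(gn); axis image; colorbar;                 % (c) plot the intensities
gc = max(gn - 0.1, 0);                             % clip low-level noise
cx = sum(X(:) .* gc(:)) / sum(gc(:));              % (d) intensity-weighted centroid
cy = sum(Y(:) .* gc(:)) / sum(gc(:));
fprintf('centroid: (%.2f, %.2f)\n', cx, cy);       % should be close to (32, 32)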

How to remove gaussian noise from an image in MATLAB?

I'm trying to remove a Gaussian noise from an image. I've added the noise myself using:
nImg = imnoise(img,'gaussian',0,0.01);
I now need to remove the noise using my own filter, or at least reduce it. In theory, as I understand it, using a convolution matrix of ones(3)/9 should help, and using a Gaussian convolution matrix like [1 2 1; 2 4 2; 1 2 1]/9 or fspecial('gaussian',3) should be better. Yet they really don't do the trick so well.
Am I missing something important? I need to use convolution, by the way.
You are not missing anything!
Obviously, you can't remove the noise completely. You can try different filters, but all of them will involve a tradeoff:
more noise + less blur vs. less noise + more blur
It becomes more obvious if you think of this in the following way:
Any convolution-based method assumes that all of the neighbors have the same color.
But in real life, there are many objects in the image. Thus, when you apply the convolution you cause blur by mixing pixels from different adjacent objects.
There are more sophisticated denoising methods like:
Median denoising
Bilateral filter
Pattern matching based denoising
They don't use only convolution. By the way, even they can't do magic.
You can use wiener2, which works best when the noise is constant-power ("white") additive noise, such as Gaussian noise.
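A minimal sketch using the noisy image from the question (the 5x5 neighbourhood size is an arbitrary choice):
dImg = wiener2(nImg, [5 5]);        % estimates local mean/variance, adapts the smoothing
imshowpair(nImg, dImg, 'montage');  % compare noisy vs. filtered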
You made a mistake with the Gaussian convolution matrix: you need to divide it by 16, not 9, so that its sum equals 1. That's why the resulting image using that matrix is so light.
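That is, the correctly normalized kernel would be:
h = [1 2 1; 2 4 2; 1 2 1] / 16;          % sums to 1, preserves brightness
out = conv2(double(nImg), h, 'same');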