Is there a derivative of Gaussian filter function in Matlab? Would it be proper to convolve the Gaussian filter with [1 0 -1] to obtain the result?
As far as I know there is no built-in derivative-of-Gaussian filter. You can easily create one yourself as follows:
For 2D
G1 = fspecial('gaussian', [round(k*sigma), round(k*sigma)], sigma); % 2D Gaussian kernel
[Gx,  Gy ] = gradient(G1);   % first-order derivatives
[Gxx, Gxy] = gradient(Gx);   % second-order derivatives
[Gyx, Gyy] = gradient(Gy);
where k determines the size of the kernel (i.e., how much support you want).
The 1D case works the same way, except there is only one gradient direction instead of two. You would also build the 1D Gaussian differently; I assume you already have a preferred method for that.
The code above goes up to second order, but you can see the pattern and continue to higher orders.
The kernel you posted ([1 0 -1]) is a finite-difference approximation. Your version is conceptually right, but the more common choice is [1 -1] or [-1 1]; with the 0 in the middle you skip the central sample when approximating the derivative. It may work as well (but bear in mind that it is an approximation which diverges from the true result at higher orders), although I usually prefer the method I posted above.
Note: if you are indeed interested in 2D filters, the derivative-of-Gaussian family has the steerability property, meaning that you can easily create a derivative-of-Gaussian filter in any direction from the ones given above. Supposing the direction you want is defined as
cos(theta), sin(theta)
Then the derivative of Gaussian in that direction is
Gtheta = cos(theta)*Gx + sin(theta)*Gy
If you reapply this recursively you can go to any order you like.
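A minimal sketch of that steering step, reusing Gx and Gy from the code above (the angle is just an example value):
theta = pi/4;                            % any direction you like
Gtheta = cos(theta)*Gx + sin(theta)*Gy;  % first derivative of Gaussian along theta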
Or just use the derivative of a Gaussian directly, which is no more difficult to compute than the Gaussian itself:
G(x,y) = exp(-(x^2+y^2)/(2*s^2))
d/dx G(x, y) = -x/s^2 G(x,y)
d/dy G(x, y) = -y/s^2 G(x,y)
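If you only need the first derivatives, here is a minimal sketch built directly from the closed-form expressions above (s = 1 and the half-width of 3*s are arbitrary choices):
s = 1; r = ceil(3*s);               % sigma and kernel half-width (assumed values)
[x, y] = meshgrid(-r:r);
G  = exp(-(x.^2 + y.^2)/(2*s^2));   % the Gaussian itself
Gx = -x/s^2 .* G;                   % d/dx G
Gy = -y/s^2 .* G;                   % d/dy G
For higher-order derivatives of exp(-x.^2) in 1D, the recurrence below does the job: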
function [ y ] = dgaus( x, n )
%DGAUS nth derivative of exp(-x.^2)
% Uses the recurrence f_(k+1) = -2*x.*f_k - 2*k*f_(k-1), advancing two
% orders per loop pass ("odd" holds f_(2m-1), "even" holds f_(2m)).
odd  = 0*x;
even = exp(-x.^2);
for order = 0:(floor(n/2)-1)
    odd  = -4*order*odd - 2*x.*even;      % f_(2*order+1)
    even = -(4*order+2)*even - 2*x.*odd;  % f_(2*order+2)
end
if mod(n,2)==0
    y = even;
else
    y = -2*(n-1)*odd - 2*x.*even;         % one final odd-order step
end
end
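A quick usage sketch (my example values; remember the function differentiates exp(-x.^2), without a sigma):
x  = linspace(-4, 4, 201);
y3 = dgaus(x, 3);    % third derivative of exp(-x.^2)
plot(x, y3);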
Trying to understand how MATLAB does resampling, looking at toolbox/signal/resample.m.
This is the part where low-pass FIR filter coefficients are calculated:
fc = 1/2/pqmax;
L = 2*N*pqmax + 1;
h = firls( L-1, [0 2*fc 2*fc 1], [1 1 0 0]).*kaiser(L,bta)' ;
h = p*h/sum(h);
(In this case pqmax and p both represent the downsampling ratio.)
I'm not sure what the purpose of the last line is, where all filter coefficients are scaled by the downsampling ratio divided by the coefficient sum, p/sum(h). Does anyone know the theory behind it?
MATLAB firls
MATLAB kaiser
firls theory: Least Squared Error Design of FIR Filters
In most filters, you want the output magnitude to stay the same after filtering, neither scaled up nor down.
Mathematically this is normally taken care of in the filter design equations, but with a finite set of discrete coefficients it generally doesn't hold exactly.
If you divide the filter by its sum, you make sure the filter has an overall scale factor of exactly 1, i.e. it is normalized. For an input of ones(1000,1), if sum(h) is not 1, the result after filtering will be scaled, i.e. the filtered values will not be 1.
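A minimal sketch of that effect, using an arbitrary windowed low-pass FIR design (these firls/kaiser parameters are just for illustration, not the ones from resample.m):
h  = firls(30, [0 0.2 0.3 1], [1 1 0 0]) .* kaiser(31, 5)';  % example low-pass FIR
x  = ones(1000, 1);                                          % constant (DC) input
y1 = filter(h, 1, x);          % steady-state value equals sum(h), generally not 1
y2 = filter(h/sum(h), 1, x);   % normalized: steady-state value is exactly 1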
I'm trying to plot a filtered audio signal, but I'm doing something wrong because the second plot shows nothing.
[wave,fs]=wavread('my-audio.wav');
t=0:1/fs:(length(wave)-1)/fs;
figure(1);plot(t,wave);
b = [1.1 1];
a = [-0.1 0 1];
FIR = filter(b,a,wave);
figure(2);plot(t,FIR);
The function I'm using is: H(z)=(z + 1.1)/(z^2 - 0.1)
What am I missing?
Thanks!
It looks like you've inverted the order of the coefficients in the a and b vectors. Getting the order wrong is particularly dramatic for the feedback coefficients a, which define the poles of the transfer function (and hence determine the stability of the filter). The resulting filtered output FIR therefore most likely overflows the floating-point range (Inf/NaN values), which plot cannot display sensibly.
According to filter's documentation, the a and b coefficients are defined using a transfer function of the form
H(z) = (b(1) + b(2)*z^-1 + ... + b(nb+1)*z^-nb) / (a(1) + a(2)*z^-1 + ... + a(na+1)*z^-na)
Since your transfer function is
H(z) = (z + 1.1)/(z^2 - 0.1) = (z^-1 + 1.1*z^-2)/(1 - 0.1*z^-2)
you should be using the coefficients
b = [0 1 1.1]
a = [1 0 -0.1]
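As a quick sanity check (just a sketch), you can confirm that the corrected coefficients describe a stable filter by looking at the pole magnitudes:
b = [0 1 1.1];
a = [1 0 -0.1];
max(abs(roots(a)))   % about 0.316, inside the unit circle, so the filter is stable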
I am trying to construct the product of the FFT of a 2D box function and the FFT of a 2D Gaussian function. After, I find the inverse FFT and I am expecting a convolution of the two functions as a result. However, I am getting a weird one-sided result as seen below. The result appears on the bottom right of the subplot.
The Octave code I wrote to reproduce the above subplot as well as the calculations I performed to construct the convolution is shown below. Can anyone tell me what I'm doing wrong?
clear all;
clc;
close all;
% domain on each side is 0-9
L = 10;
% num subdivisions
N = 32;
delta=L/N;
sigma = 0.5;
% get the domain ready
[x,y] = meshgrid((0:N-1)*delta);
% since domain ranges from 0-(N-1) on both sides
% we need to take the average
xAvg = sum(x(1, :))/length(x(1,:));
yAvg = sum(y(:, 1))/length(x(:,1));
% gaussian
gssn = exp(- ((x - xAvg) .^ 2 + (y - yAvg) .^ 2) ./ (2*sigma^2));
function ret = boxImpulse(a,b)
n = 32;
L=10;
delta = L/n;
nL = ((n-1)/2-3)*delta;
nU = ((n-1)/2+3)*delta;
if ((a >= nL) && (a <= nU) && ( b >= nL) && (b <= nU) )
ret=1;
else
ret=0;
end
ret;
endfunction
boxResponse = arrayfun(@boxImpulse, x, y);
subplot(2,2,1);mesh(x,y,gssn); title("gaussian fun");
subplot(2,2,2);mesh(x,y, abs(fft2(gssn)) .^2); title("fft of gaussian");
subplot(2,2,3);mesh(x,y,boxResponse); title("box fun");
inv_of_product_of_ffts = abs(ifft2(fft2(boxResponse) * fft2(gssn))) .^2 ;
subplot(2,2,4);mesh(x,y,inv_of_product_of_ffts); title("inv of product of fft");
Let's first address your most obvious errors. This is the way you are computing the convolution in the frequency domain:
inv_of_product_of_ffts = abs(ifft2(fft2(boxResponse) * fft2(gssn))) .^2 ;
The first problem is that you are using *, which is matrix multiplication. Element-wise multiplication in the frequency domain corresponds to convolution in the spatial domain, so you need to use .* instead. The second problem is that you don't need the .^2 term or the abs operation: you're performing convolution, then taking the absolute value and squaring each term, which isn't required. Remove the abs and .^2 operations.
Therefore, what you need is:
inv_of_product_of_ffts = ifft2(fft2(boxResponse).*fft2(gssn));
However, what you're going to get is this result. Let's place this in a new figure instead of a subplot*:
figure;
mesh(x,y,ifft2(fft2(boxResponse).*fft2(gssn)));
title('Convolution... not right though');
You can see that it's the right result... but it's not centred. Why is that?
This is actually one of the most common problems when computing convolution in the frequency domain. In fact, even the most experienced encounter this problem because they don't understand the internals of how the FFT works.
This is a consequence of how MATLAB and Octave compute the 2D FFT. Specifically, MATLAB and Octave define a grid of coordinates that goes from 0,1,...,M-1 for the horizontal and 0,1,...,N-1 for the vertical, giving an M x N result. This means that the origin / DC component is at the top-left corner of the matrix, not at the centre as we usually define things. The traditional 2D FFT convention instead defines the coordinates from -(M-1)/2,...,(M-1)/2 for the horizontal and -(N-1)/2,...,(N-1)/2 for the vertical.
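A tiny illustration of that convention (sketch): for a constant image all the energy lands in the DC bin, which fft2 places at the top-left element, and fftshift moves it to the centre:
F = fft2(ones(4));   % 4x4 constant image
F(1,1)               % DC bin at the top-left element, equal to 16
fftshift(F)          % after fftshift the DC bin sits at the centre of the matrix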
As noted above, you defined your signal assuming that the origin is at the centre, not at the top-left corner. To compensate for this, you need to add an fftshift (MATLAB doc, Octave doc) so that the output is centred at the origin rather than the top-left corner, bringing things back to the way they were. Therefore you really need:
figure;
mesh(x,y,fftshift(ifft2(fft2(boxResponse).*fft2(gssn))));
title('Convolution... now it is right');
If you want to double-check that we have the right result, you can perform the convolution directly in the spatial domain and compare it with the frequency-domain result.
This is how you'd compute the result directly in spatial domain:
figure;
mesh(x,y,conv2(gssn,boxResponse,'same'));
title('Convolution... spatial domain');
conv2 (MATLAB doc, Octave doc) performs 2D convolution between two signals, and the 'same' flag returns the central part of the convolution, the same size as the first input (here the Gaussian and the box are the same size anyway). You'll see that it's the same curve; I won't show it here for brevity.
However, we can compare both results and see whether they're the same element-wise. One way to do this is to check whether the absolute difference between corresponding elements is below some threshold, say 1e-10:
>> out1 = conv2(boxResponse, gssn, 'same');
>> out2 = fftshift(ifft2(fft2(boxResponse).*fft2(gssn)));
>> all(abs(out1(:)-out2(:)) < 1e-10)
ans =
1
This means that indeed both of these are the same.
Hope that makes sense! Remember, since you're defining your signal where the origin is at the centre, once you find the inverse FFT, you must shift things back so that the origin is now at the centre, not the top-left corner.
*: Minor note - All plots produced in this answer were generated with MATLAB R2015a. However, the code fully works in Octave - tested with Octave 4.0.0.
I have a question about the sgolay function in Matlab R2013a. My database has 165 spectra with 2884 variables and I would like to take the first and second derivatives of them. How might I define the inputs K and F to sgolay?
Below is an example:
sgolay is used to smooth a noisy sinusoid and compare the resulting first and second derivatives to the first and second derivatives computed using diff. Notice how using diff amplifies the noise and generates useless results.
K = 4; % Order of polynomial fit
F = 21; % Window length
[b,g] = sgolay(K,F); % Calculate S-G coefficients
dx = .2;
xLim = 200;
x = 0:dx:xLim-1;
y = 5*sin(0.4*pi*x)+randn(size(x)); % Sinusoid with noise
HalfWin = ((F+1)/2) -1;
for n = (F+1)/2:996-(F+1)/2,   % 996 = length(x) for this dx and xLim
% Zero-th derivative (smoothing only)
SG0(n) = dot(g(:,1), y(n - HalfWin: n + HalfWin));
% 1st differential
SG1(n) = dot(g(:,2), y(n - HalfWin: n + HalfWin));
% 2nd differential (the factor of 2 below is 2!, converting the fitted polynomial coefficient into a derivative)
SG2(n) = 2*dot(g(:,3)', y(n - HalfWin: n + HalfWin))';
end
SG1 = SG1/dx; % Turn differential into derivative
SG2 = SG2/(dx*dx); % and into 2nd derivative
% Scale the "diff" results
DiffD1 = (diff(y(1:length(SG0)+1)))/ dx;
DiffD2 = (diff(diff(y(1:length(SG0)+2)))) / (dx*dx);
subplot(3,1,1);
plot([y(1:length(SG0))', SG0'])
legend('Noisy Sinusoid','S-G Smoothed sinusoid')
subplot(3, 1, 2);
plot([DiffD1',SG1'])
legend('Diff-generated 1st-derivative', 'S-G Smoothed 1st-derivative')
subplot(3, 1, 3);
plot([DiffD2',SG2'])
legend('Diff-generated 2nd-derivative', 'S-G Smoothed 2nd-derivative')
Taking derivatives is an inherently noisy process. Thus, if you already have some noise in your data, it will indeed be magnified as you take higher-order derivatives. Savitzky-Golay is a very useful way of combining smoothing and differentiation into one operation. It's a general method and it computes derivatives to an arbitrary order. There are trade-offs, though. Other, specialized methods exist for data with a certain structure.
In terms of your application, I don't have any concrete answers; much depends on the nature of the data (sampling rate, noise level, etc.). If you use too much smoothing, you'll smear your data or produce aliasing. The same happens if you over-fit the data by using a high polynomial order K. In your demo code you should also plot the analytical derivatives of the sin function (a sketch follows this paragraph). Then play with different amounts of input noise and smoothing; such a test with known exact answers may be helpful if you can approximate aspects of your real data. In practice, I try to use as little smoothing as possible in order to produce derivatives that aren't too noisy. Often this means a third-order polynomial (K = 3) and a window size, F, as small as possible.
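For instance, a sketch of those analytical derivatives, reusing x and SG1 from the demo above:
d1_true = 5*0.4*pi*cos(0.4*pi*x);        % exact 1st derivative of 5*sin(0.4*pi*x)
d2_true = -5*(0.4*pi)^2*sin(0.4*pi*x);   % exact 2nd derivative
figure; plot(x(1:length(SG1)), [SG1' d1_true(1:length(SG1))']);
legend('S-G 1st derivative', 'Analytical 1st derivative');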
So yes, many suggest that you tune these parameters by eye. However, there has also been research on choosing the parameters automatically: On the Selection of Optimum Savitzky-Golay Filters (2013). There are also alternatives to Savitzky-Golay, e.g., this paper based on regularization, but you may need to implement them yourself in Matlab.
By the way, a while back I wrote a little replacement for sgolay. Like you, I only needed the second output, the differentiation filters, G, so that's all it calculates. This function is also faster (by about 2–4 times):
function G = sgolayfilt(k,f)
%SGOLAYFILT Savitzky-Golay differentiation filters
s = vander(0.5*(1-f):0.5*(f-1));  % Vandermonde matrix on a centred, unit-spaced grid
S = s(:,f:-1:f-k);                % polynomial basis: columns x.^0, x.^1, ..., x.^k
[~,R] = qr(S,0);                  % economy-size QR factorization
G = S/R/R';                       % differentiation filters, equivalent to S/(S'*S)
A full version of this function with input validation is available on my GitHub.
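A usage sketch, assuming the function above is saved on the MATLAB path; the column layout matches sgolay's second output, so the demo loop earlier applies unchanged:
K = 3; F = 21;
G = sgolayfilt(K, F);  % F-by-(K+1): G(:,1) smooths, G(:,2) gives the 1st derivative
                       % (divide by dx), G(:,3) the 2nd (times 2, divide by dx^2)
% e.g., a centred 1st-derivative estimate at sample n, with sample spacing dx:
% d1 = dot(G(:,2), y(n-(F-1)/2 : n+(F-1)/2)) / dx;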
I'm trying to get the wavelet decomposition of arcsin(x) using, say, Haar wavelets.
When using either of Matlab's dwt or wavedec functions, I get strange values for the approximation coefficients. Since applying the Haar low-pass filter amounts to taking the half-sum of adjacent samples, and the maximum of arcsin is pi/2, I assumed the approximation coefficients couldn't exceed pi/2, yet this code:
x = linspace(0,1,128);
y = asin(x);
[cA, cD] = dwt(y, 'haar'); %//cA for approximating coefficients
returns values greater than pi/2 in cA. Why is that?
I believe what confuses you here is thinking that the Haar filter simply averages two adjacent samples when computing the level-1 approximation coefficients. Because of the energy-preservation property of the scaling function, each pair of samples is summed and divided by sqrt(2), not by 2. In fact, you can see what a particular wavelet filter does by typing the following command (for the Haar filter in this case):
[F1,F2] = wfilters('haar','d')
F1 =
0.7071 0.7071
F2 =
-0.7071 0.7071
You can then check the validity of what you have gotten above by constructing a simple loop:
CA_compare = zeros(1,64);
for k = 1 : 64
CA_compare(k) = dot( y(2*k-1 : 2*k), F1 );
end
You will then see that "CA_compare" contains exactly the same values as your "cA" does.
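As a quick sanity check (sketch), with the sqrt(2) factor the approximation coefficients are bounded by sqrt(2)*max(y) = sqrt(2)*pi/2, not by pi/2:
max(cA)         % exceeds pi/2 ~= 1.5708, but stays below sqrt(2)*pi/2
sqrt(2)*pi/2    % ~= 2.2214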
Hope this helps.