Why does the convolution obtained from MATLAB differ from the theoretical result?

The theoretical result for the convolution of an exponential with a sinusoid, x(t) = 0.5*(exp(-t) + sin(t) - cos(t)), is shown below.
When I plot this function directly in MATLAB, I get this:
However, MATLAB's conv command yields this:
The two plots look similar, but they are not the same; note the scales. The MATLAB result is ten times the theoretical one. Why?
The MATLAB code is below.
clc;
clear all;
close all;
t = 0:0.1:50;
x1 = exp(-t);
x2 = sin(t);
x = conv(x1,x2);
x_theory = 0.5.*(exp(-t) + sin(t) - cos(t));
figure(1)
subplot(311), plot(t, x1(1:length(t)))
subplot(312), plot(t, x2(1:length(t)))
subplot(313), plot(t, x(1:length(t)))
figure(2)
subplot(311), plot(t, x1(1:length(t)))
subplot(312), plot(t, x2(1:length(t)))
subplot(313), plot(t, x_theory)

conv performs discrete-time convolution; it does not evaluate the continuous convolution integral. Numerically, this boils down to multiplying and summing the two signals, once for each output point, with one of the signals shifted by one sample each time.
If you think about this, you will realize that the sampling of the signals has an effect: with a point every 0.1 units versus every 0.001 units, the number of points you multiply and sum differs, and so does the resulting value (though not the shape). The discrete sum only approximates the integral up to the step size: integral x1(tau)*x2(t-tau) dtau ~ dt * sum_k x1[k]*x2[n-k].
Therefore, every time you approximate a continuous convolution numerically, you need to multiply by the sampling interval (the spacing between samples) to "normalize" the operation.
Just change your code to
dt = 0.1; % sampling interval
t = 0:dt:50;
x = conv(x1,x2)*dt;
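As a quick sanity check (a sketch reusing the question's signals), the scaled conv output should now track the closed-form result:
dt = 0.1; % sample spacing
t = 0:dt:50;
x1 = exp(-t);
x2 = sin(t);
x = conv(x1, x2)*dt; % discrete convolution scaled by the sample spacing
x_theory = 0.5*(exp(-t) + sin(t) - cos(t));
max(abs(x(1:length(t)) - x_theory)) % should be small, roughly on the order of dt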

Related

Why does MATLAB plot cosine in an abnormal manner?

I wrote the code below in MATLAB to plot the cosine function from the derivative of the sine function, but the output plot was not what I expected!
clear;
clc;
close all;
delta = 1e-15;
t = linspace(0, 20, 1000);
y_derived = (sin(t + delta) - sin(t)) / delta;
y_expected = cos(t);
hold on
plot(y_derived)
plot(y_expected)
legend('y_{derived}', 'y_{expected}')
grid on
The output plot looks like this:
Can anyone help me understand what's happening?
MATLAB plots exactly what you tell it to plot. The issue is in the way you compute the derivative: your finite-difference quotient uses delta = 1e-15, which is very close to the machine precision eps = 2.2e-16, so you get lots of rounding error. (The staircase appearance nicely shows off the discrete nature of the floating-point type you're using.) Set e.g. delta = 1e-6 and it will probably look a lot better.
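A minimal sketch of that fix:
delta = 1e-6; % comfortably above eps, still small enough for an accurate quotient
t = linspace(0, 20, 1000);
y_derived = (sin(t + delta) - sin(t)) / delta;
plot(t, y_derived, t, cos(t)) % the two curves should now overlap
legend('y_{derived}', 'y_{expected}')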

Analytical Fourier transform vs FFT of functions in Matlab

I have adapted the code in Comparing FFT of Function to Analytical FT Solution in Matlab for this question. I am trying to compute FFTs and compare the results with analytical expressions from the Wikipedia tables.
My code is:
a = 1.223;
fs = 1e5; %sampling frequency
dt = 1/fs;
t = 0:dt:30-dt; %time vector
L = length(t); % no. sample points
t = t - 0.5*max(t); %center around t=0
y = ...; % original function in time (defined per example below)
Y = dt*fftshift(abs(fft(y))); %numerical soln
freq = (-L/2:L/2-1)*fs/L; %freq vector
w = 2*pi*freq; % angular freq
F = ...; % analytical solution (defined per example below)
figure; subplot(1,2,1); hold on
plot(w,real(Y),'.')
plot(w,real(F),'-')
xlabel('Frequency, w')
title('real')
legend('numerical','analytic')
xlim([-5,5])
subplot(1,2,2); hold on;
plot(w,imag(Y),'.')
plot(w,imag(F),'-')
xlabel('Frequency, w')
title('imag')
legend('numerical','analytic')
xlim([-5,5])
If I study the Gaussian function and let
y = exp(-a*t.^2); % original function in time
F = exp(-w.^2/(4*a))*sqrt(pi/a); %analytical solution
in the above code, there appears to be good agreement when the real and imaginary parts of the function are plotted:
But if I study a decaying exponential multiplied with a Heaviside function:
H = @(x) 1*(x>0); % Heaviside function
y = exp(-a*t).*H(t);
F = 1./(a+1j*w); %analytical solution
then the numerical and analytical results visibly disagree:
Why is there a discrepancy? I suspect it's related to the line Y = dt*fftshift(abs(fft(y))); but I'm not sure why or how.
Edit: I changed ifftshift to fftshift in Y = dt*fftshift(abs(fft(y)));. Then I also removed the abs. The second graph now looks like this:
What is the mathematical reason behind the 'mirrored' graph and how can I remove it?
The plots at the bottom of the question are not mirrored. If you plot them using lines instead of dots, you'll see that the numerical result contains very high frequencies. The absolute value matches, but the phase doesn't. When this happens, it's almost certainly a case of a shift in the time domain.
And indeed, you define the time domain function with the origin in the middle. The FFT expects the origin to be at the first (leftmost) sample. This is what ifftshift is for:
Y = dt*fftshift(fft(ifftshift(y)));
ifftshift moves the origin to the first sample, in preparation for the fft call, and fftshift moves the origin of the result to the middle, for display.
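A quick illustration of the two shifts on an odd-length vector (a sketch):
v = [1 2 3 4 5];        % pretend the origin of the signal is the center sample, v(3)
ifftshift(v)            % [3 4 5 1 2] -- origin moved to the first sample, ready for fft
fftshift(ifftshift(v))  % [1 2 3 4 5] -- fftshift moves the origin back to the center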
Edit
Your t does not have a 0:
>> t(L/2+(-1:2))
ans =
-1.5000e-05 -5.0000e-06 5.0000e-06 1.5000e-05
The sample at t(floor(L/2)+1) needs to be 0. That is the sample that ifftshift moves to the leftmost position. (I use floor there in case L is odd, which is not the case here.)
To generate a correct t do as follows:
fs = 1e5; % sampling frequency
L = 30 * fs;
t = -floor(L/2):floor((L-1)/2);
t = t / fs;
I first generate an integer t axis of the right length, with 0 at the correct location (t(floor(L/2)+1)==0). Then I convert that to seconds by dividing by the sampling frequency.
With this t, the Y as I suggest above, and the rest of your code as-is, I see this for the Gaussian example:
>> max(abs(F-Y))
ans = 4.5254e-16
For the other function I see larger differences, on the order of 6e-6. This is due to the inability to properly sample the Heaviside function: you need t=0 in your sampled function, but the ideal Heaviside has no well-defined value at 0. I noticed that the real component has an offset of similar magnitude, which is caused by the sample at t=0.
Typically, the sampled Heaviside function is set to 0.5 for t=0. If I do that, the offset is removed completely, and max difference for the real component is reduced by 3 orders of magnitude (largest errors happen for values very close to 0, where I see a zig-zag pattern). For the imaginary component, the max error is reduced to 3e-6, still quite large, and is maximal at high frequencies. I attribute these errors to the difference between the ideal and sampled Heaviside functions.
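One way to implement that sampled Heaviside (a sketch):
H = @(x) (x > 0) + 0.5*(x == 0); % sampled Heaviside with H(0) = 0.5
y = exp(-a*t).*H(t);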
You should probably limit yourself to band-limited functions (or nearly band-limited ones such as the Gaussian). You might want to try replacing the Heaviside function with an error function (the integral of a Gaussian) with a small sigma (sigma = 0.8/fs, i.e. 0.8 sample intervals, is the smallest I would consider for proper sampling). Its Fourier transform is known.
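If you want to try that replacement, here is a sketch (the sigma value is just the minimum suggested above):
sigma = 0.8/fs;                             % 0.8 sample intervals, in seconds
Hs = @(x) 0.5*(1 + erf(x/(sqrt(2)*sigma))); % smooth step: the integral of a Gaussian
y = exp(-a*t).*Hs(t);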

Non-symbolic derivative at all sample points including boundary points

Suppose I have a vector t = [0 0.1 0.9 1 1.4] and a vector x = [1 3 5 2 3]. How can I compute the derivative of x with respect to time such that the result has the same length as the original vectors?
I should not use any symbolic operations. The command diff(x)./diff(t) does not produce a vector of the same length. Should I first interpolate x(t) and then take the derivative?
Different approaches exist to calculate the derivative at the same points as your initial data:
Finite differences: Use a central difference scheme at your inner points and a forward/backward scheme at your first/last point
or
Curve fitting: Fit a curve through your points, calculate the derivative of this fitted function, and sample it at the same points as the original data. Typical fitting functions are polynomials or spline functions.
Note that the curve-fitting approach gives better results but requires more tuning and is slower (~100x).
Demonstration
As an example, I will calculate the derivative of a sine function:
t = 0:0.1:1;
y = sin(t);
Its exact derivative is well known:
dy_dt_exact = cos(t);
The derivative can be calculated approximately as follows:
Finite differences:
dy_dt_approx = zeros(size(y));
dy_dt_approx(1) = (y(2) - y(1))/(t(2) - t(1)); % forward difference
dy_dt_approx(end) = (y(end) - y(end-1))/(t(end) - t(end-1)); % backward difference
dy_dt_approx(2:end-1) = (y(3:end) - y(1:end-2))./(t(3:end) - t(1:end-2)); % central difference
or
Polynomial fitting:
p = polyfit(t,y,5); % fit fifth order polynomial
dp = polyder(p); % calculate derivative of polynomial
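As an aside, MATLAB's built-in gradient implements essentially the finite-difference scheme above, including the one-sided differences at the boundaries (a sketch, assuming a MATLAB version whose gradient accepts a coordinate vector as its second argument):
dy_dt_gradient = gradient(y, t); % central differences inside, one-sided at the ends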
The results can be visualised as follows:
figure('Name', 'Derivative')
hold on
plot(t, dy_dt_exact, 'DisplayName', 'exact');
plot(t, dy_dt_approx, 'DisplayName', 'finite difference');
plot(t, polyval(dp, t), 'DisplayName', 'polynomial');
legend show
figure('Name', 'Error')
hold on
plot(t, abs(dy_dt_approx - dy_dt_exact)/max(dy_dt_exact), 'DisplayName', 'finite difference');
plot(t, abs(polyval(dp, t) - dy_dt_exact)/max(dy_dt_exact), 'DisplayName', 'polynomial');
legend show
The first graph shows the derivatives themselves; the second plots the relative errors made by both methods.
Discussion
One clearly sees that the curve-fitting method gives better results than finite differences, but it is ~100x slower. The curve-fitting method has a relative error on the order of 10^-5. Note that the finite-difference approach improves when your data is sampled more densely or when you use a higher-order scheme. The disadvantage of the curve-fitting approach is that one has to choose a good polynomial order; spline functions may be better suited in general.
A 10x more densely sampled dataset, i.e. t = 0:0.01:1;, results in the following graphs:
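As for the spline suggestion above: a sketch of the same idea with a cubic spline, differentiating the piecewise polynomial directly (the coefficient manipulation assumes MATLAB's pp-form layout and implicit expansion, R2016b or newer):
pp = spline(t, y);                       % cubic spline through the data, pp-form
dpp = pp;                                % copy, then differentiate each cubic piece:
dpp.coefs = pp.coefs(:,1:3) .* [3 2 1];  % d/dx (a*x^3 + b*x^2 + c*x + d) -> [3a 2b c]
dpp.order = 3;
dy_dt_spline = ppval(dpp, t);            % derivative sampled at the original points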

Finding values of x for a given y when approaching a limit

I'm trying to find the two x values for each y value on a plot that is very similar to a Gaussian function. The difficulty is that I need to find the x values for several values of y even where the Gaussian is very close to zero.
I can't post an image because I'm a new user, but think of a Gaussian function and the regions on either side of the peak where it is close to zero. It is in these regions, where the function is very close to zero, that I need to find the x values for a given y.
What I've tried:
When the function is discrete: I have tried interp1; however, I get the error that the input is not strictly monotonically increasing, because of the many values that are close to zero.
When I fit a two-term gaussian:
I use fzero (fzero(function - yvalue)); however, I get a lot of NaNs. Might these come from my initial guess not being close enough?
Does anyone have any other suggestions for me to try? Or how to improve what I've already attempted?
Thanks everyone
EDIT:
I've added a picture below. The data that I actually have is the blue line, while the fitted eqn is in red. The eqn should be accurate enough.
Again, I'm trying to pick out x values for a given y where y is very small (approaching 0).
I've tried splitting the function into left and right halves for the interpolation and fzero method.
Thanks for your responses anyway, I'll have a look at bisection.
Fitting a Gaussian seems to be ineffective, as its deviation (in the x-coordinate) from the real data is noticeable.
Since your data is already given as a numeric vector y, a straightforward search based on find(y > y0) seems adequate. Here is a sample code, in which the y-values are produced from a perturbed Gaussian.
x = 0:1:700;
y = 2000*exp(-((x-200)/50).^2 - sin(x/100).^2); % imitated data
plot(x,y)
y0 = 1e-2; % the y-value to look for
i = find(y > y0, 1, 'first'); % first entry above y0
if i == 1
    x1 = x(i);
else
    x1 = x(i) - (y(i)-y0)*(x(i)-x(i-1))/(y(i)-y(i-1)); % linear interpolation to y = y0
end
i = find(y > y0, 1, 'last'); % last entry above y0
if i == numel(y)
    x2 = x(i);
else
    x2 = x(i) - (y(i)-y0)*(x(i)-x(i+1))/(y(i)-y(i+1)); % linear interpolation to y = y0
end
fprintf('Roots: %g, %g \n', x1, x2)
Output: Roots: 25.769, 372.519 (approximately; both crossings now lie between their bracketing samples).
The curve looks much like your plot.

Chebyshev IIR Filter: Got Coefficients, what next?

Here is my MATLAB/Octave program:
clc;
close all;
%BPF of pass 400-600Hz
fs1=300;
fp1=400;
fp2=600;
fs2=700;
samp=1500;
ap=1; %passband ripple
as=60; %stopband attenuation
%Normalizing the frequency
wp=[fp1 fp2]/(samp);
ws=[fs1 fs2]/(samp);
[N,wn]=cheb1ord_test(wp,ws,ap,as); %Generates order and cutoff parameters
[b,a]=cheby1(N,ap,wn); %Generates poles and zeros for the given order and cutoff
printf("b coeffs = %f\n",b);
printf("a coeffs = %f\n",a);
[H,W]=freqz(b,a,256);
plot(W/(2*pi),20*log10(abs(H))) %Transfer function works correctly, so coefficients are correct
%100 samples of 500hz
n = 1:100;
x=10*cos(2*pi*n*500*(1/samp));
printf("Order %d\n",N); %Depends on required ripple attenuation
figure;
subplot (2,1,1); plot(x);
y=filter(b,a,x); % I suspect this is the part that does not work
subplot (2,1,2); plot(y);
When I look at the magnitude/frequency response, the graph is perfect and indicates 400 and 600 Hz to be my filter cutoffs.
However, when I apply an input signal of 500 Hz, I expect to see the signal pass through the filter unharmed (as observed when I used the Butterworth function), but the output is distorted and contains almost no signal.
So I infer that my mistake is in using the filter function to combine the Chebyshev coefficients with the input signal.
If this is the problem, then how do I apply Chebyshev coefficients to an input digital signal?
For cheb1ord and cheby1, the frequencies are normalized between 0 and 1, with 1 corresponding to half the sampling frequency. You should compute your wp and ws as
wp=[fp1 fp2]/(samp/2);
ws=[fs1 fs2]/(samp/2);
where samp is your sampling frequency.
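With that change, the rest of the script should behave as expected. A minimal sketch (assuming the stock cheb1ord in place of the poster's cheb1ord_test):
wp = [fp1 fp2]/(samp/2); % normalized passband edges
ws = [fs1 fs2]/(samp/2); % normalized stopband edges
[N, wn] = cheb1ord(wp, ws, ap, as);
[b, a] = cheby1(N, ap, wn);
y = filter(b, a, x); % a 500 Hz tone should now pass essentially unattenuated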
I think the problem is with your x signal: I don't quite know what it is, but I can tell you what it is not and that's a 500Hz input signal.
I would define the time vector first and then apply the cos function (I assume you are sampling at 1500Hz):
f_input = 500; % Hz
t = 0:1/samp:1; % time vector [0,1] s, sampled at 1500 Hz
x = 10*cos(2*pi*f_input*t); % 500 Hz input signal