Non-symbolic derivative at all sample points including boundary points - matlab

Suppose I have a vector t = [0 0.1 0.9 1 1.4], and a vector x = [1 3 5 2 3]. How can I compute the derivative of x with respect to time that has the same length as the original vectors?
I should not use any symbolic operations. The command diff(x)./diff(t) does not produce a vector of the same length. Should I first interpolate the x(t) function and then take its derivative?

Different approaches exist to calculate the derivative at the same points as your initial data:
Finite differences: Use a central difference scheme at your inner points and a forward/backward scheme at your first/last point
or
Curve fitting: Fit a curve through your points, calculate the derivative of this fitted function and sample it at the same points as the original data. Typical fitting functions are polynomials or spline functions.
Note that the curve fitting approach gives better results, but requires more tuning and is slower (~100x).
Demonstration
As an example, I will calculate the derivative of a sine function:
t = 0:0.1:1;
y = sin(t);
Its exact derivative is well known:
dy_dt_exact = cos(t);
The derivative can be calculated approximately as follows:
Finite differences:
dy_dt_approx = zeros(size(y));
dy_dt_approx(1) = (y(2) - y(1))/(t(2) - t(1)); % forward difference
dy_dt_approx(end) = (y(end) - y(end-1))/(t(end) - t(end-1)); % backward difference
dy_dt_approx(2:end-1) = (y(3:end) - y(1:end-2))./(t(3:end) - t(1:end-2)); % central difference
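As an aside, base MATLAB's gradient() implements essentially this scheme, central differences at interior points and one-sided differences at the two endpoints, and it accepts a coordinate vector for non-uniformly spaced data:
dy_dt_grad = gradient(y, t); % same scheme as above, in one call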
or
Polynomial fitting:
p = polyfit(t,y,5); % fit fifth order polynomial
dp = polyder(p); % calculate derivative of polynomial
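The derivative values at the original sample points then follow from polyval():
dy_dt_poly = polyval(dp, t); % sample the derivative polynomial at the original points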
The results can be visualised as follows:
figure('Name', 'Derivative')
hold on
plot(t, dy_dt_exact, 'DisplayName', 'exact');
plot(t, dy_dt_approx, 'DisplayName', 'finite difference');
plot(t, polyval(dp, t), 'DisplayName', 'polynomial');
legend show
figure('Name', 'Error')
hold on
plot(t, abs(dy_dt_approx - dy_dt_exact)/max(dy_dt_exact), 'DisplayName', 'finite difference');
plot(t, abs(polyval(dp, t) - dy_dt_exact)/max(dy_dt_exact), 'DisplayName', 'polynomial');
legend show
The first graph shows the derivatives themselves and the second plots the relative errors made by both methods.
Discussion
One clearly sees that the curve fitting method gives better results than the finite differences, but it is ~100x slower. The curve fitting method has a relative error on the order of 10^-5. Note that the finite differences approach improves when your data is sampled more densely or when you use a higher order scheme. The disadvantage of the curve fitting approach is that one has to choose a good polynomial order. Spline functions may be better suited in general.
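As a minimal sketch of the spline variant (base MATLAB only; spline, unmkpp and mkpp are all built in, so no toolbox is needed), one can fit an interpolating cubic spline and differentiate its piecewise-polynomial form directly:
pp = spline(t, y);                       % interpolating cubic spline in pp-form
[breaks, coefs, npieces, order] = unmkpp(pp);
% differentiate each piece; coefficients are stored in descending
% powers of the local variable on each interval
dcoefs = coefs(:, 1:order-1) .* repmat(order-1:-1:1, npieces, 1);
dpp = mkpp(breaks, dcoefs);
dy_dt_spline = ppval(dpp, t);            % derivative at the original samples
The Curve Fitting Toolbox function fnder does this differentiation in one call; the manual version above avoids the toolbox dependency.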
A dataset sampled ten times more densely, i.e. t = 0:0.01:1;, reduces the finite-difference error accordingly (graphs omitted here).

Related

Why does the convolution obtained from MATLAB differ from what is obtained theoretically?

The theoretical result for the convolution of an exponential function with a sinusoidal function is 0.5*(exp(-t) + sin(t) - cos(t)).
When I plot this function directly in MATLAB and compare it with the output of MATLAB's conv command, the two plots look similar but they are not the same; note the scales. The MATLAB result is ten times the theoretical result. Why?
The MATLAB code is below.
clc;
clear all;
close all;
t = 0:0.1:50;
x1 = exp(-t);
x2 = sin(t);
x = conv(x1,x2);
x_theory = 0.5.*(exp(-t) + sin(t) - cos(t));
figure(1)
subplot(311), plot(t, x1(1:length(t)))
subplot(312), plot(t, x2(1:length(t)))
subplot(313), plot(t, x(1:length(t)))
figure(2)
subplot(311), plot(t, x1(1:length(t)))
subplot(312), plot(t, x2(1:length(t)))
subplot(313), plot(t, x_theory)
conv performs discrete-time convolution; it does not evaluate the continuous-time convolution integral. Numerically, this amounts to multiplying and summing the two signals once for each output point, shifting one of the signals by one sample each time.
If you think about this, you will realize that the sampling of the signals has an effect: with a sample every 0.1 units versus every 0.001 units, the number of points you multiply and sum is different, and thus the result differs in value (but not in shape).
Therefore, every time you use numerical convolution to approximate a continuous integral, you need to multiply by the sampling interval (the spacing between samples) to normalize the sum.
Just change your code to
dt = 0.1;            % sampling interval
t = 0:dt:50;
x = conv(x1,x2)*dt;  % scale by the sampling interval
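As a quick check (reusing the variables from the question's script), the scaled discrete convolution tracks the analytic result up to the error of the underlying rectangle-rule approximation:
dt = 0.1;
t  = 0:dt:50;
x1 = exp(-t);
x2 = sin(t);
x  = conv(x1, x2)*dt;                        % scaled discrete convolution
x_theory = 0.5*(exp(-t) + sin(t) - cos(t));  % analytic convolution
max(abs(x(1:numel(t)) - x_theory))           % small residual (quadrature error)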

How to plot a cosine wave in MATLAB given fitted parameters?

In my project, I used a generic cosine function to fit my data:
cos_fun = @(p, theta) p(1) + p(2) * cos(theta - p(3))
p = nlinfit(x,y,cos_fun,[1 1 0])
As a result, p has three values, which are y-offset, amplitude and phase.
Can I draw a smooth cosine curve using these three parameters?
TL;DR: It is possible to both fit and plot the curve, with and without requiring toolboxes. All cases presented below.
Plotting
Plotting the curve follows directly from evaluating the same function handle you used to obtain your parameters and passing the result to plot(). Note that you control the smoothness of the plotted curve through the step size of the domain (see step below).
In the figure, the results obtained from nlinfit() (toolbox required) are the same as the "SSE" results obtained without a toolbox using fminsearch().
% Plot (No toolbox Required)
step = 0.01; % smaller is smoother
Xrng = 0:step:12;
figure, hold on, box on
plot(Xdata,Ydata,'b.','DisplayName','Data')
plot(Xrng,cos_fun(p_SSE,Xrng),'r--','DisplayName','SSE')
plot(Xrng,cos_fun(p_SAE,Xrng),'k--','DisplayName','SAE')
legend('show')
As pointed out in the comment by @Daniel, you can also make the plot with nlintool(), but this requires the Statistics Toolbox.
nlintool(Xdata,Ydata,cos_fun,[1 1 0]) % toolbox required
Fitting
Using nlinfit(): (Statistics Toolbox Required)
pNL = nlinfit(Xdata,Ydata,cos_fun,[1 1 0]) % same as SSE approach below
A Toolbox Free Approach:
You can construct a convex error function to minimize and return the global optimum using fminsearch() as a quick and dirty approach. For example, the sum of squared errors or the sum of absolute errors will be convex.
% MATLAB R2019a
% Generate Example Data
sigma = 0.5; % increase this for more variable data (more noise)
Xdata = [repmat(1:10,1,4)].';
Ydata = cos(Xdata)+sigma*randn(length(Xdata),1);
% Function Evaluation
cos_fun = @(p,x) p(1) + p(2).*cos(x - p(3));
% Error Functions
SSEh = @(p) sum((cos_fun(p,Xdata) - Ydata).^2); % sum of squared errors
SAEh = @(p) sum(abs(cos_fun(p,Xdata) - Ydata)); % sum of absolute errors
Of course, these will give you different error values for the same parameters.
% Test
SSEh([1 1 0])
SAEh([1 1 0])
You then call fminsearch() with an initial guess for the parameters, p0, to obtain the parameters that minimize your chosen error function. Since SSEh and SAEh are both convex with respect to p, there is no need to run this multiple times and keep the best result: for every p0, you'll get the same answer.
p0 = [1 1 0.25]; % Initial starting point
[p_SSE, SSE] = fminsearch(SSEh,p0)
[p_SAE, SAE] = fminsearch(SAEh,p0)
You fit slightly different curves depending on the error function.
Notice that SSEh(pNL) and SSEh(p_SSE) are the same since pNL equals p_SSE since nlinfit() estimates the coefficients "using iterative least squares estimation."
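A quick sanity check, assuming the variables defined above, is to evaluate both estimates under the same objective; the values should agree to within solver tolerance:
SSEh(pNL)   % toolbox fit
SSEh(p_SSE) % toolbox-free fit, (nearly) the same value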

Fit a curve in MATLAB where points have specified normals

I have two 2D points p1, p2 in MATLAB, and each point has a normal n1, n2. I wish to find the (cubic) polynomial which joins the two points and agrees with the specified normals at each end. Is there something built-in to MATLAB to do this?
Of course, I could derive the equations for the polynomial manually, but MATLAB's curve fitting toolbox has so much built-in that I assumed it would be possible. I haven't been able to find any examples of curve, spline or polynomial fitting where the normals are specified.
As an extension of this, I would like to fit splines where each data point has a specified normal.
1. If your points are points of a function, then you need cubic Hermite spline interpolation:
In numerical analysis, a cubic Hermite spline or cubic Hermite interpolator is a spline where each piece is a third-degree polynomial specified in Hermite form: that is, by its values and first derivatives at the end points of the corresponding domain interval.
Cubic Hermite splines are typically used for interpolation of numeric data specified at given argument values x(1), x(2), ..., x(n), to obtain a smooth continuous function. The data should consist of the desired function value and derivative at each x(k). (If only the values are provided, the derivatives must be estimated from them.) The Hermite formula is applied to each interval (x(k), x(k+1)) separately. The resulting spline will be continuous and will have continuous first derivative.
Cubic polynomial splines can be specified in other ways, the Bézier form being the most common. However, these two methods provide the same set of splines, and data can be easily converted between the Bézier and Hermite forms; so the names are often used as if they were synonymous.
Specifying the normals at each point is the same as specifying the tangents (slopes, 1st derivatives), because the latter are perpendicular to the former.
In Matlab, the function for calculating the Piecewise Cubic Hermite Interpolating Polynomial is pchip. The only problem is that pchip is a bit too clever:
The careful reader will notice that pchip takes function values as input, but no derivative values. This is because pchip uses the function values f(x) to estimate the derivative values. [...] To do a good derivative approximation, the function has to use an approximation using 4 or more points [...] Luckily, using Matlab we can write our own functions to do interpolation using real cubic Hermite splines.
...the author shows how to do this, using the function mkpp.
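As a minimal sketch of that idea (the helper name hermite_pp and its arguments are illustrative, not taken from the linked post), build the per-interval cubic coefficients from the endpoint values and slopes, then hand them to mkpp:
function pp = hermite_pp(x, y, d)
% Cubic Hermite interpolant from points x, values y and slopes d.
x = x(:).'; y = y(:).'; d = d(:).';
h  = diff(x);                            % interval widths
dy = diff(y);
n  = numel(x) - 1;
% on [x(k), x(k+1)] each piece is c1*s^3 + c2*s^2 + c3*s + c4 with s = t - x(k)
c4 = y(1:n);
c3 = d(1:n);
c2 = (3*dy./h - 2*d(1:n) - d(2:n+1)) ./ h;
c1 = (d(1:n) + d(2:n+1) - 2*dy./h) ./ h.^2;
pp = mkpp(x, [c1.' c2.' c3.' c4.']);
end
For example, pp = hermite_pp(0:5, sin(0:5), cos(0:5)) closely reproduces the sine, and ppval(pp, tq) evaluates it at query points tq.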
2. If your points are not necessarily points of a function, then each interval should be interpolated by a quadratic Bezier curve:
For a quadratic Bézier curve, 3 points are involved: the endpoints P(0) and P(2), and P(1), the intersection of the tangents at the endpoints. The position of P(1) can be easily calculated from the coordinates of P(0) and P(2) and the normals at these points.
In Matlab, you can use spmak, see the examples here and here.
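As a minimal sketch of that construction (the variable names are illustrative): intersect the two endpoint tangents to obtain P(1), then evaluate the quadratic Bézier curve:
P0 = [0; 1];  n0 = [0; 1];   % endpoints and their normals
P2 = [2; 5];  n2 = [1; 1];
t0 = [n0(2); -n0(1)];        % tangents are perpendicular to the normals
t2 = [n2(2); -n2(1)];
ab = [t0, -t2] \ (P2 - P0);  % solve P0 + a*t0 = P2 + b*t2 (normals must not be parallel)
P1 = P0 + ab(1)*t0;          % intersection of the two tangent lines
s = linspace(0, 1, 100);
B = (1-s).^2 .* P0 + 2*s.*(1-s) .* P1 + s.^2 .* P2; % implicit expansion, R2016b+
plot(B(1,:), B(2,:), P0(1), P0(2), 'k.', P2(1), P2(2), 'k.')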
You could do something like this:
function neumann_spline(p, m, q, n)
% example data
p = [0; 1];
q = [2; 5];
m = [0; 1];
n = [1; 1];
if (m(2) ~= 0)
s1 = atan(-m(1)/m(2));
else
s1 = pi/2;
end
if (n(2) ~= 0)
s2 = atan(-n(1)/n(2));
else
s2 = pi/2;
end
hold on
grid on
axis equal
plot([p(1) p(1)+0.5*m(1)], [p(2) p(2)+0.5*m(2)], 'r', 'Linewidth', 1)
plot([q(1) q(1)+0.5*n(1)], [q(2) q(2)+0.5*n(2)], 'r', 'Linewidth', 1)
sp = csape([p(1) q(1)], [s1 p(2) q(2) s2], [1 1]);
fnplt(sp)
plot(p(1), p(2), 'k.', 'MarkerSize', 16)
plot(q(1), q(2), 'k.', 'MarkerSize', 16)
title('Cubic spline with prescribed normals at the endpoints')
end
The result is a plot of the spline together with the two prescribed normals.

Using nlinfit to fit Gaussian to x,y paired data

I am trying to use Matlab's nlinfit function to estimate the best fitting Gaussian for x,y paired data. In this case, x is a range of 2D orientations and y is the probability of a "yes" response.
I have copied norm_func from relevant posts and I'd like to return a smoothed, normal distribution that best approximates the observed data in y, returning the magnitude, mean and SD of the best-fitting pdf. At the moment, the fitted function appears to be incorrectly scaled and less than smooth; any help much appreciated!
x = -30:5:30;
y = [0,0.20,0.05,0.15,0.65,0.85,0.88,0.80,0.55,0.20,0.05,0,0;];
% plot raw data
figure(1)
plot(x, y, ':rs');
axis([-35 35 0 1]);
% initial parameter guesses (based on plot)
initGuess(1) = max(y); % amplitude
initGuess(2) = 0; % mean centred on 0 degrees
initGuess(3) = 10; % SD in degrees
% equation for Gaussian distribution
norm_func = @(p,x) p(1) .* exp(-((x - p(2))/p(3)).^2);
% use nlinfit to fit Gaussian using Least Squares
[bestfit,resid]=nlinfit(y, x, norm_func, initGuess);
% plot function
xFine = linspace(-30,30,100);
figure(2)
plot(x, y, 'ro', x, norm_func(xFine, y), '-b');
Many thanks
If your data actually represent probability estimates which you expect come from normally distributed data, then fitting a curve is not the right way to estimate the parameters of that normal distribution. There are different methods of different sophistication; one of the simplest is the method of moments, which means you choose the parameters such that the moments of the theoretical distribution match those of your sample. In the case of the normal distribution, these moments are simply mean and variance (or standard deviation). Here's the code:
% normalize y to be a probability (sum = 1)
p = y / sum(y);
% compute weighted mean and standard deviation
m = sum(x .* p);
s = sqrt(sum((x - m) .^ 2 .* p));
% compute theoretical probabilities
xs = -30:0.5:30;
pth = normpdf(xs, m, s);
% plot data and theoretical distribution
plot(x, p, 'o', xs, pth * 5)
The result shows a decent fit:
You'll notice the factor 5 in the last line. This is due to the fact that you don't have probability (density) estimates for the full range of values, but from points at distances of 5. In my treatment I assumed that they correspond to something like an integral over the probability density, e.g. over an interval [x - 2.5, x + 2.5], which can be roughly approximated by multiplying the density in the middle by the width of the interval. I don't know if this interpretation is correct for your data.
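If you prefer not to hard-code the 5, the same scaling can be computed from the grid spacing (a small variation on the code above):
w = mean(diff(x));           % spacing of the x grid, here 5 degrees
plot(x, p, 'o', xs, pth * w) % density times bin width approximates per-bin probability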
Your data follow a Gaussian curve and you describe them as probabilities. Are these numbers (y) your raw data – or did you generate them from e.g. a histogram over a larger data set? If the latter, the estimate of the distribution parameters could be improved by using the original full data.

Line of best fit scatter plot

I'm trying to do a scatter plot with a line of best fit in MATLAB. I can get a scatter plot using either scatter(x1,x2) or scatterplot(x1,x2), but the basic fitting option is greyed out, and lsline returns the error 'No allowed line types found. Nothing done'.
Any help would be great,
Thanks,
Jon.
lsline is only available in the Statistics Toolbox; do you have that toolbox? A more general solution might be to use polyfit.
You need to use polyfit to fit a line to your data. Suppose you have some data in y and corresponding domain values in x (i.e. you have data approximating y = f(x) for some arbitrary f); then you can fit a linear curve as follows:
p = polyfit(x,y,1); % p returns 2 coefficients fitting r = a_1 * x + a_2
r = p(1) .* x + p(2); % compute a new vector r that has matching datapoints in x
% now plot both the points in y and the curve fit in r
plot(x, y, 'x');
hold on;
plot(x, r, '-');
hold off;
Note that if you want to fit an arbitrary polynomial to your data you can do so by changing the last parameter of polyfit to the degree of the polynomial fit. Suppose we call this degree d; you'll receive back d+1 coefficients in p, which represent a polynomial approximating f(x):
f(x) = p(1) * x^d + p(2) * x^(d-1) + ... + p(d)*x + p(d+1)
Edit: as noted in a comment, you can also use polyval to compute r; its syntax looks like this:
r = polyval(p, x);
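For instance, a hypothetical degree-3 fit evaluated on a fine grid for a smooth plot:
d  = 3;                             % degree of the polynomial
p  = polyfit(x, y, d);              % returns d+1 coefficients
xf = linspace(min(x), max(x), 200); % fine grid for a smooth curve
plot(x, y, 'x', xf, polyval(p, xf), '-')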
Infs, NaNs, and imaginary parts of complex numbers are ignored in the data.
The Curve Fitting Tool provides a flexible graphical user interface where you can interactively fit curves and surfaces to data and view plots. You can:
Create, plot, and compare multiple fits
Use linear or nonlinear regression, interpolation, local smoothing regression, or custom equations
View goodness-of-fit statistics, display confidence intervals and residuals, remove outliers and assess fits with validation data
Automatically generate code for fitting and plotting surfaces, or export fits to the workspace for further analysis
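The tool is opened with cftool; passing your data preloads it (requires the Curve Fitting Toolbox):
cftool(x, y) % opens the interactive Curve Fitting Tool with x, y preloaded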