Fit a curve in MATLAB where points have specified normals

I have two 2D points p1, p2 in MATLAB, and each point has a normal n1, n2. I wish to find the (cubic) polynomial which joins the two points and agrees with the specified normals at each end. Is there something built-in to MATLAB to do this?
Of course, I could derive the equations for the polynomial manually, but MATLAB's curve fitting toolbox has so much built-in that I assumed it would be possible. I haven't been able to find any examples of curve, spline or polynomial fitting where the normals are specified.
As an extrapolation of this, I would like to fit splines where each data point has a normal specified.

1. If your points are points of a function, then you need cubic Hermite spline interpolation:
In numerical analysis, a cubic Hermite spline or cubic Hermite interpolator is a spline where each piece is a third-degree polynomial specified in Hermite form: that is, by its values and first derivatives at the end points of the corresponding domain interval.
Cubic Hermite splines are typically used for interpolation of numeric data specified at given argument values x(1), x(2), ..., x(n), to obtain a smooth continuous function. The data should consist of the desired function value and derivative at each x(k). (If only the values are provided, the derivatives must be estimated from them.) The Hermite formula is applied to each interval (x(k), x(k+1)) separately. The resulting spline will be continuous and will have a continuous first derivative.
Cubic polynomial splines can be specified in other ways, the Bézier form being the most common. However, these two methods provide the same set of splines, and data can be easily converted between the Bézier and Hermite forms; so the names are often used as if they were synonymous.
Specifying the normals at each point is the same as specifying the tangents (slopes, first derivatives), because the two are perpendicular: a normal (nx, ny) with ny ≠ 0 corresponds to the tangent slope -nx/ny.
In Matlab, the function for calculating the Piecewise Cubic Hermite Interpolating Polynomial is pchip. The only problem is that pchip is a bit too clever:
The careful reader will notice that pchip takes function values as input, but no derivative values. This is because pchip uses the function values f(x) to estimate the derivative values. [...] To do a good derivative approximation, the function has to use an approximation using 4 or more points [...] Luckily, using Matlab we can write our own functions to do interpolation using real cubic Hermite splines.
...the author shows how to do this, using the function mkpp.
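For illustration, here is a minimal single-interval sketch of that idea (the values and end slopes below are assumed example data, not taken from the quoted article):
% one cubic Hermite piece with prescribed end values y and slopes d
x = [0 2];  y = [1 5];  d = [0 -1];   % example data (assumed)
h  = x(2) - x(1);
dy = (y(2) - y(1)) / h;
c  = [(d(1) + d(2) - 2*dy)/h^2, ...   % cubic coefficient
      (3*dy - 2*d(1) - d(2))/h, ...   % quadratic coefficient
      d(1), y(1)];                    % slope and value at the left end
pp = mkpp(x, c);                      % piecewise polynomial in ppform
xx = linspace(x(1), x(2), 100);
plot(xx, ppval(pp, xx))
For n data points, c becomes an (n-1)-by-4 matrix with one such row per interval.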
2. If your points are not necessarily points of a function, then each interval should be interpolated by a quadratic Bezier curve:
A quadratic Bezier curve is determined by three control points: the endpoints P(0) and P(2), and P(1), which is the intersection of the tangents at the endpoints. The position of P(1) can be calculated from the coordinates of P(0) and P(2) and the normals at those points.
In Matlab, you can use spmak to build such a curve in B-form, as in the sketch below.
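A minimal sketch of that construction (the endpoints and normals are assumed example data): the middle control point is found by intersecting the two tangent lines, and the curve is assembled with spmak, since a quadratic Bezier is a B-spline with triple end knots.
% quadratic Bezier through P0, P2 with prescribed normals (example data)
P0 = [0; 1];  P2 = [2; 5];      % endpoints
n0 = [0; 1];  n2 = [1; 0];      % normals at the endpoints
t0 = [-n0(2); n0(1)];           % tangents: normals rotated by 90 degrees
t2 = [-n2(2); n2(1)];
% middle control point P1: intersection of P0 + a*t0 and P2 + b*t2
ab = [t0, -t2] \ (P2 - P0);     % fails if the tangents are parallel
P1 = P0 + ab(1)*t0;             % here (2, 1)
sp = spmak([0 0 0 1 1 1], [P0, P1, P2]);   % triple end knots = Bezier
fnplt(sp), axis equal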

You could do something like this:
function neumann_spline()
% example data: endpoints p, q and normals m, n
p = [0; 1];
q = [2; 5];
m = [0; 1];
n = [1; 1];
% csape's end condition [1 1] expects the end slopes (first derivatives);
% the tangent slope perpendicular to a normal (nx, ny) is -nx/ny
if (m(2) ~= 0)
    s1 = -m(1)/m(2);
else
    error('vertical tangent at p cannot be represented as y = f(x)')
end
if (n(2) ~= 0)
    s2 = -n(1)/n(2);
else
    error('vertical tangent at q cannot be represented as y = f(x)')
end
hold on
grid on
axis equal
% draw the prescribed normals in red
plot([p(1) p(1)+0.5*m(1)], [p(2) p(2)+0.5*m(2)], 'r', 'LineWidth', 1)
plot([q(1) q(1)+0.5*n(1)], [q(2) q(2)+0.5*n(2)], 'r', 'LineWidth', 1)
% clamped ("complete") cubic spline with prescribed end slopes
sp = csape([p(1) q(1)], [s1 p(2) q(2) s2], [1 1]);
fnplt(sp)
plot(p(1), p(2), 'k.', 'MarkerSize', 16)
plot(q(1), q(2), 'k.', 'MarkerSize', 16)
title('Cubic spline with prescribed normals at the endpoints')
end
The result is a plot of the cubic spline joining the two points, with the prescribed normals drawn in red (figure omitted).


Non-symbolic derivative at all sample points including boundary points

Suppose I have a vector t = [0 0.1 0.9 1 1.4], and a vector x = [1 3 5 2 3]. How can I compute the derivative of x with respect to time that has the same length as the original vectors?
I should not use any symbolic operations. The command diff(x)./diff(t) does not produce a vector of the same length. Should I first interpolate the x(t) function and then take its derivative?
Different approaches exist to calculate the derivative at the same points as your initial data:
Finite differences: Use a central difference scheme at your inner points and a forward/backward scheme at your first/last point
or
Curve fitting: Fit a curve through your points, calculate the derivative of this fitted function and sample them at the same points as the original data. Typical fitting functions are polynomials or spline functions.
Note that the curve fitting approach gives better results, but it requires more tuning (e.g. choosing the polynomial order) and is roughly 100x slower.
Demonstration
As an example, I will calculate the derivative of a sine function:
t = 0:0.1:1;
y = sin(t);
Its exact derivative is well known:
dy_dt_exact = cos(t);
The derivative can be calculated approximately with either approach:
Finite differences:
dy_dt_approx = zeros(size(y));
dy_dt_approx(1) = (y(2) - y(1))/(t(2) - t(1)); % forward difference
dy_dt_approx(end) = (y(end) - y(end-1))/(t(end) - t(end-1)); % backward difference
dy_dt_approx(2:end-1) = (y(3:end) - y(1:end-2))./(t(3:end) - t(1:end-2)); % central difference
or
Polynomial fitting:
p = polyfit(t,y,5); % fit fifth order polynomial
dp = polyder(p); % calculate derivative of polynomial
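Incidentally, the finite-difference scheme spelled out above (central differences at the inner points, one-sided differences at the ends) is exactly what MATLAB's built-in gradient computes, so it can serve as a one-line replacement:
dy_dt_gradient = gradient(y, t);   % same scheme as the manual finite differences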
The results can be visualised as follows:
figure('Name', 'Derivative')
hold on
plot(t, dy_dt_exact, 'DisplayName', 'exact');
plot(t, dy_dt_approx, 'DisplayName', 'finite difference');
plot(t, polyval(dp, t), 'DisplayName', 'polynomial');
legend show
figure('Name', 'Error')
hold on
plot(t, abs(dy_dt_approx - dy_dt_exact)/max(dy_dt_exact), 'DisplayName', 'finite difference');
plot(t, abs(polyval(dp, t) - dy_dt_exact)/max(dy_dt_exact), 'DisplayName', 'polynomial');
legend show
The first figure shows the derivatives themselves; the second plots the relative errors made by both methods.
Discussion
One clearly sees that the curve fitting method gives better results than finite differences, but it is ~100x slower. The curve fitting method has a relative error on the order of 10^-5. Note that the finite differences approach becomes better when your data is sampled more densely or you use a higher order scheme. The disadvantage of the curve fitting approach is that one has to choose a good polynomial order; spline functions may be better suited in general.
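A spline fit sidesteps the choice of polynomial order and can be differentiated exactly; a minimal sketch, assuming the Curve Fitting Toolbox functions fnder and fnval are available:
pp  = spline(t, y);            % interpolating cubic spline through (t, y)
dpp = fnder(pp, 1);            % differentiate the piecewise polynomial
dy_dt_spline = fnval(dpp, t);  % sample the derivative at the original points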
A dataset sampled 10x more densely, i.e. t = 0:0.01:1;, reduces the finite-difference error accordingly (graphs omitted).

Polyfit and polyval to perform interpolation

I have
x = linspace(-5,5,256)
y = 1./(1+x.^2)
plot(x,y,'...') %plot of (x,y)
I want to estimate this with a polynomial of order 10, such that the polynomial intersects the graph at 11 points.
So, I did this:
x2 = linspace(-5,5,11)
y2 = 1./(1+x2.^2)
p = polyfit(x2,y2,10) %finds coefficients of polynomial of degree 10 that fits x2,y2
y3 = polyval(p,x2)
plot(x,y,x2,y3,'...')
I thought polyfit would give me the coefficients of a polynomial of degree up to 10 that passes through the points (x2,y2) (i.e., 11 points), and that y3 would be the y-values where the 10th-order polynomial lands, so that plotting everything together would show the 10th-order polynomial intersecting my original graph at 11 unique points.
What have I done wrong?
My result:
Your computations are correct, but you are not plotting the function the right way. The blue line in your generated plot is piecewise linear. That's because you are only evaluating your polynomial p at the interpolation points x2. The plot command then draws line segments between those points and you are presented with your unexpected plot.
To get the expected result you simply have to evaluate your polynomial more densely like so:
x3 = linspace(-5,5,500);
y3 = polyval(p,x3);
plot(x3,y3);

Using nlinfit to fit Gaussian to x,y paired data

I am trying to use Matlab's nlinfit function to estimate the best fitting Gaussian for x,y paired data. In this case, x is a range of 2D orientations and y is the probability of a "yes" response.
I have copied @norm_funct from relevant posts and I'd like to return a smoothed, normal distribution that best approximates the observed data in y, and returns the magnitude, mean and SD of the best fitting pdf. At the moment, the fitted function appears to be incorrectly scaled and less than smooth - any help much appreciated!
x = -30:5:30;
y = [0,0.20,0.05,0.15,0.65,0.85,0.88,0.80,0.55,0.20,0.05,0,0;];
% plot raw data
figure(1)
plot(x, y, ':rs');
axis([-35 35 0 1]);
% initial parameter guesses (based on plot)
initGuess(1) = max(y); % amplitude
initGuess(2) = 0; % mean centred on 0 degrees
initGuess(3) = 10; % SD in degrees
% equation for Gaussian distribution
norm_func = @(p,x) p(1) .* exp(-((x - p(2))/p(3)).^2);
% use nlinfit to fit Gaussian using Least Squares
[bestfit,resid]=nlinfit(y, x, norm_func, initGuess);
% plot function
xFine = linspace(-30,30,100);
figure(2)
plot(x, y, 'ro', x, norm_func(xFine, y), '-b');
Many thanks
If your data actually represent probability estimates which you expect come from normally distributed data, then fitting a curve is not the right way to estimate the parameters of that normal distribution. There are different methods of different sophistication; one of the simplest is the method of moments, which means you choose the parameters such that the moments of the theoretical distribution match those of your sample. In the case of the normal distribution, these moments are simply mean and variance (or standard deviation). Here's the code:
% normalize y to be a probability (sum = 1)
p = y / sum(y);
% compute weighted mean and standard deviation
m = sum(x .* p);
s = sqrt(sum((x - m) .^ 2 .* p));
% compute theoretical probabilities
xs = -30:0.5:30;
pth = normpdf(xs, m, s);
% plot data and theoretical distribution
plot(x, p, 'o', xs, pth * 5)
The result shows a decent fit (plot omitted).
You'll notice the factor 5 in the last line. This is due to the fact that you don't have probability (density) estimates for the full range of values, but from points at distances of 5. In my treatment I assumed that they correspond to something like an integral over the probability density, e.g. over an interval [x - 2.5, x + 2.5], which can be roughly approximated by multiplying the density in the middle by the width of the interval. I don't know if this interpretation is correct for your data.
Your data follow a Gaussian curve and you describe them as probabilities. Are these numbers (y) your raw data – or did you generate them from e.g. a histogram over a larger data set? If the latter, the estimate of the distribution parameters could be improved by using the original full data.
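If such raw data were available, the estimate would be a one-liner with the Statistics Toolbox (a sketch; the vector raw of individual orientation samples is hypothetical):
[muHat, sigmaHat] = normfit(raw);   % ML estimates of mean and std of `raw`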

Matlab Bessel function and interpolation

I am trying to finish an assignment and I don't really know how to do what the question asks. I am not looking for a complete answer, but just an understanding on what I need to use/do to solve the question. Here is the question:
We are asked to provide an interpolant for the Bessel function of the first
kind of order zero, J0(x).
(a) Create a table of data points listed to 7 decimal places for the interpolation points
x1 = 1.0, x2 = 1.3, x3 = 1.6, x4 = 1.9, x5 = 2.2.
[Hint: See Matlab's help on BesselJ.]
(b) Fit a second-degree polynomial through the points x1, x2, x3. Use this interpolant
to estimate J0(1.5). Compute the error.
What exactly does BesselJ do? And how do I fit a second degree polynomial through the three points?
Thanks,
Mikeshiny
Here's the zeroth order Bessel function of the first kind:
http://mathworld.wolfram.com/BesselFunctionoftheFirstKind.html
Bessel functions are to differential equations in cylindrical coordinates as sines and cosines are to ODEs in rectangular coordinates.
Both have series representations; both have polynomial approximations.
Here's a general second-order polynomial:
y = a0 + a1*x + a2*x^2
Substitute in three points (x1, y1), (x2, y2), and (x3, y3) and you'll have three equations for three unknown coefficients a0, a1, and a2. Solve for those coefficients.
Take a look at the plot of y = J0(x) in the link I gave you. You want to fit a 2nd order poly through some range. So - pick one. The first point is (0, 1). Pick two more - maybe x = 1 and x = 2. Look up the values for y at those values of x from a J0 table and evaluate your coefficients.
Here are my three points: (0,1), (1, 0.7652), (2.4048, 0).
When I calculate the coefficients, here's the 2nd order polynomial I get:
J0(x) = 1 - 0.105931124*x - 0.128868876*x^2
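These coefficients can be reproduced in MATLAB by solving the 3-by-3 linear system directly; a minimal sketch (besselj(0, x) evaluates J0, which also covers part (a) of the assignment):
x = [0; 1; 2.4048];
y = [1; besselj(0, 1); 0];   % J0 values at the three chosen points
A = [ones(3,1), x, x.^2];    % one row of [1, x, x^2] per point
a = A \ y                    % approximately [1; -0.1059; -0.1289]
% polyfit(x, y, 2) yields the same polynomial, highest power first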

Line of best fit scatter plot

I'm trying to do a scatter plot with a line of best fit in MATLAB. I can get a scatter plot using either scatter(x1,x2) or scatterplot(x1,x2), but the Basic Fitting option is greyed out, and lsline returns the error 'No allowed line types found. Nothing done'.
Any help would be great,
Thanks,
Jon.
lsline is only available in the Statistics Toolbox; do you have that toolbox? A more general solution might be to use polyfit.
You need to use polyfit to fit a line to your data. Suppose you have some data in y and corresponding domain values in x (i.e. you have data approximating y = f(x) for arbitrary f); then you can fit a linear curve as follows:
p = polyfit(x,y,1); % p returns 2 coefficients fitting r = a_1 * x + a_2
r = p(1) .* x + p(2); % compute a new vector r that has matching datapoints in x
% now plot both the points in y and the curve fit in r
plot(x, y, 'x');
hold on;
plot(x, r, '-');
hold off;
Note that if you want to fit an arbitrary polynomial to your data, you can do so by changing the last parameter of polyfit to the degree of the polynomial. Calling this degree d, you'll receive back d+1 coefficients in p, which represent a polynomial conforming to an estimate of f(x):
f(x) = p(1) * x^d + p(2) * x^(d-1) + ... + p(d)*x + p(d+1)
Edit: as noted in a comment, you can also use polyval to compute r; its syntax would look like this:
r = polyval(p, x);
Infs, NaNs, and imaginary parts of complex numbers are ignored in the data.
From the Curve Fitting Toolbox documentation: the Curve Fitting Tool provides a flexible graphical user interface where you can interactively fit curves and surfaces to data and view plots. You can:
Create, plot, and compare multiple fits
Use linear or nonlinear regression, interpolation, local smoothing regression, or custom equations
View goodness-of-fit statistics, display confidence intervals and residuals, remove outliers, and assess fits with validation data
Automatically generate code for fitting and plotting surfaces, or export fits to the workspace for further analysis
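If that toolbox is installed, the tool can be launched with the data preloaded; a minimal sketch:
cftool(x, y)   % opens the Curve Fitting Tool with x and y as a data set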