Matlab Bessel function and interpolation

I am trying to finish an assignment and I don't really know how to do what the question asks. I am not looking for a complete answer, just an understanding of what I need to use/do to solve the question. Here is the question:
We are asked to provide an interpolant for the Bessel function of the first
kind of order zero, J0(x).
(a) Create a table of data points listed to 7 decimal places for the interpolation points
x1 = 1.0, x2 = 1.3, x3 = 1.6, x4 = 1.9, x5 = 2.2.
[Hint: See Matlab's help on BesselJ.]
(b) Fit a second-degree polynomial through the points x1, x2, x3. Use this interpolant
to estimate J0(1.5). Compute the error.
What exactly does BesselJ do? And how do I fit a second-degree polynomial through the three points?
Thanks,
Mikeshiny

Here's the zeroth order Bessel function of the first kind:
http://mathworld.wolfram.com/BesselFunctionoftheFirstKind.html
Bessel functions are to differential equations in cylindrical coordinates as sines and cosines are to ODEs in rectangular coordinates.
Both have series representations; both have polynomial approximations.
Here's a general second-order polynomial:
y = a0 + a1*x + a2*x^2
Substitute in three points (x1, y1), (x2, y2), and (x3, y3) and you'll have three equations for three unknown coefficients a0, a1, and a2. Solve for those coefficients.
Take a look at the plot of y = J0(x) in the link I gave you. You want to fit a 2nd order poly through some range. So - pick one. The first point is (0, 1). Pick two more - maybe x = 1 and x = 2. Look up the values for y at those values of x from a J0 table and evaluate your coefficients.
Here are my three points: (0,1), (1, 0.7652), (2.4048, 0).
When I calculate the coefficients, here's the 2nd order polynomial I get:
J0(x) = 1 - 0.105931124*x - 0.128868876*x^2
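If you want to check your hand-derived coefficients in MATLAB: the function is besselj (the hint's BesselJ), and besselj(0, x) evaluates J0(x). Here is a minimal sketch of both parts of the assignment; the variable names are mine:
x = [1.0 1.3 1.6 1.9 2.2];            % interpolation points for part (a)
y = besselj(0, x);                    % J0 at those points
fprintf('%.7f\n', y)                  % table to 7 decimal places
% Part (b): second-degree polynomial through (x1,y1), (x2,y2), (x3,y3).
% The three equations in matrix form, solved for [a0; a1; a2]:
A = [ones(3,1), x(1:3)', x(1:3)'.^2];
a = A \ y(1:3)';
est = a(1) + a(2)*1.5 + a(3)*1.5^2;   % interpolant evaluated at 1.5
err = abs(est - besselj(0, 1.5))      % interpolation error
You could equally get the coefficients from polyfit(x(1:3), y(1:3), 2), which returns them in highest-power-first order.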

Related

Polyfit and polyval to perform interpolation

I have
x = linspace(-5,5,256)
y = 1./(1+x.^2)
plot(x,y,'...') %plot of (x,y)
I want to estimate this with a polynomial of order 10, such that the polynomial intersects the graph at 11 points.
So, I did this:
x2 = linspace(-5,5,11)
y2 = 1./(1+x2.^2)
p = polyfit(x2,y2,10) %finds coefficients of polynomial of degree 10 that fits x2,y2
y3 = polyval(p,x2)
plot(x,y,x2,y3,'...')
I thought polyfit would give me the coefficients of a polynomial of degree up to 10 that passes through the points (x2,y2) (i.e. 11 points),
and that y3 would then just be the y-values of that 10th-order polynomial, so plotting everything together would show the 10th-order polynomial intersecting my original graph at 11 unique points.
What have I done wrong?
My result: a plot where the interpolant appears piecewise linear instead of smooth.
Your computations are correct, but you are not plotting the function the right way. The blue line in your generated plot is piecewise linear. That's because you are only evaluating your polynomial p at the interpolation points x2. The plot command then draws line segments between those points and you are presented with your unexpected plot.
To get the expected result you simply have to evaluate your polynomial more densely like so:
x3 = linspace(-5,5,500);
y3 = polyval(p,x3);
plot(x3,y3);
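If you also want to see the original function and the 11 interpolation nodes in the same figure, overlay them (a sketch reusing x, y, x2, y2 and the dense x3, y3 from above):
plot(x,y,'b', x3,y3,'g', x2,y2,'ro')
legend('1/(1+x^2)', 'degree-10 interpolant', 'interpolation points')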
Consider the points (1,3), (2,6.2) and (3,13.5). Use Matlab's built-in function polyfit to obtain the best parameters for fitting the model P = P0*e^(kt) to this data.
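polyfit only fits polynomials, but taking logarithms turns that model into a straight line, log(P) = log(P0) + k*t. A minimal sketch, assuming the model is P = P0*e^(kt):
t = [1 2 3];
P = [3 6.2 13.5];
c = polyfit(t, log(P), 1);   % fit log(P) = c(1)*t + c(2)
k  = c(1);                   % growth rate
P0 = exp(c(2));              % initial value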

Fit a curve in MATLAB where points have specified normals

I have two 2D points p1, p2 in MATLAB, and each point has a normal n1, n2. I wish to find the (cubic) polynomial which joins the two points and agrees with the specified normals at each end. Is there something built-in to MATLAB to do this?
Of course, I could derive the equations for the polynomial manually, but MATLAB's curve fitting toolbox has so much built-in that I assumed it would be possible. I haven't been able to find any examples of curve, spline or polynomial fitting where the normals are specified.
As an extrapolation of this, I would like to fit splines where each data point has a normal specified.
1. If your points are points of a function, then you need cubic Hermite spline interpolation:
In numerical analysis, a cubic Hermite spline or cubic Hermite interpolator is a spline where each piece is a third-degree polynomial specified in Hermite form: that is, by its values and first derivatives at the end points of the corresponding domain interval.
Cubic Hermite splines are typically used for interpolation of numeric data specified at given argument values x(1), x(2), ..., x(n), to obtain a smooth continuous function. The data should consist of the desired function value and derivative at each x(k). (If only the values are provided, the derivatives must be estimated from them.) The Hermite formula is applied to each interval (x(k), x(k+1)) separately. The resulting spline will be continuous and will have a continuous first derivative.
Cubic polynomial splines can be specified in other ways, the Bézier form being the most common. However, these two methods provide the same set of splines, and data can be easily converted between the Bézier and Hermite forms; so the names are often used as if they were synonymous.
Specifying the normals at each point is the same as specifying the tangents (slopes, 1st derivatives), because the latter are perpendicular to the former.
In Matlab, the function for calculating the Piecewise Cubic Hermite Interpolating Polynomial is pchip. The only problem is that pchip is a bit too clever:
The careful reader will notice that pchip takes function values as input, but no derivative values. This is because pchip uses the function values f(x) to estimate the derivative values. [...] To do a good derivative approximation, the function has to use an approximation using 4 or more points [...] Luckily, using Matlab we can write our own functions to do interpolation using real cubic Hermite splines.
...the author shows how to do this, using the function mkpp.
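For a single interval the construction is short. Here is a minimal sketch of a genuine Hermite cubic built with mkpp (the endpoint values y0, y1 and slopes d0, d1 are illustrative):
x0 = 0; x1 = 2; y0 = 1; y1 = 5; d0 = 0; d1 = -1;   % data: values and slopes
h  = x1 - x0;
dy = (y1 - y0)/h;
c3 = (d0 + d1 - 2*dy)/h^2;           % cubic coefficient
c2 = (3*dy - 2*d0 - d1)/h;           % quadratic coefficient
pp = mkpp([x0 x1], [c3 c2 d0 y0]);   % polynomial in the local variable x - x0
xx = linspace(x0, x1, 200);
plot(xx, ppval(pp, xx))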
2. If your points are not necessarily points of a function, then each interval should be interpolated by a quadratic Bezier curve:
In this construction, 3 points are given: the endpoints P(0) and P(2), and P(1), which is the intersection of the tangents at the endpoints. The position of P(1) can be easily calculated from the coordinates of P(0) and P(2), and the normals at these points.
In Matlab, you can use spmak; see the examples in the Curve Fitting Toolbox documentation.
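For instance, a quadratic Bezier curve is just a B-spline with triple end knots, so spmak can build it directly from the three control points (chosen by hand here; in your setting P1 would be computed from the endpoints and normals):
P0 = [0; 1]; P1 = [1; 4]; P2 = [2; 5];     % control points (illustrative)
sp = spmak([0 0 0 1 1 1], [P0 P1 P2]);     % quadratic Bezier as a B-spline
fnplt(sp); hold on
plot([P0(1) P1(1) P2(1)], [P0(2) P1(2) P2(2)], 'ro--')   % control polygon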
For the function case (1), you could do something like this with csape:
function neumann_spline(p, m, q, n)
% example data (overrides the inputs, for demonstration)
p = [0; 1];
q = [2; 5];
m = [0; 1];
n = [1; 1];
% the tangent is perpendicular to the normal, so its slope is -m(1)/m(2)
if (m(2) ~= 0)
    s1 = -m(1)/m(2);
else
    error('normal m is horizontal, so the tangent is vertical and y = f(x) cannot represent it')
end
if (n(2) ~= 0)
    s2 = -n(1)/n(2);
else
    error('normal n is horizontal, so the tangent is vertical and y = f(x) cannot represent it')
end
hold on
grid on
axis equal
% draw the prescribed normals at the two endpoints
plot([p(1) p(1)+0.5*m(1)], [p(2) p(2)+0.5*m(2)], 'r', 'LineWidth', 1)
plot([q(1) q(1)+0.5*n(1)], [q(2) q(2)+0.5*n(2)], 'r', 'LineWidth', 1)
% complete cubic spline: first derivatives s1, s2 prescribed at the two ends
sp = csape([p(1) q(1)], [s1 p(2) q(2) s2], [1 1]);
fnplt(sp)
plot(p(1), p(2), 'k.', 'MarkerSize', 16)
plot(q(1), q(2), 'k.', 'MarkerSize', 16)
title('Cubic spline with prescribed normals at the endpoints')
end
The result is the plot produced by this code: the spline through the two points, with the prescribed normals drawn in red.

MATLAB - How to calculate 2D least squares regression based on both x and y (regression surface)

I have a set of data with independent variables x and y. Now I'm trying to build a two-dimensional regression model that has a regression surface cutting through my data points. However, I couldn't find a way to achieve this. Can anyone give me some assistance?
You could use my favorite, polyfitn, for linear or polynomial models. If you would like a different model, please edit your question or add a comment. HTH!
EDIT
Also, take a look at the Matlab documentation under Multiple Regression; it can likely help you as well.
EDIT AGAIN
Sorry, I'm having too much fun with this, here's an example of multivariate regression using least squares with stock Matlab:
t = (1:10)';       % sample points
x = t;             % first regressor
y = exp(-t);       % second regressor
A = [ y x ];       % design matrix: one column per regressor
z = 10*y + 0.5*x;  % synthetic response with known coefficients
A\z                % least-squares solution recovers them
ans =
10.0000
0.5000
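The backslash operator returns the least-squares solution when the system is overdetermined, which is exactly the regression. Adding noise to the same sketch makes that visible (the clean example above recovers the coefficients exactly):
zn = 10*y + 0.5*x + 0.01*randn(10,1);   % response with measurement noise
b = A\zn                                % least-squares estimate, close to [10; 0.5]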
If you are performing linear regression, the best tool is the regress function. Note that if you are fitting a model of the form y(x1,x2) = b1*f(x1) + b2*g(x2) + b3, this is still a linear regression, as long as you know the functions f and g.
Nsamp = 100; %number of samples
X1 = randn(Nsamp,1); %regressor 1 (could also be some computed f(x1) )
X2 = randn(Nsamp,1); %regressor 2 (could also be some computed g(x2) )
Y = X1 + X2 + randn(Nsamp,1); %generate some data to be regressed
%now run the regression
[b,bint,r,rint,stats] = regress(Y,[X1 X2 ones(Nsamp,1)]);
% 'b' contains the coefficients b1, b2, b3 of the fit (can be used to plot the regression surface)
% 'r' contains residuals of the fit
% 'stats' contains the overall regression R^2, F stat, p-value and error variance
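To actually draw the regression surface from b, here is a sketch continuing the snippet above:
[x1g, x2g] = meshgrid(linspace(-3, 3, 30));   % grid over the regressor space
Yg = b(1)*x1g + b(2)*x2g + b(3);              % fitted plane b1*x1 + b2*x2 + b3
surf(x1g, x2g, Yg, 'FaceAlpha', 0.5); hold on
plot3(X1, X2, Y, 'k.', 'MarkerSize', 12)      % original data points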

intersection of two curves in matlab

I have to find the two intersection points of the pdfs of two normal distributions.
I have calculated all the points (x, y) for each curve with iy = pdf('normal', ix, mu, sd) and plotted them on the screen, which shows two intersection points.
I have tried the fzero function, but it does not work: the means and standard deviations are different for the two curves, so the lengths of the arrays are different.
I tried the simplest logic, two for loops, but that did not work either.
The brute-force approach did not work for me because of precision: MATLAB does not treat, for example, 24.000 and 24.001 as equal, and the values from the Gaussian have 15 digits after the decimal point, which makes checking for equality impossible.
Only jump to numerical methods if analysis fails. Finding the intersection points of two normal distributions is a fairly simple algebra problem, which I am too lazy now to do properly, but Matlab can do it for me:
>> syms x sig1 sig2 mu1 mu2;
>> solve(1/sig1/sqrt(2*pi) * exp(-1/2*((x-mu1)/sig1)^2) == ...
1/sig2/sqrt(2*pi) * exp(-1/2*((x-mu2)/sig2)^2), x)
ans =
+(mu2*sig1^2 - mu1*sig2^2 + sig1*sig2*(2*sig2^2*log(sig2/sig1) - 2*sig1^2*log(sig2/sig1) - 2*mu1*mu2 + mu1^2 + mu2^2)^(1/2))/(sig1^2 - sig2^2)
-(mu1*sig2^2 - mu2*sig1^2 + sig1*sig2*(2*sig2^2*log(sig2/sig1) - 2*sig1^2*log(sig2/sig1) - 2*mu1*mu2 + mu1^2 + mu2^2)^(1/2))/(sig1^2 - sig2^2)
where sig1, sig2 are the first and second standard deviation, and mu1, mu2 are the first and second mean, respectively.
If you prefer a numerical approach to an analytic one, you can use fzero and the normpdf function.
x_intersect = fzero(@(x) normpdf(x, mu1, std1) - normpdf(x, mu2, std2), x0);
Since the normal distribution is well behaved, and any two such densities must intersect, almost any initial guess x0 should work.
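For example, with illustrative parameters:
mu1 = 0; std1 = 1; mu2 = 3; std2 = 2;   % parameters chosen for illustration
x0 = (mu1 + mu2)/2;                     % guess between the two means
x_intersect = fzero(@(x) normpdf(x, mu1, std1) - normpdf(x, mu2, std2), x0)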
Trying to improve the answer, as this is the accepted one (full credit to Eitan T, who explained this beautifully in a related answer about the intersection of curves):
You'll have to find the point of intersection (px, py) manually:
idx = find(abs(y1 - y2) < eps, 1); %// Index of coordinate in array
px = x(idx);
py = y1(idx);
Remember that we're comparing two numbers in floating point representation, so instead of y1 == y2 we must set a tolerance. I've chosen it as eps, but it's up to you to decide.
To draw a circle around this point, you can compute its points and then plot them, but a better approach would be to plot one point with a blown-up circle marker (credit to Jonas for this suggestion):
plot(px, py, 'ro', 'MarkerSize', 18)
This way the dimensions of the circle are not affected by the axes and the aspect ratio of the plot.

Calculate distance from point p to high dimensional Gaussian (M, V)

I have a high dimensional Gaussian with mean M and covariance matrix V. I would like to calculate the distance from point p to M, taking V into consideration (I guess it's the distance in standard deviations of p from M?).
Phrased differently, I take an ellipse one sigma away from M, and would like to check whether p is inside that ellipse.
If V is a valid covariance matrix of a Gaussian, then it is symmetric positive definite and therefore defines a valid scalar product. By the way, so does inv(V).
Therefore, assuming that M and p are column vectors, you could define distances as:
d1 = sqrt((M-p)'*V*(M-p));
d2 = sqrt((M-p)'*inv(V)*(M-p));
The Matlab way to write d2 (avoiding the explicit inverse, probably with some unnecessary parentheses) is:
d2 = sqrt((M-p)'*(V\(M-p)));
The nice thing is that when V is the unit matrix, then d1 == d2, and both correspond to the classical Euclidean distance. Finding out whether you have to use d1 or d2 is left as an exercise (sorry, part of my job is teaching). Write the multi-dimensional Gaussian formula and compare it to the 1D case, since the 1D case is just a particular case of the multidimensional one (or perform some numerical experiment).
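A minimal setup for such a numerical experiment (values are illustrative; the covariance is axis-aligned so the 1D formula is easy to compare against):
M = [0; 0];
V = [4 0; 0 1];                  % standard deviations 2 and 1 along the axes
p = [2; 0];                      % one standard deviation away along axis 1
d1 = sqrt((M-p)'*V*(M-p));
d2 = sqrt((M-p)'*(V\(M-p)));
[d1, d2]                         % compare with |p(1)-M(1)|/sqrt(V(1,1))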
NB: in very high dimensional spaces or for very many points to test, you might find a clever / faster way from the eigenvectors and eigenvalues of V (i.e. the principal axes of the ellipsoid and their corresponding variance).
Hope this helps.
A.
Consider computing the probability density of the point under the normal distribution:
M = [1 -1]; %# mean vector
V = [.9 .4; .4 .3]; %# covariance matrix
p = [0.5 -1.5]; %# 2d-point
prob = mvnpdf(p,M,V); %# probability P(p|mu,cov)
The function MVNPDF is provided by the Statistics Toolbox.
Maybe I'm totally off, but isn't this the same as just asking, for each dimension: am I inside the sigma?
PSEUDOCODE:
foreach(dimension d)
(M(d) - sigma(d) < p(d) < M(d) + sigma(d)) ?
Because you want to know if p is inside your Gaussian in every dimension. So actually this is just a space problem, and your Gaussian doesn't have anything to do with it (except for M and sigma, which are just distances).
In MATLAB you could try something like:
all((M - sigma < p) & (p < M + sigma))  % note: a chained comparison a < b < c does not mean this in MATLAB
A distance to that place is just the Euclidean distance between the two vectors, which in MATLAB is norm:
norm(M - p)
Because M is just a point in space and p as well. Just 2 vectors.
And now the final one. You want to know the distance in units of sigma:
% create a distance vector and divide it by sigma, element-wise
(M - p) ./ sigma
I think that will do the trick.