Lagrange interpolation - MATLAB

I checked the answers about Lagrange interpolation, but I couldn't find one suitable for my question. I'm trying to use Lagrange interpolation for a surface in MATLAB. Say I have an x vector and a y vector, and f = f(x,y). I want to interpolate this function f. I think what I did is mathematically correct:
function v = laginterp(x,y,f,xx,yy)
% x, y: interpolation nodes; f: length(x)-by-length(y) matrix of samples
% xx, yy: equal-sized arrays of points at which to evaluate the interpolant
n = length(x);
m = length(y);
v = zeros(size(xx));
for k = 1:n
    for l = 1:m
        w1 = ones(size(xx));
        w2 = ones(size(yy));
        for j = [1:k-1 k+1:n]
            w1 = (xx - x(j))./(x(k) - x(j)).*w1;
        end
        for i = [1:l-1 l+1:m]
            w2 = (yy - y(i))./(y(l) - y(i)).*w2;
        end
        v = v + w1.*w2.*f(k,l);
    end
end
That is my function; I then want to evaluate the interpolant for any given x, y, f, like
x = 0:4;
y = [-6 -3 -1 6];
f = [2 9 4 25 50];   % NB: f must be a length(x)-by-length(y) matrix, i.e. 5-by-4 here
[xx,yy] = meshgrid(linspace(0,4), linspace(-6,6));
v = laginterp(x,y,f,xx,yy);
surf(xx,yy,v)
I'm always grateful for any help!

Lagrange interpolation is essentially NEVER a good choice for interpolation. Yes, it is used in the first chapter of many texts that discuss interpolation. Does that make it good? No. That just makes it convenient, a good way to INTRODUCE ideas of interpolation, and sometimes to prove some simple results.
A serious problem is that a user decides to try this miserable excuse for an interpolation method and finds that, lo and behold, it does work for 2 or 3 points. Wow, look at that! So the obvious continuation is to use it on their real data sets with 137 points, or 10000 data points or more, some of which are usually replicates. What happened? Why does my code not give good results? Or maybe they will just blindly assume that it did work, and then publish a paper containing meaningless results.
Yes, there is a Lagrange tool on the File Exchange. Yes, it even probably got some good reviews, written by first year students who had no real idea what they were looking at, and who sadly have no concept of numerical analysis. Don't use it.
If you need an interpolation tool in MATLAB, you could start with griddata or TriScatteredInterp. These will yield quite reasonable results. Other options are radial basis function interpolations, for which there is also a tool on the FEX, and a wide variety of splines, my personal favorite. Note that ANY interpolation, used blindly without understanding or appreciation of the pitfalls, can and will produce meaningless results. But this is true of almost any numerical method.
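For example, a minimal griddata sketch for scattered surface data might look like this (the sample data here are invented purely for illustration):

% Hypothetical scattered samples of some f(x,y); invented for illustration only
x = rand(100,1)*4;
y = rand(100,1)*12 - 6;
f = sin(x).*cos(y);
% Interpolate onto a regular grid with griddata (default 'linear' method)
[xq,yq] = meshgrid(linspace(0,4,50), linspace(-6,6,50));
vq = griddata(x, y, f, xq, yq);
surf(xq, yq, vq)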

This doesn't address your question directly, but there's a Lagrange interpolation function on the Matlab File Exchange which seems pretty popular.

Related

Finding intersection between two functions using Matlab or Scilab

I'm doing this:
>> x=0:0.001:5;
>> y1=sin(x)+cos(1+x.^2)-1;
>> y2 = ((1/2).*x)-1;
>> plot(x,y1,x,y2);
>> find(y1==y2)
And getting this:
ans =
Empty matrix: 1-by-0
as an answer, and it is simply driving me crazy! I do not know why MATLAB and Scilab do not give me the intersection points. I have tried making the intervals smaller, like x = 0:0.0001:5;, but it did not change anything. How can I make it return the intersection values?
Thank you.
You have to remember that MATLAB is used to find numerical solutions to problems. You are providing a discrete set of input points x=0:0.001:5; and asking it to calculate the discrete output points y1(x) and y2(x). This means that y1 and y2 are not continuous and don't necessarily intersect as their continuous counterparts do. I don't have MATLAB, so I did not run your code, but your discrete functions most likely do not intersect. That is to say, there is no index i where y1(i) and y2(i) are exactly equal. Instead, what you most likely want to do is look for points where y2-y1 is on one side of zero at a particular input and on the other side of zero at the next input. That would mean the functions' continuous counterparts crossed somewhere in between.
The case where the functions meet but don't cross is a little more tricky but the same kind of idea.
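A minimal sketch of that sign-change test, using the data from the question (the linear refinement on the last line is one optional way to estimate where the crossing falls inside each bracketing interval):

x  = 0:0.001:5;
y1 = sin(x) + cos(1 + x.^2) - 1;
y2 = ((1/2).*x) - 1;
d  = y1 - y2;
idx = find(d(1:end-1).*d(2:end) < 0);  % sign change between x(idx) and x(idx+1)
% linearly interpolate within each bracketing interval for a better estimate
x_cross = x(idx) - d(idx).*(x(idx+1) - x(idx))./(d(idx+1) - d(idx));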
EDIT:
This sort of thing is easiest to wrap your head around with an image, so I created one to illustrate what I mean.
Here I used many fewer points than you are trying to use, but the idea is the same. You can see that the continuous versions of y1 and y2 cross in several places, but what you're asking MATLAB to do is find a point in y1 that is equal to a point in y2 for identical values of x. In this image you can see that many are close, but your computer stores floating-point numbers to a very high precision, so the chances of them actually being equal are very small.
When you increase the number of sample points, the image starts to look more like its continuous counterpart.
The two existing answers explain why you can't find an exact intersection so easily. But what you really need is an answer to the question: what should you do instead to obtain precise intersections?
In your specific case, you know the analytical functions which you want to figure out the intersection of. You can use fzero with an (optionally anonymous) function to find the zero of the function defined by the difference of your two original functions:
y1fun = @(x) sin(x)+cos(1+x.^2)-1;
y2fun = @(x) ((1/2).*x)-1;
diff_fun = @(x) y1fun(x)-y2fun(x);
x0 = 1; % starting point for fzero's zero search
x_cross = fzero(diff_fun,x0);
Now, this will give you one zero of the difference function, i.e. one intersection of your functions. It turns out that finding every zero of a function is a challenging task. Generally you have to call fzero multiple times with various starting points x0. If you have an idea of what your functions look like, this is not hopeless at all.
So what happens if your functions are more messy? In the general case, you can use an interpolating function to play the part of y1fun and y2fun in the example above, for instance by using interp1:
% generate data
xdata = 0:0.001:5;
y1data = sin(xdata)+cos(1+xdata.^2)-1;
y2data = ((1/2).*xdata)-1;
y1fun = @(x) interp1(xdata,y1data,x);
y2fun = @(x) interp1(xdata,y2data,x);
x0 = 1; % starting point for fzero's zero search
x_cross = fzero(@(x)y1fun(x)-y2fun(x),x0);
which brings us back to the approach of the first example. Note that interp1 uses linear interpolation by default; depending on what your function looks like and how your data are scattered, you can choose other options. Also note the option for extrapolation (to be avoided).
So in both cases, you get one crossing for each call to fzero. By choosing the starting points carefully, you should be able to find all the zeros, as exactly as possible.
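One way to choose those starting points systematically is to combine the sign-change idea from the accepted answer with fzero (a sketch reusing xdata, y1data, y2data, y1fun, y2fun from the snippets above; each detected sign change gives fzero a guaranteed bracket):

d = y1data - y2data;
idx = find(d(1:end-1).*d(2:end) < 0);   % indices bracketing each crossing
x_cross = arrayfun(@(i) fzero(@(x) y1fun(x)-y2fun(x), ...
                              [xdata(i) xdata(i+1)]), idx);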
Maybe the two vectors do not have exactly equal values anywhere. You could instead search for points where the difference is small:
find(abs(y1-y2) < tolerance)
where tolerance = 0.001 is a suitably small number.

MATLAB: Approximate tomorrow's temperature with 2nd, 3rd and 4th polynomial using the Least Squares method

The following is Exercise 3 of a Numerical Analysis task I have to do as part of my university course on the subject.
Find an approximation of tomorrow's temperature based on the last 23
values of hourly temperature of your city ( Meteorological history for
Thessaloniki {The city of my univ} can be found here:
http://freemeteo.com)
You will approximate the temperature function with a polynomial of
2nd, 3rd and 4th degree, using the Least Squares method. Following
that, you will find the value of the function at the point that
interests you. Compare your approximations qualitatively and make a
note of the time and date you're doing the approximation on.
Maybe it's fatigue from doing the first two tasks without a break, or my lack of experience with numerical analysis, but I am completely stumped. I do not even know where to start.
I know it's disgusting to ask for a solution without even showing signs of effort, but I would appreciate anything. Leads, tutorials, outlines of the things I need to work on, one after the other, anything.
I'd be very much obliged to you.
NOTE: I am not able to use any MATLAB built-in approximation functions.
In general, if y is your data vector corresponding to the times t, and c is the coefficient vector you're interested in, then you need to solve the linear system
Ac = y
in a least-squares sense, where
A = bsxfun(@power, t(:), 0:n)
In MATLAB you can do this with mldivide:
c = A\y(:)
Example:
>> t = 0 : 0.1 : 20; %// Define some times
>> y = pi + 0.8*t - 3.2*t.^2; %// Create some synthetic data
>> y = y + randn(size(y)); %// Add some noise for good measure
>>
>> n = 2; %// The order of the polynomial for the fit
>> A = bsxfun(@power, t(:), 0:n); %// Design matrix
>> c = A\y(:) %// Solve for the coefficient vector
c =
3.142410118189416e+000
7.978077631488009e-001 %// Which works pretty well
-3.199865079047185e+000
But since you are not allowed to use any built-in functions, you can use this simple solution only to check your own results. You'll have to write an implementation of the equations given on (for example) Wolfram's MathWorld.
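For instance, a minimal sketch of such an implementation might form the normal equations (A'A)c = A'y and solve them with hand-rolled Gaussian elimination (the function name lsqpoly and every detail below are illustrative, not a reference implementation):

function c = lsqpoly(t, y, n)
% Least-squares fit of a degree-n polynomial without built-in fitting tools.
A = bsxfun(@power, t(:), 0:n);   % design matrix, as above
M = A.'*A;                       % normal equations: (A'A) c = A'y
b = A.'*y(:);
N = n + 1;
for k = 1:N-1                    % Gaussian elimination with partial pivoting
    [~,p] = max(abs(M(k:N,k)));  p = p + k - 1;
    M([k p],:) = M([p k],:);     b([k p]) = b([p k]);
    for i = k+1:N
        mult = M(i,k)/M(k,k);
        M(i,:) = M(i,:) - mult*M(k,:);
        b(i)   = b(i)   - mult*b(k);
    end
end
c = zeros(N,1);                  % back substitution
for i = N:-1:1
    c(i) = (b(i) - M(i,i+1:N)*c(i+1:N))/M(i,i);
end

Be aware that forming the normal equations squares the condition number of A; for a small problem like this it is acceptable, but backslash applied to A directly (as shown above) is numerically preferable when you're allowed to use it.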

What is the fastest method for solving exp(ax)-ax+c=0 for x in matlab

What is the least computationally time-consuming way to solve the following equation in MATLAB:
exp(ax)-ax+c=0
where a and c are constants and x is the value I'm trying to find?
Currently I am using the built-in solver function, and I know the solution is single-valued, but it is just taking longer than I would like.
Just wanting something to run more quickly is insufficient for that to happen.
And, sorry, but if fzero is not fast enough then you won't do much better for a general root finding tool.
If you aren't using fzero, then why not? After all, that IS the built-in solver you did not name. (BE EXPLICIT! Otherwise we must guess.) Perhaps you are using solve, from the Symbolic Math Toolbox. It will be slower, since it is a symbolic tool.
Having said the above, I might point out that you may be able to improve things by recognizing that this is really a problem with a single parameter, c. That is, transform the problem to solving
exp(y) - y + c = 0
where
y = ax
Once you know the value of y, divide by a to get x.
Of course, this way of looking at the problem makes it obvious that you have made an incorrect statement, that the solution is single valued. There are TWO solutions for any negative value of c less than -1. When c = -1, the solution is unique, and for c greater than -1, no solutions exist in real numbers. (If you allow complex results, then there will be solutions there too.)
So if you MUST solve the above problem frequently and fzero was inadequate, then I would consider a spline model, where I had precomputed solutions to the problem for a sufficient number of distinct values of c. Interpolate that spline model to get a predicted value of y for any c.
If I needed more accuracy, I might take a single Newton step from that point.
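A minimal sketch of that idea (the grid limits, the query value cq, and a are invented for illustration; this tracks only the negative branch):

a  = 2;  cq = -3;                       % hypothetical inputs
cgrid = linspace(-10, -1.001, 200);     % region where two real roots exist
% precompute the negative root of exp(y) - y + c = 0 for each c on the grid;
% [c-1, 0] brackets that root, since f(c-1) = exp(c-1) + 1 > 0 and f(0) = 1 + c < 0
yneg = arrayfun(@(c) fzero(@(y) exp(y) - y + c, [c-1, 0]), cgrid);
yhat = interp1(cgrid, yneg, cq, 'spline');               % spline prediction
yhat = yhat - (exp(yhat) - yhat + cq)/(exp(yhat) - 1);   % one Newton step
x = yhat/a;                             % recover x from y = a*x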
In the event that you can use the Lambert W function, then solve actually does give us a solution for the general problem. (As you see, I am just guessing what you are trying to solve this with, and what are your goals. Explicit questions help the person trying to help you.)
solve('exp(y) - y + c')
ans =
c - lambertw(0, -exp(c))
The zero first argument to lambertw yields the negative solution. In fact, we can use lambertw to give us both the positive and negative real solutions for any c no larger than -1.
X = @(c) c - lambertw([0 -1],-exp(c));
X(-1.1)
ans =
-0.48318 0.41622
X(-2)
ans =
-1.8414 1.1462
Solving your system symbolically
syms a c x;
fx0 = solve(exp(a*x)-a*x+c==0,x)
which results in
fx0 =
(c - lambertw(0, -exp(c)))/a
As @woodchips pointed out, the Lambert W function has two primary branches, W0 and W−1. The solution given is with respect to the upper (or principal) branch, denoted W0. Your equation actually has an infinite number of complex solutions, one for each branch Wk (the W0 and W−1 solutions are real if c ≤ −1). In MATLAB, lambertw is only implemented for symbolic inputs and is thus a very slow way of solving your equation if you're interested in numerical (double precision) solutions.
If you wish to solve such equations numerically in an efficient manner, you might look at Corless, et al. 1996. But as long as your parameter c ≤ −1, i.e., -exp(c) is in [−1/e, 0), and you're interested in the W0 branch, you can use the MATLAB code that I wrote to answer a similar question at Math.StackExchange. This code should be much more efficient than a naïve approach with fzero.
If your values of c are greater than −1, or you want the solution corresponding to a different branch, then your solution may be complex-valued and you won't be able to use the simple code I linked to above. In that case, you can more fully implement the function by reading the Corless, et al. 1996 paper, or you can try converting the Lambert W to a Wright ω function: W0(z) = ω(log(z)), W−1(z) = ω(log(z)−2πi). In your case, using MATLAB's wrightOmega, the W0 branch corresponds to:
fx0 =
(c - wrightOmega(log(-exp(c))))/a
and the W−1 branch to:
fxm1 =
(c - wrightOmega(log(-exp(c))-2*sym(pi)*1i))/a
If c is real, then the above reduces to
fx0 =
(c - wrightOmega(c+sym(pi)*1i))/a
and
fxm1 =
(c - wrightOmega(c-sym(pi)*1i))/a
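As a quick consistency check (assuming a = 1), these reproduce the two lambertw values computed for c = -2 earlier in this thread:

c = sym(-2);  a = 1;
fx0  = double((c - wrightOmega(c + sym(pi)*1i))/a)   % -1.8414 (W0 branch)
fxm1 = double((c - wrightOmega(c - sym(pi)*1i))/a)   %  1.1462 (W-1 branch)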
MATLAB's wrightOmega function is also symbolic-only, but I have written a double-precision implementation (based on Lawrence, et al. 2012) that you can find on my GitHub here and that is 3+ orders of magnitude faster than evaluating the function symbolically. As your problem is technically in terms of a Lambert W, it may be more efficient, and possibly more numerically accurate, to implement that more complicated function for the regime of interest (the difference is due to the log transformation and the extra evaluation of a complex log). But feel free to test.

Metropolis-Hastings algorithm, using a proposal distribution other than a Gaussian in Matlab

I am currently working on my final year project for my mathematics degree which is based on giving an overview of the Metropolis-Hastings algorithm and some numerical examples.
So far I have got some great results by using my proposal distribution as a Gaussian, and sampling from a few other distributions, however I am trying to go one step further by using a different proposal distribution.
So far I have got this code (I am using MATLAB); however, with limited resources online about using different proposals, it is hard to tell if I am close at all. In reality I am not too sure how to attempt this (especially as it gives no useful output so far).
It would be fantastic if someone could lend a hand if they know or forward me to some easily accessible information (I understand that I am not just asking coding advice, but Mathematics as well).
So, I want to sample from a Gaussian using a Laplace proposal distribution; this is my code so far:
n = 1000; %%%%number of iterations
x(1) = -3; %%%%Generate a starting point
%%%%Target distribution: Gaussian:
strg = '1/(sqrt(2*pi*(sig)))*exp(-0.5*((x - mu)/sqrt(sig)).^2)';
tnorm = inline(strg, 'x', 'mu', 'sig');
mu = 1; %%%%Gaussian Parameters (I will be estimating these from my markov chain x)
sig = 3;
%%%%Proposal distribution: Laplace:
strg = '(1/(2*b))*exp((-1)*abs(x - mu)/b)';
laplace = inline(strg, 'x', 'b', 'mu');
b = 2; %%%%Laplace parameter, I will be using my values for y and x(i-1) for mu
%%%%Generate markov chain by acceptance-rejection
for i = 2:n
    %%%%Generate a candidate from the proposal distribution
    y = laplace(randn(1), b, x(i-1));
    %%%%Generate a uniform for comparison
    u = rand(1);
    alpha = min([1, (tnorm(y, mu, sig)*laplace(x(i-1), b, y))/(tnorm(x(i-1), mu, sig)*laplace(y, b, x(i-1)))]);
    if u <= alpha
        x(i) = y;
    else
        x(i) = x(i-1);
    end
end
If anyone can tell me whether the above is completely wrong or going about it in the wrong way, or whether there are just a few mistakes (I am very wary that my generation of 'y' in the for loop is completely wrong), that would be fantastic.
Thanks, Tom
For reference, this has been solved on another site by @ripegraph: my method for generating random numbers from the Laplace distribution was incorrect and should actually be done as described here: http://en.wikipedia.org/wiki/Laplace_distribution#Generating_random_variables_according_to_the_Laplace_distribution
He also noted that the Laplace distribution is symmetric, so the proposal density ratio cancels and does not need to appear in the acceptance ratio at all.
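Putting those two corrections together, the body of the loop might look like this (a sketch using the inverse-CDF Laplace sampler from the Wikipedia page cited above, and reusing tnorm, x, b, mu, sig from the code in the question):

u = rand(1) - 0.5;                               % Uniform(-1/2, 1/2)
y = x(i-1) - b*sign(u)*log(1 - 2*abs(u));        % Laplace(x(i-1), b) draw
% symmetric proposal, so the Hastings ratio reduces to the target ratio
alpha = min(1, tnorm(y, mu, sig)/tnorm(x(i-1), mu, sig));
if rand(1) <= alpha
    x(i) = y;
else
    x(i) = x(i-1);
end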
After doing a bit more research, I found that if you have X~Gamma(v/2, 2), then X~ChiSquare(v), which makes for a much better example of a non-Gaussian proposal. To use this example, however, you need to use the independence sampler: http://www.math.mcmaster.ca/canty/teaching/stat744/Lectures5.pdf (slide 89).
Hope this might be useful for someone.

vector valued limits in matlab integral

Is it possible to use vector limits in any MATLAB integration function? I have to avoid for loops because of the speed of my program. Can you please give me a clue on how to do it?
k=0:5
f=@(x)x^2
quad(f,k,k+1)
If somebody needs it, I found the answer to my question: quad with vector limit
I will try to give you an answer, based on my experience with the quad function.
Starting from this:
k=0:5;
f=@(x) x.^2;
Notice the difference between your definition of f (incorrect) and mine (correct).
If you only mean to integrate f over the range (0,5), you can simply call
quad(f,k(1),k(end))
Without a function handle, you may reach the same result in a different way by making use of trapz:
x = 0:5;
y = x.^2;
trapz(x,y)
If, instead, you mean to perform a step-by-step integration over the small ranges [k(i),k(i+1)], you may type
arrayfun(@(ii) quad(f,k(ii),k(ii+1)),1:numel(k)-1)
For the sake of convenience, notice that
sum(arrayfun(@(ii) quad(f,k(ii),k(ii+1)),1:numel(k)-1)) == quad(f,k(1),k(end))
I hope this helps.