Interpolation using Chebyshev points - MATLAB

Interpolate the Runge function of Example 10.6 at Chebyshev points for n from 10 to 170
in increments of 10. Calculate the maximum interpolation error on the uniform evaluation
mesh x = -1:.001:1 and plot the error vs. polynomial degree as in Figure 10.8 using
semilogy. Observe spectral accuracy.
The Runge function is given by: f(x) = 1 / (1 + 25x^2)
My code so far:
x = -1:0.001:1;
n = 170;
i = 10:10:170;
cx = cos(((2*i + 1)/(2*(n+1)))*pi); %chebyshev pts
y = 1 ./ (1 + 25*x.^2); %true fct
%chebyshev polynomial, don't know how to construct using matlab
yc = polyval(c, x); %graph of approx polynomial fct
plot(x, yc);
mErr = (1 / ((2.^n).*(n+1)!))*%n+1 derivative of f evaluated at max x in [-1,1], not sure how to do this
%plotting stuff
I know very little MATLAB, so I am struggling to create the interpolating polynomial. I did some searching, but I was confused by the existing functions, as I didn't find one that simply takes in points and the function to be interpolated. I am also a bit confused about whether I should be doing i = 0:1:n and n = 10:10:170, or whether n is fixed here. Any help is appreciated, thank you

Since you know very little about MATLAB, I will try to explain everything step by step:
First, to visualize the Runge function, you can type:
f = @(x) 1./(1+25*x.^2); % Runge function
% plot Runge function over [-1,1];
x = -1:1e-3:1;
y = f(x);
figure;
plot(x,y); title('Runge function'); xlabel('x'); ylabel('y');
The @(x) part of the code is a function handle, a very useful feature of MATLAB. Notice the function is properly vectorized, so it can receive as an argument a scalar or an array. The plot function is straightforward.
To understand the Runge phenomenon, consider a vector of 10 linearly spaced points on [-1,1] and use these points to obtain the interpolating (Lagrange) polynomial. You get the following:
% 10 linearly spaced points
xc = linspace(-1,1,10);
yc = f(xc);
p = polyfit(xc,yc,9); % gives the coefficients of the polynomial of degree 9
hold on; plot(xc,yc,'o',x,polyval(p,x));
The polyfit function does a polynomial curve fitting - it obtains the coefficients of the interpolating polynomial, given the points x, y and the degree of the polynomial n. You can easily evaluate the polynomial at other points with the polyval function.
Observe that, close to the endpoints of the domain, you get an oscillating polynomial and the interpolation is not a good approximation of the function. As a matter of fact, you can plot the absolute error, comparing the value of the function f(x) and the interpolating polynomial p(x):
plot(x,abs(y-polyval(p,x))); xlabel('x');ylabel('|f(x)-p(x)|');title('Error');
This error can be reduced if, instead of using a linearly spaced vector, you use other points for the interpolation. A good choice is the Chebyshev nodes, which should reduce the error. As a matter of fact, notice that:
% find 10 Chebyshev nodes and mark them on the plot
n = 10;
k = 1:n; % node index
xc = cos((2*k-1)/2/n*pi); % Chebyshev nodes
yc = f(xc); % function evaluated at Chebyshev nodes
hold on;
plot(xc,yc,'o')
% find polynomial to interpolate data using the Chebyshev nodes
p = polyfit(xc,yc,n-1); % gives the coefficients of the polynomial of degree n-1 = 9
plot(x,polyval(p,x),'--'); % plot polynomial
legend('Runge function','Chebyshev nodes','interpolating polynomial','location','best')
Notice how the error is reduced close to the endpoints of the domain. You no longer get the highly oscillatory behaviour of the interpolating polynomial. If you plot the error, you will observe:
plot(x,abs(y-polyval(p,x))); xlabel('x');ylabel('|f(x)-p(x)|');title('Error');
If you now change the number of Chebyshev nodes, you will get an even better approximation. A small modification to the code lets you run it for different numbers of nodes. You can store the maximum error and plot it as a function of the number of nodes:
n=1:20; % number of nodes
% pre-allocation for speed
e_ln = zeros(1,length(n)); % error for the linearly spaced interpolation
e_cn = zeros(1,length(n)); % error for the chebyshev nodes interpolation
for ii=1:length(n)
% linearly spaced vector
x_ln = linspace(-1,1,n(ii)); y_ln = f(x_ln);
p_ln = polyfit(x_ln,y_ln,n(ii)-1);
e_ln(ii) = max( abs( y-polyval(p_ln,x) ) );
% Chebyshev nodes
k = 1:n(ii); x_cn = cos((2*k-1)/2/n(ii)*pi); y_cn = f(x_cn);
p_cn = polyfit(x_cn,y_cn,n(ii)-1);
e_cn(ii) = max( abs( y-polyval(p_cn,x) ) );
end
figure
semilogy(n,e_ln,n,e_cn); % log scale, as in the question, makes the spectral accuracy visible
xlabel('no of points'); ylabel('maximum absolute error');
legend('linearly spaced','chebyshev nodes','location','best')
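The exercise itself goes up to degree 170, where polyfit's Vandermonde-based approach becomes badly ill-conditioned (polyfit will warn about it). A more stable route is barycentric Lagrange interpolation; below is a minimal sketch, assuming the standard barycentric weights for Chebyshev points of the first kind (this is a swapped-in technique, not the polyfit approach above):
f = @(x) 1./(1+25*x.^2); % Runge function
x = -1:1e-3:1; % uniform evaluation mesh
nn = 10:10:170; % polynomial degrees
err = zeros(size(nn));
for jj = 1:length(nn)
n = nn(jj);
k = (0:n).';
xc = cos((2*k+1)*pi/(2*(n+1))); % Chebyshev points of the first kind
w = (-1).^k .* sin((2*k+1)*pi/(2*(n+1))); % barycentric weights for these nodes
fc = f(xc);
num = zeros(size(x)); den = zeros(size(x));
for m = 1:n+1
num = num + w(m)*fc(m)./(x - xc(m));
den = den + w(m)./(x - xc(m));
end
p = num./den;
% if an evaluation point coincides exactly with a node, 0/0 gives NaN; patch it
for m = 1:n+1
p(x == xc(m)) = fc(m);
end
err(jj) = max(abs(f(x) - p));
end
figure; semilogy(nn, err, 'o-');
xlabel('polynomial degree n'); ylabel('max interpolation error');
The error should decay roughly geometrically with n (spectral accuracy) until it bottoms out near machine precision.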

Related

Curve fitting using for loop for polynomial up to degree i?

I have this hard coded version which fits data to a curve for linear, quadratic and cubic polynomials:
for some data x and a function y
M=[x.^0 x.^1];
L=[x.^0 x.^1 x.^2];
linear = (M'*M)\(M'*y);
plot(x, linear(1)+linear(2)*x, ';linear;r');
deg2 = (L'*L)\(L'*y);
plot(x, deg2(1)+deg2(2)*x+deg2(3)*(x.*x), ';quadratic plot;b');
I am wondering how I can turn this into a for loop to plot curves for degree n polynomials. The part I'm stuck on is the plotting part: how would I be able to translate the increase in the number of coefficients into the for loop?
what I have:
for i = 1:5 % say we're trying to plot curves up to degree 5 polynomials...
curr=x.^(0:i);
degI = (curr'*curr)\(curr'*y);
plot(x, ???) % what goes in here<-
end
If it is only the plotting, you can use the polyval function to evaluate polynomials of the desired degree by supplying a vector of coefficients.
% For example, some random coefficients for a 5th-order polynomial
% degI = (curr'*curr)\(curr'*y) % Your case
degI = [3.2755 0.8131 0.5950 2.4918 4.7987 1.5464]; % for a 5th-order polynomial
x = linspace(-2, 2, 10000);
hold on
% Using polyval to loop over the degree of the polynomials
for i = 1:length(degI)
plot(x, polyval(degI(1:i), x)) % polyval treats the vector as descending-power coefficients
end
This gives all the polynomials in one plot.
I believe this should answer your question exactly. You just have to be careful with the matrix dimensions (x must be a column vector for x.^(0:i) to produce the design matrix).
for i = 1:5
curr=x.^(0:i);
degI = (curr'*curr)\(curr'*y);
plot(x, x.^(0:i)*degI)
end
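For completeness, here is a self-contained sketch of the same loop using polyval instead; the data x, y and the degree range are made up for illustration. Note that the normal-equation solution returns coefficients with the constant term first, while polyval expects descending powers, hence the flip:
x = linspace(-1, 1, 50).'; % column vector, so x.^(0:i) builds the design matrix
y = sin(2*x) + 0.1*randn(50, 1); % made-up noisy data
figure; plot(x, y, 'k.'); hold on
for i = 1:5
curr = x.^(0:i); % columns are x.^0, x.^1, ..., x.^i
degI = (curr'*curr)\(curr'*y); % least-squares coefficients, constant term first
plot(x, polyval(flip(degI), x)) % polyval wants descending powers, so flip
end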

MATLAB fft vs. mathematical Fourier transform

I am using the following code to generate an fft and a mathematical Fourier transform of a signal. I then want to mathematically recreate the original signal from the fft. This works for the mathematical signal but not for the fft, since it is a discrete transform. Does anyone know what change I can make to my inverse transform equation so that it works for the fft?
clear all; clc;
N = 1024;
N2 = 1023;
SNR = -10;
fs = 1024;
Ts = 1/fs;
t = (0:(N-1))*Ts;
x = 0.5*sawtooth(2*2*pi*t);
x1 = fft(x);
Magnitude1 = abs(x1);
Phase1 = angle(x1)*360/(2*pi);
for m = 1:1024
f(m) = m; % Sinusoidal frequencies
a = (2/N)*sum(x.*cos(2*pi*f(m)*t)); % Cosine coeff.
b = (2/N)*sum(x.*sin(2*pi*f(m)*t)); % Sine coeff
Magnitude(m) = sqrt(a^2 + b^2); % Magnitude spectrum
Phase(m) = -atan2(b,a); % Phase spectrum
end
subplot(2,1,1);
plot(f,Magnitude1./512); % Plot magnitude spectrum
......Labels and title.......
subplot(2,1,2);
plot(f,Magnitude,'k'); % Plot phase spectrum
ylabel('Phase (deg)','FontSize',14);
pause();
x2 = zeros(1,1024); % Waveform vector
for m = 1:24
f(m) = m; % Sinusoidal frequencies
x2 = (1/m)*(x2 + Magnitude1(m)*cos(2*pi*f(m)*t + Phase1(m)));
end
x3 = zeros(1,1024); % Waveform vector
for m = 1:24
f(m) = m; % Sinusoidal frequencies
x3 = (x3 + Magnitude(m)*cos(2*pi*f(m)*t + Phase(m)));
end
plot(t,x,'--k'); hold on;
plot(t,x2,'k');
plot(t,x3,'b');
There are a few comments to make about the Fourier transform, and I hope I can explain everything for you. Also, I don't know what you mean by "mathematical Fourier transform", as none of the expressions in your code resembles the Fourier series of the sawtooth wave.
To understand exactly what the fft function does, we can do things step by step.
First, following your code, we create and plot one period of the sawtooth wave.
n = 1024;
fs = 1024;
dt = 1/fs;
t = (0:(n-1))*dt;
x = 0.5*sawtooth(2*pi*t);
figure; plot(t,x); xlabel('t [s]'); ylabel('x');
We can now calculate a few things.
First, the Nyquist frequency, the maximum detectable frequency from the samples.
f_max = 0.5*fs
f_max =
512
Also, the minimum detectable frequency,
f_min = 1/t(end)
f_min =
1.000977517106549
Calculate now the discrete Fourier transform with MATLAB function:
X = fft(x)/n;
This function obtains the complex coefficients of each term of the discrete Fourier transform. Notice it calculates the coefficients using the exp notation, not in terms of sines and cosines. The division by n guarantees that the first coefficient is equal to the arithmetic mean of the samples.
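As a quick sanity check of that last claim (a one-liner sketch):
abs(X(1) - mean(x)) % ~0 up to roundoff: the first coefficient equals the sample mean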
If you want to plot the magnitude/phase of the transformed signal, you can type:
f = linspace(f_min,f_max,n/2); % frequency vector
a0 = X(1); % constant amplitude
X(1)=[]; % we don't have to plot the first component, as it is the constant amplitude term
XP = X(1:n/2); % keep only the first half of the array; the second half mirrors it (complex-conjugate symmetry of a real signal's spectrum)
figure
subplot(2,1,1)
plot(f,abs(XP)); ylabel('Amplitude');
subplot(2,1,2)
plot(f,angle(XP)); ylabel('Phase');
xlabel('Frequency [Hz]')
What does this plot mean? It shows the amplitude and phase of the complex coefficients of the terms in the Fourier series that represent the original signal (the sawtooth wave). You can use these coefficients to obtain the signal approximation as a (truncated) Fourier series. Of course, to do that, we need the whole transform (not only the first half, which is what is usually plotted).
X = fft(x)/n;
amplitude = abs(X);
phase = angle(X);
f = fs*[(0:(n/2)-1)/n (-n/2:-1)/n]; % frequency vector with all components
% we calculate the value of x for each time step
for j=1:n
x_approx(j) = 0;
for k=1:n % summation done using a for loop
x_approx(j) = x_approx(j)+X(k)*exp(2*pi*1i/n*(j-1)*(k-1));
end
end
Notice: the code above is for clarification and is not intended to be well coded. The summation can be done in MATLAB in a much better way than with a for loop, and some warnings will pop up, telling the user to preallocate x_approx for speed.
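For reference, here is a vectorized sketch of the same summation. Since the sum is just the inverse DFT, ifft gives it directly; the matrix form below only mirrors the double loop:
j = (0:n-1).'; k = 0:n-1;
E = exp(2*pi*1i/n*(j*k)); % inverse-DFT matrix (the 1/n factor is already in X)
x_approx = (E*X.').'; % identical (up to roundoff) to the double-loop result
% equivalently: x_approx = ifft(X)*n;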
The above code calculates x(ti) for each time ti, using the terms of the Fourier series. If we plot both the original signal and the approximated one, we get:
figure
plot(t,x,t,x_approx)
legend('original signal','signal from fft','location','best')
The original signal and the approximated one are nearly equal. As a matter of fact,
norm(x-x_approx)
ans =
1.997566360514140e-12
which is almost zero, but not exactly zero.
Also, the plot above will issue a warning, due to the use of complex coefficients when calculating the approximated signal:
Warning: Imaginary parts of complex X and/or Y arguments ignored
But you can check that the imaginary term is very close to zero. It is not exactly zero due to roundoff errors in the computations.
norm(imag(x_approx))
ans =
1.402648396024229e-12
Notice in the code above how to interpret and use the results from the fft function, and how they are represented in exp form rather than in terms of sines and cosines, as in your code.

Obtaining a 2D interpolation polynomial in Matlab

I have three vectors: one of X locations, another of Y locations, and a third with the values of f(x, y). I want to find the interpolating polynomial as an algebraic expression (using MATLAB), since I will later use the result in an optimization problem in AMPL. As far as I know, there is no function that returns the interpolation polynomial itself.
I have tried https://la.mathworks.com/help/matlab/ref/griddedinterpolant.html, but this function only gives the interpolated values at certain points.
I have also tried https://la.mathworks.com/help/matlab/ref/triscatteredinterp.html as suggested in Functional form of 2D interpolation in Matlab, but the output isn't the coefficients of the polynomial. I cannot see them; they seem to be locked inside an opaque object.
This is a small program that I have done to test what I am doing:
close all
clear
clc
[X,Y] = ndgrid(1:10,1:10);
V = X.^2 + 3*(Y).^2;
F = griddedInterpolant(X,Y,V,'cubic');
[Xq,Yq] = ndgrid(1:0.5:10,1:0.5:10);
Vq = F(Xq,Yq);
mesh(Xq,Yq,Vq)
figure
mesh(X, Y, V)
I want an output that, instead of returning the value at grid points, returns whatever was used to calculate those values. I am aware that this can be done in Mathematica with https://reference.wolfram.com/language/ref/InterpolatingPolynomial.html, so I find it strange that MATLAB can't.
You can use fit if you have the curve fitting toolbox.
If that's not the case, you can use a simple regression. Taking your example:
% The example data
[X,Y] = ndgrid(1:10,1:10);
V = X.^2 + 3*(Y).^2;
% The size of X
s = size(X(:),1);
% Let's suppose that you want to fit a polynomial of degree 2.
% Create all the possible terms of a polynomial of degree 2:
% cst x y x^2 y^2 x*y
A = [ones(s,1), X(:), Y(:), X(:).^2, Y(:).^2, X(:).*Y(:)];
% Then using mldivide
p = A\V(:)
% We obtain:
p =
0 % cst
0 % x
0 % y
1 % x^2
3 % y^2
0 % x*y
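From this coefficient vector, the explicit expression is p(1) + p(2)*x + p(3)*y + p(4)*x^2 + p(5)*y^2 + p(6)*x*y, which is the algebraic form you can carry over to AMPL. A small usage sketch (the handle name poly2 is just illustrative):
poly2 = @(p,xq,yq) p(1) + p(2)*xq + p(3)*yq + p(4)*xq.^2 + p(5)*yq.^2 + p(6)*xq.*yq;
poly2(p, 2.5, 3.5) % returns 43 = 2.5^2 + 3*3.5^2, matching V = X.^2 + 3*Y.^2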

Confidence intervals for linear curve fit under constraints in MATLAB

I have fitted a straight line to a dataset with 68 samples, under the constraint that the line passes through (x0,y0), using the function lsqlin in MATLAB. How can I find the confidence intervals for this fit?
My code (Source):
I import the dataset containing x and y vectors from a mat file, which also contains the values of constraints x0 and y0.
n = 1; % Degree of polynomial to fit
V(:,n+1) = ones(length(x),1,class(x)); %V=Vandermonde matrix for 'x'
for j = n:-1:1
V(:,j) = x.*V(:,j+1);
end
d = y; % 'd' is the vector of target values, 'y'.
% There are no inequality constraints in this case, i.e.,
A = [];b = [];
% We use linear equality constraints to force the curve to hit the required point. In
% this case, 'Aeq' is the Vandermonde matrix for 'x0'
Aeq = x0.^(n:-1:0);
% and 'beq' is the value the curve should take at that point
beq = y0;
%%
[p, resnorm, residual, exitflag, output, lambda] = lsqlin(V, d, A, b, Aeq, beq);
%%
% We can then use POLYVAL to evaluate the fitted curve
yhat = polyval( p, x );
The function bootci can be used to find confidence intervals when using lsqlin. Here's how it can be used:
ci = bootci(68,{@(x,y)func(x,y),x,y},'type','student');
The first argument is the number of bootstrap samples to draw (here set to 68, matching the number of data points, although it need not; larger values such as 1000 are common).
The function in the second argument computes the statistic for which you need to find the confidence intervals. In this case, the statistic is the vector of coefficients of our fitted line. Hence, the function func(x,y) should return the regression coefficients produced by lsqlin. The inputs to this function are the dataset vectors x and y.
The remaining arguments are name-value pairs; 'type','student' selects a studentized bootstrap confidence interval. You can get an idea of the distribution of your residuals by plotting a histogram:
histogram(residual); % 'residual' is returned by the lsqlin call above
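For concreteness, here is a hedged sketch of what func could look like; the name fitfun, the options, and the 1000 resamples are illustrative choices, and x0, y0 are the constraint point from the question:
opts = optimoptions('lsqlin','Display','off'); % silence lsqlin's per-call output
% refit the constrained line on each bootstrap resample; returns [slope; intercept]
fitfun = @(x,y) lsqlin([x(:), ones(numel(x),1)], y(:), [], [], [x0, 1], y0, [], [], [], opts);
ci = bootci(1000, {fitfun, x, y}, 'type', 'student');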

Finding the best monotonic curve fit

Edit: Some time after I asked this question, an R package called MonoPoly (available here) came out that does exactly what I want. I highly recommend it.
I have a set of points I want to fit a curve to. The curve must be monotonic (never decreasing in value) i.e. the curve can only go upward or stay flat.
I originally had been polyfitting my results and this had been working great until I found a particular dataset. The polyfit for data in this dataset was non-monotonic.
I did some research and found a possible solution in this post:
Use lsqlin. Constrain the first derivative to be non-negative at both
ends of the domain of interest.
I'm coming from a programming rather than a math background, so this is a little beyond me. I don't know how to constrain the first derivative to be non-negative, as suggested. Also, I think in my case I need a curve, so I should use lsqcurvefit, but I don't know how to constrain it to produce monotonic curves.
Further research turned up this post recommending lsqcurvefit but I can't figure out how to use the important part:
Try this non-linear function F(x) also. You use it together with
lsqcurvefit but it require a start guess on the parameters. But it is
a nice analytic expression to give as a semi-empirical formula in a
paper or a report.
% Monotone function F(x), with variational constants c0, c1, c2, c3:
%   F(x) = c3 + exp(c0 - c1^2/(4*c2)) * sqrt(pi) ...
%          * erfi((c1 + 2*c2*x)/(2*sqrt(c2))) / (2*sqrt(c2))
% where erfi(x) = -i*erf(i*x) (see Mathematica); F looks much like x^3.
% Its derivative is non-negative, like a probability density:
%   f(x) = dF/dx = exp(c0 + c1*x + c2*x.^2) >= 0
I must have a monotonic curve, but I'm not sure how to do it, even with all of this information. Would a random number be enough for a "start guess"? Is lsqcurvefit best? How can I use it to produce a best-fitting monotonic curve?
Thanks
Here is a simple solution using lsqlin. The derivative constraint is enforced at each data point; this could easily be modified if needed.
Two coefficient matrices are needed: one (C) for the least-squares error calculation and one (A) for the derivatives at the data points.
% Following lsqlin's notations
%--------------------------------------------------------------------------
% PRE-PROCESSING
%--------------------------------------------------------------------------
% for reproducibility
rng(125)
degree = 3;
n_data = 10;
% dummy data
x = rand(n_data,1);
d = rand(n_data,1) + linspace(0,1,n_data).';
% limit on derivative - in each data point
b = zeros(n_data,1);
% coefficient matrix
C = nan(n_data, degree+1);
% derivative coefficient matrix
A = nan(n_data, degree+1);
% loop over polynomial terms
for ii = 1:degree+1
C(:,ii) = x.^(ii-1);
A(:,ii) = (ii-1)*x.^(ii-2);
end
%--------------------------------------------------------------------------
% FIT - LSQ
%--------------------------------------------------------------------------
% Unconstrained
% p1 = pinv(C)*d
p1 = fliplr((C\d).')
p2 = polyfit(x,d,degree)
% Constrained
p3 = fliplr(lsqlin(C,d,-A,b).')
%--------------------------------------------------------------------------
% PLOT
%--------------------------------------------------------------------------
xx = linspace(0,1,100);
plot(x, d, 'x')
hold on
plot(xx, polyval(p1, xx))
plot(xx, polyval(p2, xx),'--')
plot(xx, polyval(p3, xx))
legend('data', 'lsq-pseudo-inv', 'lsq-polyfit', 'lsq-constrained', 'Location', 'southoutside')
xlabel('X')
ylabel('Y')
For the specified input the fitted curves:
Actually this code is more general than what you requested, since the degree of the polynomial can be changed as well.
EDIT: enforce the derivative constraint at additional points
The issue pointed out in the comments is that the derivative constraint is enforced only at the data points; between those, no checks are performed. Below is a solution that alleviates this problem. The idea: convert the problem to an unconstrained optimization by using a penalty term.
Note that it uses a term pen to penalize violation of the derivative constraint, so the result is not a true least-squares solution. Additionally, the result depends on the choice of penalty function.
function lsqfit_constr
% Following lsqlin's notations
%--------------------------------------------------------------------------
% PRE-PROCESSING
%--------------------------------------------------------------------------
% for reproducibility
rng(125)
degree = 3;
% data from comment
x = [0.2096 -3.5761 -0.6252 -3.7951 -3.3525 -3.7001 -3.7086 -3.5907].';
d = [95.7750 94.9917 90.8417 62.6917 95.4250 89.2417 89.4333 82.0250].';
n_data = length(d);
% number of equally spaced points to enforce the derivative
n_deriv = 20;
xd = linspace(min(x), max(x), n_deriv);
% limit on derivative - at each derivative-check point
b = zeros(n_deriv,1);
% coefficient matrix
C = nan(n_data, degree+1);
% derivative coefficient matrix
A = nan(n_deriv, degree+1);
% loop over polynomial terms
for ii = 1:degree+1
C(:,ii) = x.^(ii-1);
A(:,ii) = (ii-1)*xd.^(ii-2);
end
%--------------------------------------------------------------------------
% FIT - LSQ
%--------------------------------------------------------------------------
% Unconstrained
% p1 = pinv(C)*y
p1 = (C\d);
lsqe = sum((C*p1 - d).^2);
p2 = polyfit(x,d,degree);
% Constrained
[p3, fval] = fminunc(@error_fun, p1);
% correct format for polyval
p1 = fliplr(p1.')
p2
p3 = fliplr(p3.')
fval
%--------------------------------------------------------------------------
% PLOT
%--------------------------------------------------------------------------
xx = linspace(-4,1,100);
plot(x, d, 'x')
hold on
plot(xx, polyval(p1, xx))
plot(xx, polyval(p2, xx),'--')
plot(xx, polyval(p3, xx))
% legend('data', 'lsq-pseudo-inv', 'lsq-polyfit', 'lsq-constrained', 'Location', 'southoutside')
xlabel('X')
ylabel('Y')
%--------------------------------------------------------------------------
% NESTED FUNCTION
%--------------------------------------------------------------------------
function e = error_fun(p)
% squared error
sqe = sum((C*p - d).^2);
der = A*p;
% penalty term - it is crucial to fine tune it
pen = -sum(der(der<0))*10*lsqe;
e = sqe + pen;
end
end
Gradient-free methods might be used to solve the problem while exactly enforcing the derivative constraint, for example:
[p3, fval] = fminsearch(@error_fun, p_ini); % p_ini: an initial guess, e.g. the unconstrained solution C\d
%--------------------------------------------------------------------------
% NESTED FUNCTION
%--------------------------------------------------------------------------
function e = error_fun(p)
% squared error
sqe = sum((C*p - d).^2);
der = A*p;
if any(der<0)
pen = Inf;
else
pen = 0;
end
e = sqe + pen;
end
fmincon with a non-linear constraint might be a better choice.
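In fact, because the fit is a polynomial, the derivative constraint is linear in the coefficients, so fmincon can take it as a plain linear inequality rather than a non-linear constraint. A minimal sketch, reusing C, d, A and b from the code above (p_start is just an illustrative name for the initial guess):
obj = @(p) sum((C*p - d).^2); % same squared error as in error_fun
p_start = C\d; % unconstrained solution as the initial guess
opts = optimoptions('fmincon','Display','off');
p4 = fmincon(obj, p_start, -A, b, [], [], [], [], [], opts); % -A*p <= 0 enforces derivative >= 0 at the xd points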
I'll let you work out the details and tune the algorithms. I hope this is sufficient.