MATLAB: polyval function for N greater than 1

I am trying to graph the polynomial fit of a 2D dataset in MATLAB.
This is what I tried:
rawTable = readtable('Test_data.xlsx','Sheet','Sheet1');
x = rawTable.A;
y = rawTable.B;
figure(1)
scatter(x,y)
c = polyfit(x,y,2);
y_fitted = polyval(c,x);
hold on
plot(x,y_fitted,'r','LineWidth',2)
rawTable.A and rawTable.B are randomly generated numbers (i.e. the x dataset cannot be written in a form like x=0:0.1:100).
The result: [figure: the resulting second-order polynomial fit]
But the result I expect looks like this (generated in Excel): [figure]
How can I graph the second-order polynomial fit in MATLAB?

I sense some confusion about what each of those MATLAB functions returns, so I'll clarify, with some detail. Expect some verbosity; a quick answer is available at the end.
c = polyfit(x,y,2) gives the coefficient vector of the polynomial fit. You can get fit information such as error estimates by following the documentation.
Call this polynomial P. In MATLAB, P is effectively the function P=@(x)c(1)*x.^2+c(2)*x+c(3).
For a single point X, polyval(c,X) returns the value of P(X). If x is a vector, polyval(c,x) is the vector [P(x(1)), P(x(2)), ...].
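As a quick sanity check (a minimal sketch with made-up coefficients, not part of the fix), you can confirm that polyval is just evaluating that quadratic:
% Minimal sketch: polyval(cdemo,X) is just the quadratic evaluated at X
cdemo = [2 -3 1];                         % made-up coefficients for 2x^2 - 3x + 1
X = 1.5;
polyval(cdemo, X)                         % returns 1
cdemo(1)*X.^2 + cdemo(2)*X + cdemo(3)     % same value, written out explicitly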
Evaluating P at your data points by itself does not show what the fit looks like. As a quick hack to see something visually, you can try plot(sort(x),polyval(c,sort(x)),'r','LineWidth',2), i.e. sort your data first and plot on those x-values.
However, it is only a hack because (a) your data set may be so irregularly spaced that the connecting line does not represent the function well, and (b) evaluating on your whole data set is unnecessary and inefficient.
The robust and 'standard' way to plot a 2D function of known analytical form in Matlab is as follows:
Define some evenly spaced x-values over the interval on which you want to plot the function, e.g. x=1:0.1:10 or x=linspace(0,1,100).
Evaluate the function on these x-values
Pass the two to plot(). plot() can either show the sampled points as markers or connect them with straight line segments, which is the default.
(Step 1 is usually called sampling or discretizing the interval, if you want a single word to describe the process.)
So, instead of using the x in your original data set, you should do something like:
t=linspace(min(x),max(x),100);
plot(t,polyval(c,t),'r','LineWidth',2)
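Putting it together with the script from the question, a sketch of the full workflow would be (assuming the same spreadsheet and column names as above):
% Sketch of the complete workflow, reusing the names from the question
rawTable = readtable('Test_data.xlsx','Sheet','Sheet1');
x = rawTable.A;
y = rawTable.B;

figure(1)
scatter(x,y)                       % raw data
hold on

c = polyfit(x,y,2);                % quadratic fit coefficients
t = linspace(min(x),max(x),100);   % evenly spaced evaluation points
plot(t,polyval(c,t),'r','LineWidth',2)
hold off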

Related

Levenberg-Marquardt algorithm for 3D data

I have a point cloud in 3D whose coordinates are stored in a 3D vector, and I would like to fit a nonlinear function to the point cloud.
Do you know if the lsqcurvefit algorithm implemented in MATLAB works for 3D data as well?
Do you have any example data that uses 'levenberg-marquardt' for 3D data using MATLAB?
options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt');
Yes, you can still use lsqcurvefit in 3D, but if you want to keep your code as simple as possible (see edit) I suggest the lsqnonlin function for multivariate nonlinear data fitting. The linked documentation page shows several examples, one of which uses the Levenberg-Marquardt algorithm in 2D. In 3D or higher, the usage is similar.
For instance, suppose you have a cloud of points in 3D whose coordinates are stored in the arrays x, y and z. Suppose you are looking for a fitting surface (no longer a curve, because you are in 3D) of the form z = exp(r1*x + r2*y), where r1 and r2 are coefficients to be found. You start by defining the following anonymous function
fun = @(r) exp(r(1)*x + r(2)*y) - z;
where r is an unknown 1x2 array whose entries will be your unknown coefficients (r1 and r2). We are ready to solve the problem:
r0 = [1,1]; % Initial guess for r
options = optimoptions(@lsqnonlin, 'Algorithm', 'levenberg-marquardt');
lsqnonlin(fun, r0, [], [], options)
You will get the output in the command window.
Tested on MATLAB 2018a.
Hope that helps.
EDIT: lsqcurvefit vs lsqnonlin
lsqcurvefit is basically a special case of lsqnonlin. The discussion of which is better in terms of speed and accuracy is broad and beyond the scope of this post. There are two reasons why I have suggested lsqnonlin:
You are free to take x, y, z as matrices instead of column vectors, as long as the dimensions match. If you use lsqcurvefit instead, your fun must take an additional argument xdata, defined as [x, y] where x and y are in column form (see the sketch after this list).
You are free to choose your fitting function fun to be implicit, that is of the form f(x,y,z)=0.
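For comparison, a hedged sketch of what the lsqcurvefit version of the same fit could look like, assuming x, y and z are the column vectors from the earlier example:
% Sketch: the same surface fit with lsqcurvefit, assuming x, y, z are column vectors
xdata = [x, y];                                             % bundle the independent variables column-wise
fun = @(r, xdata) exp(r(1)*xdata(:,1) + r(2)*xdata(:,2));   % model z = exp(r1*x + r2*y)
r0 = [1, 1];                                                % initial guess for [r1, r2]
options = optimoptions('lsqcurvefit', 'Algorithm', 'levenberg-marquardt');
r = lsqcurvefit(fun, r0, xdata, z, [], [], options)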

matlab: cdfplot of relative error

The figure in question is a cumulative distribution function (CDF) plot of the relative error (the code used to generate it is attached below). The relative error is defined as abs(measured-predicted)/measured. How should I interpret this, or what might be wrong, given that the plot is supposed to be a smooth curve?
X = load('measured.txt');
Xhat = load('predicted.txt');
idx = find(X>0);
x = X(idx);
xhat = Xhat(idx);
relativeError = abs(x-xhat)./(x);
cdfplot(relativeError);
The input data file is a 4x4 matrix with zeros on the diagonal and some unmeasured entries (represented by 0). I appreciate your kind help. Thanks!
The plot should be discontinuous because you are using discrete data. You are not plotting an analytic function with an explicit (or implicit) mapping from x to y; all you have is at most 16 points relating x and y.
The empirical CDF only "grows" when a new sample is counted; between samples its value stays flat, simply because there is no observation there to increase the cumulative frequency.
You can check the example in MathWorks' cdfplot documentation to understand the concept of an "empirical CDF". Again, only when a sample is observed does the CDF increase.
If you really want a smooth curve, either 1) add more points so that the staircase looks smoother, or 2) find a statistical model of whatever you are working on and plot its analytic CDF instead.
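To see the effect of the sample size, here is a small sketch with synthetic data (cdfplot requires the Statistics and Machine Learning Toolbox):
% Sketch: an empirical CDF is a staircase; more samples just make the steps smaller
small = rand(1,12);             % roughly the number of usable entries in a 4x4 matrix
large = rand(1,2000);           % many samples of the same distribution

figure
subplot(1,2,1); cdfplot(small); title('12 samples: visible steps')
subplot(1,2,2); cdfplot(large); title('2000 samples: looks smooth')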

Matlab plot function defined on a complex coordinate

I would like to plot some figures like this one (image not reproduced here):
- the axes being the real and imaginary parts of some complex-valued vector (usually either purely real or purely imaginary)
- with some 3D visualization, as in the given case
First, define your complex function as a function of (Re(x), Im(x)). In complex analysis, you can decompose any complex function into its real parts and imaginary parts. In other words:
F(x) = Re(x) + i*Im(x)
In the case of a two-dimensional grid, you can obviously extend to defining the function in terms of (x,y). In other words:
F(x,y) = Re(x,y) + i*Im(x,y)
In your case, I'm assuming you'd want the 2D approach. As such, let's use I and J to represent the real parts and imaginary parts separately. Also, let's start with a simple example, like cos(x) + i*sin(y), which is based on the very popular Euler exponential. It isn't exactly that function; I modified it slightly because the plot looks nicer.
Here are the steps you would do in MATLAB:
Define your function in terms of I and J
Make a set of points in both domains - something like meshgrid will work
Use a 3D visualization plot - You can plot the individual points, or plot it on a surface (like surf, or mesh).
NB: Because this is a complex-valued function and you were pretty ambiguous with your details, let's assume we are plotting the magnitude of the output.
Let's do this in code line by line:
% // Step #1
F = @(I,J) cos(I) + i*sin(J);
% // Step #2
[I,J] = meshgrid(-4:0.01:4, -4:0.01:4);
% // Step #3
K = F(I,J);
% // Let's make it look nice!
mesh(I,J,abs(K));
xlabel('Real');
ylabel('Imaginary');
zlabel('Magnitude');
colorbar;
This is the resultant plot that you get:
Let's step through this code slowly. Step #1 is an anonymous function defined in terms of I and J. Step #2 defines I and J as matrices, where each location in I and J gives you the real and imaginary coordinates at the matching spatial location to be evaluated in the complex function. I have defined both domains to be between [-4,4]. The first parameter spans the real axis while the second parameter spans the imaginary axis. Obviously change the limits as you see fit, and make sure the step size is small enough that the plot is smooth. Step #3 takes each complex coordinate pair and evaluates the function there. After that, you create a 3D mesh plot that puts the real and imaginary axes in the first two dimensions and the magnitude of the complex number in the third dimension. abs() takes the absolute value in MATLAB: if the contents of the matrix are real, it simply returns the magnitude of each number; if they are complex, it returns the magnitude / length of each complex value.
I have labeled the axes as well as placed a colorbar on the side to visualize the heights of the surface plot as colours. It also gives you an idea of how high and how long the values are in a more pleasing and visual way.
As a gentle push in your direction, let's take a slice out of this complex function. Let's make the real component equal to 0, while the imaginary components span between [-4,4]. Instead of using mesh or surf, you can use plot3 to plot your points. As such, try something like this:
F = @(I,J) cos(I) + i*sin(J);
J = -4:0.01:4;
I = zeros(1,length(J));
K = F(I,J);
plot3(I, J, abs(K));
xlabel('Real');
ylabel('Imaginary');
zlabel('Magnitude');
grid;
plot3 does not provide a grid by default, which is why the grid command is there. This is what I get:
As expected, if the function is purely imaginary, there should only be a sinusoidal contribution (i*sin(y)).
You can play around with this and add more traces if you need to.
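For instance, a small sketch (same F as above, written with 1i for the imaginary unit) overlays the purely imaginary and purely real slices on one set of axes:
% Sketch: overlay two slices of the same complex function
F = @(I,J) cos(I) + 1i*sin(J);
v = -4:0.01:4;
zeroline = zeros(size(v));

plot3(zeroline, v, abs(F(zeroline, v)), 'b')   % slice with Re(x) = 0
hold on
plot3(v, zeroline, abs(F(v, zeroline)), 'r')   % slice with Im(x) = 0
hold off
grid on
legend('Re(x) = 0', 'Im(x) = 0')
xlabel('Real'); ylabel('Imaginary'); zlabel('Magnitude')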
Hope this helps!

Find approximation of sine using least squares

I am doing a project where I find an approximation of the sine function using the least squares method. I can also use 12 values of my own choice. Since I couldn't figure out how to solve it, I thought of using the Taylor series for sine and then solving it as a polynomial of order 5. Here is my code:
%% Find the sine of the 12 known values
x=[0,pi/8,pi/4,7*pi/2,3*pi/4,pi,4*pi/11,3*pi/2,2*pi,5*pi/4,3*pi/8,12*pi/20];
y=zeros(12,1);
for i=1:12
y=sin(x);
end
n=12;
j=5;
%% Find the sums to populate the matrix A and matrix B
s1=sum(x);s2=sum(x.^2);
s3=sum(x.^3);s4=sum(x.^4);
s5=sum(x.^5);s6=sum(x.^6);
s7=sum(x.^7);s8=sum(x.^8);
s9=sum(x.^9);s10=sum(x.^10);
sy=sum(y);
sxy=sum(x.*y);
sxy2=sum( (x.^2).*y);
sxy3=sum( (x.^3).*y);
sxy4=sum( (x.^4).*y);
sxy5=sum( (x.^5).*y);
A=[n,s1,s2,s3,s4,s5;s1,s2,s3,s4,s5,s6;s2,s3,s4,s5,s6,s7;
s3,s4,s5,s6,s7,s8;s4,s5,s6,s7,s8,s9;s5,s6,s7,s8,s9,s10];
B=[sy;sxy;sxy2;sxy3;sxy4;sxy5];
Then in MATLAB I get this result:
>> a=A^-1*B
a =
-0.0248
1.2203
-0.2351
-0.1408
0.0364
-0.0021
However, when I try to substitute the values of a into the Taylor series and evaluate at, e.g., t=pi/2, I get wrong results:
>> t=pi/2;
fun=t-t^3*a(4)+a(6)*t^5
fun =
2.0967
Am I doing something wrong when I substitute the values of a into the Taylor series, or is my initial approach flawed?
Note: I can't use any built-in function.
If you need a least-squares approximation, simply decide on a fixed interval that you want to approximate on and generate some x abscissae on that interval (possibly equally spaced abscissae using linspace - or non-uniformly spaced as you have in your example). Then evaluate your sine function at each point such that you have
y = sin(x)
Then simply use the polyfit function (see the MATLAB documentation) to obtain the least-squares parameters:
b = polyfit(x,y,n)
where n is the degree of the polynomial you want to fit. You can then use polyval to evaluate your approximation at other values of x.
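If the built-ins were allowed, the whole job would collapse to a few lines; a sketch on the interval [0, 2*pi] (note that polyfit returns coefficients highest power first):
% Sketch (built-ins allowed): degree-5 least-squares fit to sin on [0, 2*pi]
x = linspace(0, 2*pi, 12);     % 12 sample points of your own choosing
y = sin(x);
b = polyfit(x, y, 5);          % least-squares coefficients, highest power first
polyval(b, pi/2)               % should be close to 1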
EDIT: Since you can't use polyfit, you can generate the Vandermonde matrix for the least-squares approximation directly (the code below assumes x starts as a row vector).
A = ones(length(x),1);   % first column of the Vandermonde matrix (x.^0)
x = x';                  % turn the row vector into a column
for i=1:n
    A = [A x.^i];        % append the column x.^i
end
then simply obtain the least squares parameters using
b = A\y;
The clumsy Vandermonde-generation loop above can clearly be optimised; I wrote it that way just to illustrate the concept. For better numerical stability you would also do better to use an orthogonal polynomial basis such as Chebyshev polynomials of the first kind. If you are not even allowed to use the matrix divide operator \, then you will need to code up your own solver, e.g. a QR factorisation (or some other numerically stable method); one hand-rolled possibility is sketched below.
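As a rough illustration of the hand-rolled route (not the QR approach itself, just the simpler normal-equations variant, which is less numerically stable), here is a sketch that avoids polyfit, \ and inv by solving (A'*A)*b = A'*y with hand-written Gaussian elimination:
% Sketch: degree-n least squares without polyfit, \ or inv.
% Solves the normal equations (A'*A)*b = A'*y by Gaussian elimination with
% partial pivoting; a QR factorisation would be more stable, same idea.
x = linspace(0, 2*pi, 12)';    % sample points (column vector)
y = sin(x);
n = 5;                         % polynomial degree

A = ones(length(x), 1);        % Vandermonde matrix [1, x, x.^2, ..., x.^n]
for i = 1:n
    A = [A, x.^i];
end

M = A' * A;                    % normal-equation matrix
v = A' * y;                    % normal-equation right-hand side
m = size(M, 1);

for k = 1:m-1                  % forward elimination with partial pivoting
    [~, p] = max(abs(M(k:m, k)));
    p = p + k - 1;
    M([k p], :) = M([p k], :);
    v([k p]) = v([p k]);
    for r = k+1:m
        f = M(r, k) / M(k, k);
        M(r, :) = M(r, :) - f * M(k, :);
        v(r) = v(r) - f * v(k);
    end
end

b = zeros(m, 1);               % back substitution
for k = m:-1:1
    b(k) = (v(k) - M(k, k+1:m) * b(k+1:m)) / M(k, k);
end

% b(1) + b(2)*t + ... + b(n+1)*t^n approximates sin(t); check at t = pi/2:
sum(b .* (pi/2).^((0:n)'))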

matlab interpolation

Starting from the plot of a curve, is it possible to obtain the parametric equation of that curve?
In particular, say x = {1 2 3 4 5 6 ...} are the x-axis values and y = {a b c d e f ...} the corresponding y-axis values, and I have plot(x,y).
Now, how can I obtain the equation that describes the plotted curve? Is it possible to display the parametric equation starting from the spline interpolation?
Thank you
If you want to display a polynomial fit function alongside your graph, the following example should help:
x=-3:.1:3;
y=4*x.^3-5*x.^2-7.*x+2+10*rand(1,61);
p=polyfit(x,y,3); %# third order polynomial fit, p=[a,b,c,d] of ax^3+bx^2+cx+d
yfit=polyval(p,x); %# evaluate the curve fit over x
plot(x,y,'.')
hold on
plot(x,yfit,'-g')
equation=sprintf('y=%2.2gx^3+%2.2gx^2+%2.2gx+%2.2g',p); %# format string for equation
equation=strrep(equation,'+-','-'); %# replace any redundant signs
text(-1,-80,equation) %# place equation string on graph
legend('Data','Fit','Location','northwest')
Last year, I wrote up a set of three blogs for Loren on the topic of modeling/interpolating a curve. They may cover some of your questions, although I never did find the time to add another three blogs to finish the topic to my satisfaction. Perhaps one day I will get that done.
The problem is to recognize that there are infinitely many curves that will interpolate a set of data points. A spline is a nice choice because it can be made well behaved. However, that spline has no simple "equation" to write down. Instead, it has many polynomial segments, pieced together to be well behaved.
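To make that concrete, a small sketch with arbitrary sample data, using the built-in spline and unmkpp, shows the piecewise-polynomial structure directly:
% Sketch: a cubic spline through a handful of points is a set of local cubics,
% not one global formula; unmkpp exposes the per-segment coefficients.
x = 1:6;
y = [3 1 4 1 5 9];             % arbitrary sample data

pp = spline(x, y);             % piecewise cubic interpolant
[breaks, coefs] = unmkpp(pp);  % one row of coefs per segment, in local coordinates

t = linspace(1, 6, 200);
plot(x, y, 'o', t, ppval(pp, t), '-')
disp(coefs)                    % 5 segments x 4 cubic coefficients each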
You're asking for the function/mapping between two data sets. If you know the physics involved, the function can be derived by modeling the system: write down the differential equations and solve them.
Left alone with just two data series, an input and an output with a 'black box' in between, you may approximate the relation with an arbitrary function. You might start with a polynomial function
y = a*x^2 + b*x + c
Given your input vector x and your output vector y, the parameters a, b, c must be determined by applying a fitting criterion such as least squares; a small sketch follows below.
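A minimal sketch of that idea with made-up data, using polyfit as the least-squares fitting step:
% Sketch: determine a, b, c of y = a*x^2 + b*x + c by least squares
x = 1:6;                            % example input vector
y = [2.1 5.2 10.3 17.0 26.2 37.1];  % example measured output
p = polyfit(x, y, 2);               % p = [a, b, c]
yhat = polyval(p, x);               % model prediction at the same x
plot(x, y, 'o', x, yhat, '-')       % compare data and fitted curve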
There is an example of Polynomial Curve Fitting in the MathWorks documentation.
The Curve Fitting Tool provides a flexible graphical user interface where you can interactively fit curves and surfaces to data and view plots (a programmatic sketch follows after this list). You can:
Create, plot, and compare multiple fits
Use linear or nonlinear regression, interpolation, local smoothing regression, or custom equations
View goodness-of-fit statistics, display confidence intervals and residuals, remove outliers and assess fits with validation data
Automatically generate code for fitting and plotting surfaces, or export fits to the workspace for further analysis
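If you have the Curve Fitting Toolbox, the same workflow can also be scripted; a minimal sketch with made-up data (fit expects column vectors, and cftool opens the interactive app):
% Sketch: programmatic use of the Curve Fitting Toolbox
x = (1:6)';                          % column vectors are required by fit
y = [2.1; 5.2; 10.3; 17.0; 26.2; 37.1];

f = fit(x, y, 'poly2');              % quadratic fit object
plot(f, x, y)                        % overlay fit and data
% cftool(x, y)                       % or open the Curve Fitting app interactively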