In my experiment I need to approximate (fit) a measurement y = f_m(x) with n linear segments. The value of n can be 1, 2, 3, 4, 5, ... and is probably less than 10. Ideally the errors from the different cases can be compared to find the n with the smallest error.
I've found one example using MATLAB (link):
% Random data...
xdata = linspace(-2,3,101);
ydata = log(abs(10./(10+1i*10.^xdata))) + 0.5*randn(size(xdata));
plot(xdata,ydata)
F = @(B,xdata) min(B(1),B(2)+B(3)*xdata); %Form of the equation
IC = [max(ydata) max(ydata) 0]; %Initial guess
B = lsqcurvefit( F,IC,xdata,ydata,[min(ydata) -inf -inf],[max(xdata) inf 0]);
hold all;
plot(xdata,F(B,xdata));
a = (B(1) - B(2)) / B(3)
cte = B(1)
c = B(2)
d = B(3)
This is similar to what I'm looking for in the case of 2 segments. I tried to modify this function to suit my needs by changing the function handle:
F = @(B,xdata) min(B(1),B(2)+B(3)*xdata); %Form of the equation
to
F = @(B,xdata) min(B(1)+B(2)*xdata,B(3)+B(4)*xdata);
but my modification seems to produce 2 segments that lie on the same line.
I don't know much about MATLAB function handles, and especially not about the "min" function used here. Also, how should I extend this example to several linear segments?
Thank you in advance!!
Edit01:
Thanks!! Your answer made my code run as desired. But may I ask a little more about this question? As mentioned above, I originally wanted to extend the approximation to several linear segments, so I tried:
F = @(B,xdata) min(B(1)+B(2)*xdata, B(3)+B(4)*xdata, B(5)+B(6)*xdata); %Form of the equation
IC = [max(ydata) max(ydata) max(ydata) max(ydata) max(ydata) 0]; %Initial guess
B = lsqcurvefit( F,IC,xdata,ydata,[min(ydata) -inf -inf -inf -inf -inf],[max(xdata) inf inf inf inf 0]);
but MATLAB responds with an error at the initial evaluation:
Failure in initial user-supplied objective function evaluation
Can you help me briefly with the I.C. here? And what is the "min" function in the function handle doing?
I get the following error when running your code:
??? Error using ==> lsqncommon at 101
LSQCURVEFIT cannot continue because user supplied objective function failed with the following error:
Attempted to access B(4); index out of bounds because numel(B)=3.
Hence, this means that there is nothing in B(4). I would modify IC, lb and ub to have 4 elements.
So try putting these two lines in your code:
IC = [max(ydata) max(ydata) max(ydata) 0]; %Initial guess
B = lsqcurvefit( F,IC,xdata,ydata,[min(ydata) -inf -inf -inf],[max(xdata) inf inf 0]);
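As for the "min" in the function handle: the pointwise minimum of several straight lines is itself a concave piecewise-linear function, so fitting min(line1, line2) gives a two-segment concave fit. To extend this to three segments, note that MATLAB's min only compares two arrays at a time, which is why the three-argument call in the edit fails; one option is to nest the calls. A minimal sketch, assuming the data stay concave, with illustrative initial guesses and bounds:
% nest min(), since min() only accepts two arrays at a time
F  = @(B,xdata) min(min(B(1)+B(2)*xdata, B(3)+B(4)*xdata), B(5)+B(6)*xdata);
IC = [max(ydata) 0 max(ydata) -1 max(ydata) -2]; % rough guess; distinct slopes help avoid segments collapsing onto one line
lb = -inf(1,6);                                  % lower bounds
ub =  inf(1,6);                                  % upper bounds
B  = lsqcurvefit(F,IC,xdata,ydata,lb,ub);
The same pattern extends to n segments by nesting n-1 calls to min (for concave data) or max (for convex data).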
Assume we want to determine the coefficients of a polynomial that approximates the tangent function between 0 and 1, as follows:
- A is an m×n Vandermonde matrix. Its entries are populated using m values between 0 and 1 (given as input).
- The corresponding vector b is calculated using the tangent function.
- x is calculated by typing x = A\b in MATLAB.
Now, using MATLAB, the computed x is substituted into Ax. The result is plotted and it is pretty close to the tangent function. But if I use the polyval function with degree n−1 (in MATLAB) to calculate b, the resulting plot is significantly different from the original b. I cannot understand the reason for such a significant difference between the results of these two methods.
Here is the code:
clear all;
format long;
m = 60;
n = 11;
t = linspace(0,1,m);
A= fliplr(vander(t));
A=A(:,1:n);
b=tan(t');
x= A\b;
y=polyval(x, t);
plot(t,y,'r')
y2= A*x
hold on;
plot(t,y2,'g.');
hold on;
plot(t,tan(t),'--b');
Any insight would be appreciated. Thank you.
After A= fliplr(vander(t)) the A matrix is equal to
1 t(1) t(1)^2 ...
1 t(2) t(2)^2 ...
...
1 t(m) t(m)^2 ...
This ordering does not match what polyval expects: polyval takes the coefficients in descending powers. You don't need to flip the columns of A:
A= vander(t);
A= A(:,end-n+1:end);
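Putting it together, a minimal corrected version of the script (same m, n and data as above):
m = 60; n = 11;
t = linspace(0,1,m);
A = vander(t);                 % columns are descending powers of t
A = A(:,end-n+1:end);          % keep the last n columns: t.^(n-1) ... t.^0
b = tan(t');
x = A\b;                       % coefficients now come out in descending order, as polyval expects
plot(t, polyval(x,t), 'r', t, A*x, 'g.', t, tan(t), '--b');
With this ordering, polyval(x,t) and A*x produce the same curve.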
I just tried to solve a simple linear programming problem using MATLAB. It is pretty easy:
Find x that minimizes f(x) = –5x1 – 4x2 –6x3, subject to
x1 – x2 + x3 ≤ 20
3x1 + 2x2 + 4x3 ≤ 42
3x1 + 2x2 ≤ 30
0 ≤ x1, 0 ≤ x2, 0 ≤ x3.
%mfile: First, enter the coefficient
clc;
clear all;
close all;
f = [-5; -4; -6];
A = [1 -1 1
3 2 4
3 2 0];
b = [20; 42; 30];
lb=zeros(3,1);
x = linprog(f,A,b,[],[],lb);
When I run this program it doesn't return the x values; it returns this error:
Error in linprog1 (line 10)
x = linprog(f,A,b,[],[],lb);
What's the problem? My MATLAB has the Optimization Toolbox, so why doesn't it know linprog? What should I do now?
Thank you all
-Maryam
After some investigation (see comments to the original question for details) it transpired that as well as the MATLAB Optimization Toolbox's linprog, the questioner also had on her computer a copy of something like this linprog.m -- which I suspect is an ancestor of what's now in the Optimization Toolbox, but takes its arguments in a different order.
Either the "old" linprog.m or the one in the OT is capable of solving the questioner's linear program. So the options are:
Use the "old" one, adjusting the code appropriately.
Remove (or move elsewhere, or rename) the "old" one and use the one in the Optimization Toolbox.
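A quick way to check which linprog MATLAB is actually picking up, and whether a local copy is shadowing the toolbox version, is something like:
which linprog -all   % lists every linprog.m on the path; MATLAB calls the first one shown
If a local linprog.m appears above the Optimization Toolbox one, renaming or removing it (option 2 above) should make the original call x = linprog(f,A,b,[],[],lb) work as written.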
I have a problem similar to this question:
How to solve an overdetermined set of equations using non-linear least squares in Matlab
I have 9 equations with 2 unknowns x and y, as follows:
A11=(sin(x))*(cos(y))*(sin(x))*(cos(y));
A12=(sin(x))*(cos(y))*(sin(x))*(sin(y));
A13=(sin(x))*(cos(y))*(cos(x));
A21=(sin(x))*(sin(y))*(sin(x))*(cos(y));
A22=(sin(x))*(sin(y))*(sin(x))*(sin(y));
A23=(sin(x))*(sin(y))*(cos(x));
A31=(sin(x))*(cos(y))*(cos(x));
A32=(sin(x))*(sin(y))*(cos(x));
A33=(cos(x))*(cos(x));
I know the values for each A_ij and want to calculate x and y.
I tried to do this using lsqcurvefit like this:
ydata=[0, 0, 0, 0, 1, 0, 0, 0, 0]; % this is one set of A_ij
lb=[0,0];
ub=[pi,2*pi];
x0=[pi/2;pi];
p=zeros(2,1);
p = lsqcurvefit( myfun,x0,xdata,ydata,lb,ub)
I don't have any values for xdata, so is there any way to still make it run like this?
I defined the function myfun as:
function r = myfun(p)
x=p(1);
y=p(2);
% the 9 equations A11 through A33 from above go here
r=[A11, A12, A13, A21, A22, A23, A31, A32, A33];
end
Now, whenever I run lsqcurvefit I get the error "Not enough input arguments." The error occurs at the line x=p(1);
I don't know what is missing, or rather, I don't know how to handle the fact that I don't have any xdata input.
I hope somebody can help me getting this to work.
Thank you very much in advance.
Fabian
After some tinkering I got it to work with lsqcurvefit. It was basically a problem of how I passed xdata and ydata to the function handle.
This is how my code looked in the end:
ydata = [0.12; 0.28; 0.16; 0.66; 0.38; 0.22];
xdata = [0; 0; 0; 0; 0; 0]; % just as a dummy, lsqcurvefit wants the input
x0=[0,0];
lb=[0,0];
ub=[180,180];
[x,resnorm,residual,exitflag,output] = lsqcurvefit(@fun,x0,xdata,ydata,lb,ub);
% only x is needed, but I used the other information for other parts of my code
% as a separate function.m file
function F = fun(x,xdata) % again, xdata is just a dummy and not used
p1=cosd(x(1));
p2=sind(x(1))*sind(x(2));
p3=sind(x(1))*cosd(x(2));
% I didn't need all 9 entries, since the 3x3 matrix is symmetric
F =[p1*p1;...
p1*p2;...
p1*p3;...
p2*p2;...
p2*p3;...
p3*p3];
end
I hope somebody finds this information useful.
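As an aside, lsqnonlin avoids the dummy xdata entirely, because it minimizes a residual vector directly rather than fitting a curve to (xdata, ydata) pairs. A minimal sketch reusing the same fun and data as above:
% build a residual function; [] stands in for the unused xdata argument
resfun = @(x) fun(x,[]) - ydata;
[x,resnorm] = lsqnonlin(resfun,x0,lb,ub);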
I have a set of ODEs written in matrix form as $X' = AX$; I also have a desired value of the states $X_{des}$. $X$ is a five-dimensional vector. I want to stop the integration after all the states reach their desired values (or at least get within $10^{-3}$ of them). How do I use an event function in MATLAB to do this? (All the help I have seen is about 1-dimensional states.)
PS: I know for sure that all the states approach their desired values after a long time. I just want to stop the integration once they are within $10^{-3}$ of the desired values.
First, I presume that you're aware that you can use the matrix exponential (expm in Matlab) to solve your system of linear differential equations directly.
There are many ways to accomplish what you're trying to do. They all depend a bit on your system, how it behaves, and the particular event you want to capture. Here's a small example for a 2-by-2 system of linear differential equations:
function multipleeventsdemo
A = [-1 1;1 -2]; % Example A matrix
tspan = [0 50]; % Initial and final time
x0 = [1;1]; % Initial conditions
f = @(t,y)A*y; % ODE function
thresh = 0; % Threshold value
tol = 1e-3; % Tolerance on threshold
opts = odeset('Events',@(t,y)events(t,y,thresh,tol)); % Create events function
[t,y] = ode45(f,tspan,x0,opts); % Integrate with options
figure;
plot(t,y);
function [value,isterminal,direction] = events(t,y,thresh,tol)
value = y-thresh-tol;
isterminal = all(y-thresh-tol<=0)+zeros(size(y)); % Change termination condition
direction = -1;
Integration is stopped when both states are within tol of thresh. This is accomplished by adjusting the isterminal output of the events function. Note that separate tolerance and threshold variables aren't really necessary – you simply need to define the crossing value.
If your system oscillates as it approaches its steady state (i.e., if A has complex eigenvalues), then you'll need to do more work. But your question doesn't indicate this. And again, numerical integration may not be the easiest/best way to solve your problem with such a system. Here is how you could use expm in conjunction with a bit of symbolic math:
A = [-1 1;1 -2];
x0 = [1;1];
tol = 1e-3;
syms t_sym
y = simplify(expm(A*t_sym)*x0) % Y as a function of t
t0 = NaN(1,length(x0));
for i = 1:length(x0)
sol = double(solve(y(i)==tol,t_sym)) % Solve for t when y(i) equal to tol
if ~isempty(sol) % Could be no solution, then NaN
t0(i) = max(sol); % Or more than one solution, take largest
end
end
f = matlabFunction(y); % Create vectorized function of t
t_vec = linspace(0,max(t0),1e2); % Time vector
figure;
plot(t_vec,f(t_vec));
This will only work for fairly small A, however, because of the symbolic math. Numerical approaches using expm are also possible and likely more scalable.
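For reference, here is a purely numerical sketch of the same idea using expm on a time grid (no symbolic math), under the assumption that the states decay to zero, as they do for this A:
A = [-1 1;1 -2];
x0 = [1;1];
tol = 1e-3;
t_vec = linspace(0,50,1000);             % time grid
Y = zeros(numel(x0),numel(t_vec));
for k = 1:numel(t_vec)
    Y(:,k) = expm(A*t_vec(k))*x0;        % exact solution at each grid point
end
kStop = find(all(abs(Y) <= tol,1),1);    % first time index where every state is within tol of zero
figure;
plot(t_vec(1:kStop),Y(:,1:kStop));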
I am using Matlab to find the spectral radius of the Jacobi iteration matrix where A=[4 2 1;1 3 1;1 1 4].
I can't seem to input the correct commands to get the size of the error after 5 iterations. Can someone help me?
Here are a list of commands that I put into Matlab so far:
A=[4 2 1;1 3 1;1 1 4]
A =
4 2 1
1 3 1
1 1 4
D=diagonal(diagonal(A));L=(A,-1);U=(A,1);
b=([3 -1 4])
x0j=zeros([0 0 0]);
x=D\(-(U+L)*x0j+b);r=b-A*x %Jacobi iteration.
------------------------------------------------------------------------------
Error using *
Inputs must be 2-D, or at least one input must be scalar.
To compute elementwise TIMES, use TIMES (.*) instead.
The spectral radius of a matrix is the maximum of the modulus of its eigenvalues. It can be simply computed using max(abs(eig(·))).
However, as others have noticed, your whole code seems pretty mixed up and not actually valid Matlab code, so your problem is not really to compute the spectral radius, is it? The algorithm is very straightforward and easy to implement:
% diagonal part of A and rest
D = diag(diag(A));
R = A - D;
% iteration matrix and offset
T = - inv(D) * R;
C = inv(D) * b;
% spectral radius condition
rho = max(abs(eig(T)));
if rho >= 1
error('no convergence')
end
% initial guess
x = randn(size(b));
% iteration
while norm(A * x - b) > 1e-15
x = T * x + C;
end
Note that I used inv(D) to directly follow the description in Wikipedia, but the inverse of a diagonal matrix can be easily computed using diag(1 ./ diag(D)).
I don't really see why one would need to separate R into an upper and lower part. I suppose it has to do with numerical efficiency, but then, Matlab is a very efficient high-level language for matrix computations already. So actually there is no need to implement the Jacobi algorithm in it explicitly when you can simply write A \ b – except for educational purposes I guess.
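Finally, since the original question asked for the size of the error after 5 iterations: a minimal sketch building on the code above, assuming b is the column vector [3; -1; 4] and the iteration starts from the zero vector:
A = [4 2 1;1 3 1;1 1 4];
b = [3; -1; 4];
D = diag(diag(A));
T = -inv(D) * (A - D);                % Jacobi iteration matrix
C = inv(D) * b;
x = zeros(3,1);                       % initial guess x0
for k = 1:5
    x = T*x + C;                      % one Jacobi step
end
err = norm(b - A*x)                   % residual norm after 5 iterations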