MATLAB fmincon: sum of squared differences

I'm a beginner at MATLAB coding, so any help would be appreciated.
I'm trying to minimize a sum-of-squared-differences problem, SUM((a-b)^2), over two variables. I've already set it up in Excel's Solver like this:
Goal= Sum[{i, 9}, ( Y[i]- (X[i]*m+b) )^2 ]
using nonlinear methods.
Here Y and X are arrays, and m and b are the variables we are trying to find by minimizing the sum. How would I go about doing the same thing in MATLAB?
Thanks.

Here is an example. I've set lower and upper bounds on the variables via fmincon's bound arguments.
x=0:10;
y=x*randi(10)-randi(10)+rand(size(x)); % Create data y
f=@(A) sum((y-(A(1)*x+A(2))).^2) % Test function that we wish to minimise
R=fmincon(f,[1 1],[],[],[],[],[0 0],[Inf Inf]) % Run the minimisation R(1)=m, R(2)=b
plot(x,y,x,R(1)*x+R(2)) % Plot the results
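Since the model is linear in m and b, the same fit can also be obtained without an iterative solver. A minimal sketch using the backslash operator (assuming the x and y from the example above and no bound constraints):
% Sketch: solve the linear least-squares problem [x(:) 1]*[m;b] ~= y(:) directly
coeffs = [x(:) ones(numel(x),1)] \ y(:);   % coeffs(1)=m, coeffs(2)=b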

Related

How to solve equations with complex coefficients using ode45 in MATLAB?

I am trying to solve two equations with complex coefficients using ode45, but I am getting the error message "Inputs must be floats, namely single or double."
X = sym(['[',sprintf('X(%d) ',1:2),']']);
Eqns=[-(X(1)*23788605396486326904946699391889*1i)/38685626227668133590597632 + (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632; (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632 + X(1)*(- 2500000 + (5223289665997855453060886952725538686654593059791*1i)/324518553658426726783156020576256)] ;
f=@(t,X)[Eqns];
[t,Xabc]=ode45(f,[0 300*10^-6],[0 1])
How can I fix this? Can somebody help me?
Per the MathWorks Support Team, the "ODE solvers in MATLAB 5 (R12) and later releases properly handle complex valued systems." So the complex numbers are not the issue.
The error "Inputs must be floats, namely single or double." stems from your definition of f using symbolic variables, which, unlike complex numbers, are not floats. The easiest way to get around this is to not use the Symbolic Toolbox at all; just make Eqns an anonymous function:
Eqns= @(t,X) [-(X(1)*23788605396486326904946699391889*1i)/38685626227668133590597632 + (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632; (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632 + X(1)*(- 2500000 + (5223289665997855453060886952725538686654593059791*1i)/324518553658426726783156020576256)] ;
[t,Xabc]=ode45(Eqns,[0 300*10^-6],[0 1]);
That being said, I'd like to point out that numerically integrating this system over 300 microseconds (assuming seconds, since no units are given) will take a long time, since your coefficient matrix has imaginary eigenvalues on the order of 1e11. The extremely short period of those oscillations will more than likely have to be resolved by MATLAB's adaptive step-size methods, and that takes a while over a time span several orders of magnitude longer than the period.
I would therefore suggest an analytical approach to this problem, unless it is a stepping stone to another problem that cannot be solved analytically.
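(As a quick check of the eigenvalue claim above, a short sketch reusing the coefficient matrix A constructed in the comparison script further down:)
% Quick check of the eigenvalue magnitudes (A as defined in the script below)
lambda = eig(A);
disp(max(abs(imag(lambda))))   % expected to be roughly 1e11, per the claim above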
Systems of ordinary differential equations of the form
X'(t) = A*X(t), X(0) = X0,
which are linear, homogeneous systems with a constant coefficient matrix, have the general solution
X(t) = exp_m(A*t)*X0,
where the m-subscripted exponential function exp_m is the matrix exponential.
Therefore, the analytical solution to the system can be written down exactly, provided the matrix exponential can be evaluated.
In MATLAB, the matrix exponential is calculated via the expm function.
The following code computes the analytical solution and compares it to the numerical one for a short time span:
% Set-up
A = [-23788605396486326904946699391889i/38685626227668133590597632,23788605396486326904946699391889i/38685626227668133590597632;...
-2500000+5223289665997855453060886952725538686654593059791i/324518553658426726783156020576256,23788605396486326904946699391889i/38685626227668133590597632];
Eqns = @(t,X) A*X;
X0 = [0;1];
% Numerical
options = odeset('RelTol',1E-8,'AbsTol',1E-8);
[t,Xabc]=ode45(Eqns,[0 1E-9],X0,options);
% Analytical
Xana = cell2mat(arrayfun(@(tk) expm(A*tk)*X0,t,'UniformOutput',false)')';
k = 1; % index of the state component to plot
% Plots
figure(1);
subplot(3,1,1)
plot(t,abs(Xana(:,k)),t,abs(Xabc(:,k)),'--');
title('Magnitude');
subplot(3,1,2)
plot(t,real(Xana(:,k)),t,real(Xabc(:,k)),'--');
title('Real');
ylabel('Values');
subplot(3,1,3)
plot(t,imag(Xana(:,k)),t,imag(Xabc(:,k)),'--');
title('Imaginary');
xlabel('Time');
In the comparison plots (magnitude, real part, and imaginary part over time), the output of ode45 matches the magnitude and real part of the solution very well, but the imaginary part is out of phase by exactly π.
However, since ode45's error estimator only looks at norms, the phase difference goes unnoticed, which may lead to problems depending on the application.
Note that while the matrix exponential solution is far more costly than ode45 per time-vector element, it produces the exact solution at any set of time points, however sparse. For long time spans, the matrix exponential can therefore be viewed as an improvement in some sense.
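For instance, a minimal sketch (reusing A and X0 from the script above) that evaluates the exact solution on a coarse grid over the full 300-microsecond span:
% Sketch: exact solution on a coarse, user-chosen time grid (A, X0 as above)
tCoarse = linspace(0,300e-6,50).';
Xcoarse = cell2mat(arrayfun(@(tk) expm(A*tk)*X0, tCoarse, 'UniformOutput', false)')';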

Curve Fitting for equation with two parameters

I have two arrays:
E= [6656400;
13322500;
19980900;
26625600;
33292900;
39942400;
46648900;
53290000]
and
J=[0.0000000021;
0.0000000047;
0.0000000128;
0.0000000201;
0.0000000659;
0.0000000748;
0.0000001143;
0.0000001397]
I want to find the appropriate curve fitting for the above data by applying this equation:
J=A0.*(298).^2.*exp(-(W-((((1.6e-19)^3)/(4*pi*2.3*8.854e-12))^0.5).*E.^0.5)./((1.38e-23).*298))
I want to use a starting value of W around 1e-19.
I have tried the Curve Fitting app, but it is not helping me solve this.
Then I selected some trial values, A0 = 1.2e9 and W = 2.243e-19, which gave better results. But I want to find the right values in code (not with the Curve Fitting app).
Can you help me please?
A quick (and potentially easy) solution method would be to pose the curve fit as a minimization problem.
Define a correlation function that takes the fit parameters as an argument:
% x(1) == A0; x(2) == W
Jfunc = @(x) x(1).*(298).^2.*exp(-(x(2)-((((1.6e-19)^3)/(4*pi*2.3*8.854e-12))^0.5).*E.^0.5)./((1.38e-23).*298));
Then an objective function to minimize. Since you have data J, we'll minimize the sum of squares of the difference between the data and the correlation:
Objective = @(x) sum((Jfunc(x) - J).^2);
And then attempt to minimize the objective using fminsearch:
x0 = [1.2E9;2.243E-19];
sol = fminsearch(Objective,x0);
I used the guesses you gave. For nonlinear solutions, a good first guess is often important for convergence.
If you have the Optimization Toolbox, you can also try lsqcurvefit or lsqnonlin (fminsearch is vanilla MATLAB).
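For example, a minimal lsqcurvefit sketch under the same assumptions (reusing E, J, and the starting guess x0 from above; requires the Optimization Toolbox):
% Sketch: the same fit with lsqcurvefit, reusing E, J, and x0 from above
model = @(x,E) x(1).*(298).^2.*exp(-(x(2)-((((1.6e-19)^3)/(4*pi*2.3*8.854e-12))^0.5).*E.^0.5)./((1.38e-23).*298));
xfit = lsqcurvefit(model, x0, E, J);   % xfit(1)=A0, xfit(2)=W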

Quantile regression with linprog in Matlab

I am trying to implement the quantile regression process with a simple setup in Matlab. This page contains a description of the quantile regression as a linear program, and displays the appropriate matrices and vectors. I've tried to implement it in Matlab, but I do not get the correct last element of the bhat vector. It should be around 1 but I get a very low value (<1e-10). Using another algorithm I have, I get a value of 1.0675. Where did I go wrong? I'm guessing A, b or f are wrong.
I have tried playing with optimset, but I don't think that is the problem. I think I've made a conversion mistake when going from math to code, I just can't see where.
% set seed
rng(1);
% set parameters
n=30;
tau=0.5;
% create regressor and regressand
x=rand(n,1);
y=x+rand(n,1)/10;
% number of regressors (1)
m=size(x,2);
% vectors and matrices for linprog
f=[tau*ones(n,1);(1-tau)*ones(n,1);zeros(m,1)];
A=[eye(n),-eye(n),x;
-eye(n),eye(n),-x;
-eye(n),zeros(n),zeros(n,m);
zeros(n),-eye(n),zeros(n,m)];
b=[y;
y
zeros(n,1);
zeros(n,1)];
% get solution bhat=[u,v,beta] and exitflag (1=success)
[bhat,~,exflag]=linprog(f',A,b);
I solved my problem by using the formulation (in the PDF) just above the one I tried to implement in the question. I've put it in a MATLAB function, in case you're interested in the code.
function [ bhat ] = qregressMatlab( y, x, tau )
% bhat are the estimates
% y is a vector of outcomes
% x is a matrix with columns of explanatory variables
% tau is a scalar for choosing the conditional quantile to be estimated
n=size(x,1);
m=size(x,2);
% vectors and matrices for linprog
f=[tau*ones(n,1);(1-tau)*ones(n,1);zeros(m,1)];
Aeq=[eye(n),-eye(n),x];
beq=y;
lb=[zeros(n,1);zeros(n,1);-inf*ones(m,1)];
ub=inf*ones(m+2*n,1);
% Solve the linear programme
[bhat,~,~]=linprog(f,[],[],Aeq,beq,lb,ub);
% Pick out betas from (u,v,beta)-vector.
bhat=bhat(end-m+1:end);
end
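A quick usage sketch, reusing the data generated in the question (x, y, and tau as defined there):
% Usage sketch with the data from the question
rng(1);
n = 30; tau = 0.5;
x = rand(n,1);
y = x + rand(n,1)/10;
bhat = qregressMatlab(y, x, tau)   % per the question, this should come out around 1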

Find approximation of sine using least squares

I am doing a project where I find an approximation of the sine function using the least squares method. I can use 12 values of my own choice. Since I couldn't figure out how to solve it directly, I thought of using the Taylor series for sine and then solving it as a polynomial of order 5. Here is my code:
%% Find the sine of the 12 known values
x=[0,pi/8,pi/4,7*pi/2,3*pi/4,pi,4*pi/11,3*pi/2,2*pi,5*pi/4,3*pi/8,12*pi/20];
y=zeros(12,1);
for i=1:12
y=sin(x);
end
n=12;
j=5;
%% Find the sums to populate the matrix A and matrix B
s1=sum(x);s2=sum(x.^2);
s3=sum(x.^3);s4=sum(x.^4);
s5=sum(x.^5);s6=sum(x.^6);
s7=sum(x.^7);s8=sum(x.^8);
s9=sum(x.^9);s10=sum(x.^10);
sy=sum(y);
sxy=sum(x.*y);
sxy2=sum( (x.^2).*y);
sxy3=sum( (x.^3).*y);
sxy4=sum( (x.^4).*y);
sxy5=sum( (x.^5).*y);
A=[n,s1,s2,s3,s4,s5;s1,s2,s3,s4,s5,s6;s2,s3,s4,s5,s6,s7;
s3,s4,s5,s6,s7,s8;s4,s5,s6,s7,s8,s9;s5,s6,s7,s8,s9,s10];
B=[sy;sxy;sxy2;sxy3;sxy4;sxy5];
Then in MATLAB I get this result:
>> a=A^-1*B
a =
-0.0248
1.2203
-0.2351
-0.1408
0.0364
-0.0021
However, when I try to substitute the values of a into the Taylor series and evaluate, e.g., t = pi/2, I get wrong results:
>> t=pi/2;
fun=t-t^3*a(4)+a(6)*t^5
fun =
2.0967
Am I doing something wrong when I substitute the values of a into the Taylor series, or is my initial idea flawed?
Note: I can't use any built-in function.
If you need a least-squares approximation, simply decide on a fixed interval that you want to approximate on and generate some x abscissae on that interval (equally spaced using linspace, or non-uniformly spaced as in your example). Then evaluate your sine function at each point, so that you have
y = sin(x)
Then simply use the polyfit function (documented here) to obtain least squares parameters
b = polyfit(x,y,n)
where n is the degree of the polynomial you want to approximate. You can then use polyval (documented here) to obtain the values of your approximation at other values of x.
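For example, a minimal sketch of that approach on [0, 2*pi] with a degree-5 fit (interval and point count chosen here just for illustration):
% Sketch: degree-5 least-squares polynomial fit to sine on [0, 2*pi]
x = linspace(0, 2*pi, 12);     % 12 abscissae of our own choice
y = sin(x);
b = polyfit(x, y, 5);          % least-squares polynomial coefficients
yhat = polyval(b, pi/2);       % evaluate the approximation at pi/2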
EDIT: As you can't use polyfit, you can generate the Vandermonde matrix for the least-squares approximation directly (the code below assumes x is a row vector).
A = ones(length(x),1);
x = x';
for i=1:n
A = [A x.^i];
end
then simply obtain the least squares parameters using
b = A\y;
You can clearly optimise the clumsy Vandermonde generation loop above; I have written it that way just to illustrate the concept. For better numerical stability you would also do well to use a nice orthogonal polynomial system, like Chebyshev polynomials of the first kind. If you are not even allowed to use the matrix divide \ function, then you will need to code up your own implementation of a QR factorisation and solve the system that way (or some other numerically stable method).
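As a rough sketch of that QR route (MATLAB's built-in qr is used here only to show the structure; under the stated constraints you would replace it with your own Householder or Gram-Schmidt factorisation, and the final \ with a back-substitution loop):
% Sketch: solve the least-squares system via QR (built-ins used for illustration only)
[Q, R] = qr(A, 0);       % economy-size QR of the Vandermonde matrix A
b = R \ (Q' * y(:));     % solve R*b = Q'*y, i.e. a back-substitution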

Plot Integral with variable limits in Matlab

I'm having some trouble trying to solve and plot an integral in MATLAB. In fact, I know that if I solve one, I'll be able to solve all the integrals that I need now.
I need to plot on the x axis the value of a variable "d", and on the y axis the value of the integral of a normalized Gaussian function from -inf to ((40*log10(d)-112)/36), and I can't find a way to do it correctly. d is between 0 and 1600.
Can anybody here please help me?
In MATLAB you can use the integral function to evaluate integrals:
q = integral(fun,xmin,xmax)
fun needs to be a function handle (an anonymous function, for example), like these two:
square = @(x) x.^2;
plusone = @(x) x+1;
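Applied to the question, a minimal sketch might look like this (assuming a standard normal density; the upper-limit formula is taken from the question, and the variable names here are just for illustration):
% Sketch: integral of a standard normal pdf from -inf to (40*log10(d)-112)/36, plotted vs d
gauss = @(x) exp(-x.^2/2)/sqrt(2*pi);          % normalized Gaussian (standard normal pdf)
d = linspace(1, 1600, 200);                    % start at 1 so log10(d) is finite
upper = (40*log10(d) - 112)/36;
q = arrayfun(@(u) integral(gauss, -Inf, u), upper);
plot(d, q); xlabel('d'); ylabel('integral value');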