Matlab code to minimize multivariate function

I have a function of two variables in Matlab and want to minimize the function over those two variables.
I have three datasets, ds1, ds2, and ds3. I define new variables that are indicator functions of a 2-element vector x and of two variables, varA and varB, that appear in all of the datasets.
newvar1 = @(x)double(((ds1.varA + 1000).^0.5)/3 - x(1) - x(2)*(ds1.varB).^2 > (ds1.varA.^0.5)/3);
newvar2 = @(x)double(((ds2.varA + 1000).^0.5)/3 - x(1) - x(2)*(ds2.varB).^2 > (ds2.varA.^0.5)/3);
newvar3 = @(x)double(((ds3.varA + 1000).^0.5)/3 - x(1) - x(2)*(ds3.varB).^2 > (ds3.varA.^0.5)/3);
I want to minimize the following function with respect to x. It is the sum of squared differences between the mean of each new variable and the mean of the existing variable varC in each dataset:
Fcn = @(x) ((mean(ds1.varC, 1) - mean(newvar1(x), 1))^2 + (mean(ds2.varC, 1) - mean(newvar2(x), 1))^2 + (mean(ds3.varC, 1) - mean(newvar3(x), 1))^2);
I use fminsearch to minimize the above function
fminsearch(Fcn, [0.5 3])
but it just returns the starting values, no matter what starting point I use. Is there something I'm doing that's preventing it from minimizing over all possible values of x?
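For reference, a self-contained sketch of the same setup with synthetic placeholder tables (the data, sizes, and seed are made up); the comments also make one property of the objective explicit, namely that each newvar is an indicator, so the objective is piecewise constant in x and fminsearch's simplex can stall on it:
rng(0);                                            % reproducible synthetic data
ds1 = table(rand(100,1)*50, rand(100,1), double(rand(100,1) > 0.5), ...
            'VariableNames', {'varA','varB','varC'});
ds2 = ds1; ds3 = ds1;                              % placeholders for the other datasets

newvar1 = @(x)double(((ds1.varA + 1000).^0.5)/3 - x(1) - x(2)*(ds1.varB).^2 > (ds1.varA.^0.5)/3);
newvar2 = @(x)double(((ds2.varA + 1000).^0.5)/3 - x(1) - x(2)*(ds2.varB).^2 > (ds2.varA.^0.5)/3);
newvar3 = @(x)double(((ds3.varA + 1000).^0.5)/3 - x(1) - x(2)*(ds3.varB).^2 > (ds3.varA.^0.5)/3);

% Each newvar returns 0/1 values, so Fcn is a step function of x: small simplex
% moves may not change its value at all, and fminsearch can stop at the start point.
Fcn = @(x) (mean(ds1.varC) - mean(newvar1(x)))^2 + ...
           (mean(ds2.varC) - mean(newvar2(x)))^2 + ...
           (mean(ds3.varC) - mean(newvar3(x)))^2;
xhat = fminsearch(Fcn, [0.5 3])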

Related

Matlab gives no result when I use the integral function

When trying to calculate the integral of two variables, I got nothing!
Here is the code:
syms x;
a=0.4;
theta=(9 - 4*x*(5*x - 141/25))^(1/2)/2 - 3/2;
theta_prime=-(40*x - 564/25)/(4*(9 - 4*x*(5*x - 141/25))^(1/2));
g=(1/theta)*theta_prime
s=(1+a*(theta-1))*(g)^2
sgen = int(s,x,0.1,1) %x=0.1:0.1: 1
What was my mistake?
It is supposed to be a single value, such as 4.86.
Please advise.
sgen is a symbolic object representing the integral of your function s. You can cast it to double to obtain a numerical value for your integral:
syms x;
a=0.4;
theta=(9 - 4*x*(5*x - 141/25))^(1/2)/2 - 3/2;
theta_prime=-(40*x - 564/25)/(4*(9 - 4*x*(5*x - 141/25))^(1/2));
g=(1/theta)*theta_prime;
s=(1+a*(theta-1))*(g)^2;
sgen = double(int(s,x,0.1,1)) % returns 4.8694
But if you're not interested in the symbolic equation for the integral, there really is no point in using the symbolic toolbox for this. It is much faster to compute the integral numerically. One way to do so is to create a function s(x) and then use integral to find the numerical integration. Do note that s(x) must be vectorized on the x variable for this to work (integral will call it with a vector of x values to save time). For vectorized computation, it is necessary to add dots in front of some of the *, / and ^ operators. This is the result:
a = 0.4;
theta = @(x) (9 - 4*x.*(5*x - 141/25)).^(1/2)/2 - 3/2;
theta_prime = @(x) -(40*x - 564/25)./(4*(9 - 4*x.*(5*x - 141/25)).^(1/2));
g = @(x) (1./theta(x)).*theta_prime(x);
s = @(x) (1+a*(theta(x)-1)).*g(x).^2;
sgen = integral(s,0.1,1.0) % returns 4.8694

Taylor series for (exp(x) - exp(-x))/(2*x)

I've been asked to write a function that calculates the Taylor series for (exp(x) - exp(-x))/(2*x) until the absolute error is smaller than the eps of the machine.
function k = tayser(xo)
f = @(x) (exp(x) - exp(-x))/(2*x);
abserror = 1;
sum = 1;
n=2;
while abserror > eps
    sum = sum + (xo^n)/(factorial(n+1));
    n=n+2;
    abserror = abs(sum-f(xo));
    disp(abserror);
end
k=sum;
My issue is that abserror never goes below the machine eps, which results in an infinite loop.
The problem is the expression you're using. For small x, exp(x) and exp(-x) are approximately equal, so exp(x) - exp(-x) is a difference of nearly equal numbers and suffers from catastrophic cancellation. The computed value of f(xo) is therefore off by far more than eps, so the accurately converging series sum can never agree with it to within eps.
Rewriting the expression as
f = @(x) sinh(x)/x;
will work, because it's more stable for these small values.
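For completeness, here is a sketch of the corrected function with only f swapped for the stable form (renaming sum to s is just to avoid shadowing the built-in sum function):
function k = tayser(xo)
% Sums the Taylor series of sinh(x)/x = 1 + x^2/3! + x^4/5! + ...
% until it agrees with the stable reference expression to within eps.
f = @(x) sinh(x)/x;
abserror = 1;
s = 1;
n = 2;
while abserror > eps
    s = s + (xo^n)/factorial(n+1);
    n = n + 2;
    abserror = abs(s - f(xo));
end
k = s;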
You can also see this by plotting both functions:
x = -1e-14:1e-18:1e-14;
plot(x,(exp(x) - exp(-x))./(2*x),x,sinh(x)./x)
legend('(exp(x) - exp(-x))/(2*x)','sinh(x)/x')
gives a plot in which sinh(x)/x is a smooth line at 1, while the exp-based expression visibly jumps around it because of the cancellation.

How can I get all solutions to this equation in MATLAB?

I would like to solve the following equation: tan(x) = 1/x
What I did:
syms x
eq = tan(x) == 1/x;
sol = solve(eq,x)
But this gives me only one numerical approximation of the solution. After that I read about the following:
[sol, params, conds] = solve(eq, x, 'ReturnConditions', true)
But this tells me that it can't find an explicit solution.
How can I find numerical solutions to this equation within some given range?
I've never liked using solvers "blindly", that is, without some sort of decent initial-value selection scheme. In my experience, the values you find when doing things blindly lack context as well: you'll often miss solutions, or mistake a point where the solver exploded for a solution.
For this particular case, it is important to realize that fzero uses numerical derivatives to find increasingly better approximations. But derivatives of f(x) = x · tan(x) - 1 get increasingly difficult to compute accurately for increasing x: the larger x becomes, the more the branches of f(x) resemble vertical lines, and fzero will simply explode. Therefore it is imperative to get an estimate as close to the solution as possible before even entering fzero.
So, here's a way to get good initial values.
Consider the function
f(x) = x · tan(x) - 1
Knowing that tan(x) has Taylor expansion:
tan(x) ≈ x + (1/3)·x³ + (2/15)·x⁵ + (7/315)·x⁷ + ...
we can use that to approximate the function f(x). Truncating after the second term, we can write:
f(x) ≈ x · (x + (1/3)·x³) - 1
Now, key to realize is that tan(x) repeats with period π. Therefore, it is most useful to consider the family of functions:
fₙ(x) ≈ x · ( (x - n·π) + (1/3)·(x - n·π)³) - 1
Evaluating this for a couple of multiples and collecting terms gives the following generalization:
f₀(x) = x⁴/3 - 0π·x³ + ( 0π² + 1)x² - (0π + (0π³)/3)·x - 1
f₁(x) = x⁴/3 - 1π·x³ + ( 1π² + 1)x² - (1π + (1π³)/3)·x - 1
f₂(x) = x⁴/3 - 2π·x³ + ( 4π² + 1)x² - (2π + (8π³)/3)·x - 1
f₃(x) = x⁴/3 - 3π·x³ + ( 9π² + 1)x² - (3π + (27π³)/3)·x - 1
f₄(x) = x⁴/3 - 4π·x³ + (16π² + 1)x² - (4π + (64π³)/3)·x - 1
⋮
fₙ(x) = x⁴/3 - nπ·x³ + (n²π² + 1)x² - (nπ + (n³π³)/3)·x - 1
Implementing all this in a simple MATLAB test:
% Replace this with the whole number of pi's you want to
% use as offset
n = 5;
% The coefficients of the approximating polynomial for this offset
C = @(npi) [1/3
            -npi
            npi^2 + 1
            -npi - npi^3/3
            -1];
% Find the real, positive polynomial roots
R = roots(C(n*pi));
R = R(imag(R)==0);
R = R(R > 0);
% And use these as initial values for fzero()
x_npi = fzero(@(x) x.*tan(x) - 1, R)
In a loop, this can produce the following table:
%   n·π    Estimate (polynomial)    Solution (fzero)
    0·π    0.889543617524132        0.860333589019380
    1·π    3.425836967935954        3.425618459481728
    2·π    6.437309348195653        6.437298179171947
    3·π    9.529336042900365        9.529334405361963
    4·π    12.645287627956868       12.645287223856643
    5·π    15.771285009691695       15.771284874815882
    6·π    18.902410011613000       18.902409956860023
    7·π    22.036496753426441       22.036496727938566
    8·π    25.172446339768143       25.172446326646664
    9·π    28.309642861751708       28.309642854452012
   10·π    31.447714641852869       31.447714637546234
   11·π    34.586424217960058       34.586424215288922
As you can see, the approximant is practically equal to the solution.
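For reference, a sketch of such a loop, assuming the per-n snippet above is simply repeated for n = 0, 1, ..., 11 (the variable names are illustrative):
C = @(npi) [1/3; -npi; npi^2 + 1; -npi - npi^3/3; -1];
f = @(x) x.*tan(x) - 1;
N = 11;                                   % number of whole periods of pi to scan
tbl = zeros(N+1, 2);                      % [polynomial estimate, fzero solution]
for n = 0:N
    R = roots(C(n*pi));
    R = R(imag(R) == 0);                  % real roots only
    R = R(R > 0);                         % positive roots only
    [~, i] = min(abs(R - n*pi));          % pick the root closest to the current offset
    tbl(n+1, :) = [R(i), fzero(f, R(i))];
end
disp(tbl)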
To find a numerical solution to a function within some range, you can use fzero like this:
fun = @(x)x*tan(x)-1; % Multiplied by x so fzero has no issue evaluating it at x=0.
range = [0 pi/2];
sol = fzero(fun,range);
The above would return just one solution (0.8603). If you want additional solutions, you will have to call fzero more times. This can be done, for example, in a loop:
fun = @(x)tan(x)-1/x;
RANGE_START = 0;
RANGE_END = 3*pi;
RANGE_STEP = pi/2;
intervals = repelem(RANGE_START:RANGE_STEP:RANGE_END,2);
intervals = reshape(intervals(2:end-1),2,[]).';
sol = NaN(size(intervals,1),1);
for ind1 = 1:numel(sol)
sol(ind1) = fzero(fun, mean(intervals(ind1,:)));
end
sol = sol(~isnan(sol)); % In case you specified more intervals than solutions.
Which gives:
[0.86033358901938;
1.57079632679490; % Wrong
3.42561845948173;
4.71238898038469; % Wrong
6.43729817917195;
7.85398163397449] % Wrong
Note that:
The function is symmetric, and so are its roots. This means you can solve on just the positive interval (for example) and get the negative roots "for free".
Every other entry in sol is wrong because this is where we have asymptotic discontinuities (tan transitions from +Inf to -Inf), which fzero mistakenly recognizes as a solution. So you can just ignore them (i.e., sol = sol(1:2:end);).
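An alternative to the index-based filtering (not from the original answer, just a common trick) is to keep only the returned points whose residual is actually small, since the points found at the asymptotes have huge residuals:
tol = 1e-8;                        % illustrative tolerance
res = arrayfun(fun, sol);          % residual of fun at each returned point
sol = sol(abs(res) < tol);         % drops the spurious "roots" at the tan() asymptotes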
Multiply the equation by x and cos(x) to avoid any denominators that can have the value 0,
f(x)=x*sin(x)-cos(x)==0
Consider the normalized function
h(x)=(x*sin(x)-cos(x)) / (abs(x)+1)
For large x this will be increasingly close to sin(x) (or -sin(x) for large negative x). Indeed, plotting it shows that this is already visually true, up to an amplitude factor, for x > pi.
For the first root in [0, pi/2], use the second-degree Taylor approximation of f at x = 0, x^2 - (1 - 0.5*x^2) == 0, which gives x[0] = sqrt(2.0/3) as the initial approximation. For the higher roots, take the sine roots x[n] = n*pi, n = 1, 2, 3, ... as initial approximations for the Newton iteration xnext = x - f(x)/f'(x). This produces the table below (a code sketch of the iteration follows the table):
n  initial value       after 1 Newton step  limit of Newton
0 0.816496580927726 0.863034004302817 0.860333589019380
1 3.141592653589793 3.336084918413964 3.425618459480901
2 6.283185307179586 6.403911810682199 6.437298179171945
3 9.424777960769379 9.512307014150883 9.529334405361963
4 12.566370614359172 12.635021895208379 12.645287223856643
5 15.707963267948966 15.764435036320542 15.771284874815882
6 18.849555921538759 18.897518573777646 18.902409956860023
7 21.991148575128552 22.032830614521892 22.036496727938566
8 25.132741228718345 25.169597069842926 25.172446326646664
9 28.274333882308138 28.307365162331923 28.309642854452012
10 31.415926535897931 31.445852385744583 31.447714637546234
11 34.557519189487721 34.584873343220551 34.586424215288922
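A sketch of that Newton iteration, under the assumption that a fixed number of steps per root is enough (the variable names and step count are illustrative):
f      = @(x) x.*sin(x) - cos(x);          % equation multiplied through by x*cos(x)
fprime = @(x) 2*sin(x) + x.*cos(x);        % its derivative
nmax   = 11;
x0     = [sqrt(2.0/3), (1:nmax)*pi];       % initial approximations described above
xroot  = zeros(size(x0));
for k = 1:numel(x0)
    x = x0(k);
    for it = 1:20                          % Newton converges in a handful of steps
        x = x - f(x)/fprime(x);
    end
    xroot(k) = x;
end
disp([x0(:), xroot(:)])                    % initial value vs. converged root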

Use of ilaplace gives different results through a sym variable and directly

I have a symbolic variable which contains, for example:
p =
(9311.0*s + 6.12e9)/(s^2 + 8500.0*s + 3.61e11)
where s is also symbolic.
Then, if I apply the inverse Laplace transform through the variable p, the result is
>>result=vpa(ilaplace(p,s,n),3)
result =
exp(n*(- 4255.0 + 6.01e5*i))*(4666.0 - 5066.0*i) + exp(n*(- 4255.0 - 6.01e5*i))*(4666.0 + 5066.0*i)
But if I put the expression in directly, I get what I expect (by the formula in Korn's book, or simply by definition):
vpa(ilaplace((9311.0*s + 6.12e9)/(s^2 + 8500.0*s + 3.61e11),s,n),3)
ans =
9311.0*exp(-4255.0*n)*(cos(6.01e5*n) + 1.09*sin(6.01e5*n))
Why? And what should I do to get the right answer through the variable?
P.S. vpa does not affect the main issue; it only rounds the result to 3 significant digits here.
P.P.S. Here is more code:
t = tf(linsys1) % linsys1 comes from a Simulink circuit
% get coefficients from transfer function t
[num,den] = tfdata(t);
syms s n real
% convert transfer function to symbolic
t_sym = poly2sym(cell2mat(num),s)/poly2sym(cell2mat(den),s);
functionInMuPad=['partfrac(',char(t_sym),',s,Domain = R_)']; % collect expression in string format
simpleFraction=evalin(symengine,functionInMuPad); % sum of simple fractions (only MuPAD can keep 2nd-order denominators)
functionInMuPad2=['op(',char(simpleFraction),')']; % collect expression in string format
vectorOfOperand=evalin(symengine,functionInMuPad2); % vector of simple fractions
for k=1:length(vectorOfOperand)-1
    z(k,1)=ilaplace(vectorOfOperand(k),s,n);
end
So, something is wrong with vectorOfOperand.
ilaplace(vectorOfOperand(1)) gives a complex result, but if I copy (Ctrl+C) the value of vectorOfOperand(1) and paste it into a new variable (newVariable = Ctrl+V), then ilaplace(newVariable) works fine, both in the command window and in an m-file:
bbb =(9313.8202564498255348020392071286*s + 6122529964.4040716985063406588769)/(8500.4056471180697831891961467533*s + s^2 + 360665607284.96754103451618653904);
ilaplace(bbb,s,n)
ans=9311.0*exp(-4255.0*n)*(cos(6.01e5*n) + 1.09*sin(6.01e5*n)) %after vpa
It seems like magic. vectorOfOperand is a sym. I even tried this:
vectorOfOperand=char(vectorOfOperand);
vectorOfOperand=sym(vectorOfOperand);
but it doesn't help.

Use bsxfun function to find Max entry of matrix

For a given set of q and r, I want to find the maximum of Tp = x*log(1 + (q*r*(1 - 1/y)*(2/x - y)) / (1 + r*(1 - 1/y) + q*(2/x - y))) for x in (0,1) and y in (1,2).
I can calculate it using two for loops, but when I use a really small step size for x and y, e.g., 0.00001, this takes a long time. I know that if I compute Tp as a matrix over all x and y, i.e., a matrix of size length(x) x length(y), it may be easier and faster. From what I have read, bsxfun(@times, ...) may help, but I don't know how to apply it to my problem.
Here is what I have tried, but it doesn't give the correct output. I used a larger step size here for readability. Can someone fix this issue in my code?
function maxTp
hvar=0.1:0.2:1;
hl=length(hvar);
q=hvar; r=hvar;
stepx=0.2;stepy=0.1;
y0=1.1; x0=0.1;
x=x0:stepx:1; y=y0:stepy:2;
ox = zeros(hl,1); oy = zeros(hl,1);
MaxTp = zeros(hl,1);
for k=1:hl
    Tp = bsxfun(@times,log(1 + (q(k)*r(k)*(1 - 1./y).*(2./x - y))./(1 + r(k)*(1 - 1./y) + q(k)*(2./x - y))).',x);
    MaxTp(k,1)=max(max(Tp));
    [p, q] = ind2sub(size(Tp),find(Tp==MaxTp(k,1)));
    ox(k,1)=x0+(p-1)*stepx;
    oy(k,1)=y0+(q-1)*stepy;
end
Try this inside your for loop:
Tp = bsxfun(@(x,y) log(1+(q(k)*r(k)*(1 - 1./y).*(2./x - y))./(1 + r(k)*(1 - 1./y) + q(k)*(2./x - y))).*x, x.', y); % element-wise .*x; Tp is length(x)-by-length(y)
MaxTp(k,1)=max(max(Tp));
[p2, q2] = ind2sub(size(Tp),find(Tp==MaxTp(k,1)));
ox(k,1)=x0+(p2-1)*stepx;
oy(k,1)=y0+(q2-1)*stepy;
I changed the bsxfun to do the calculation in the function part rather than in the vector inputs, and you were also overwriting your parameter vector q with the output of ind2sub, so the index outputs are renamed to p2 and q2.
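If bsxfun feels awkward, an equivalent formulation (not from the original answer) builds the full grids explicitly with ndgrid and uses only element-wise operations; this also goes inside the for loop:
[X, Y] = ndgrid(x, y);            % X and Y are length(x)-by-length(y)
Tp = X .* log(1 + (q(k)*r(k)*(1 - 1./Y).*(2./X - Y)) ./ ...
              (1 + r(k)*(1 - 1./Y) + q(k)*(2./X - Y)));
[MaxTp(k,1), idx] = max(Tp(:));
[p2, q2] = ind2sub(size(Tp), idx);
ox(k,1) = x(p2);                  % equivalently x0 + (p2-1)*stepx
oy(k,1) = y(q2);                  % equivalently y0 + (q2-1)*stepy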
You can also use fmincon (be aware that maximisation means we need to minimise the negative of the function). The following code goes inside the for loop:
f=@(x,y) log(1+(q(k)*r(k)*(1 - 1./y).*(2./x - y))./...
    (1 + r(k)*(1 - 1./y) + q(k)*(2./x - y)))*x;
o(:,k)=fmincon(@(x) -f(x(1),x(2)),[0.5;1.5],[],[],[],[],[0;1],[1;2]); % start point chosen inside the bounds [0,1] x [1,2]
o(:,k) gives the x and y coordinates of the maximum; I think it is arranged differently from your ox and oy variables, though.