How do I solve this set of equations, and can MATLAB find a solution? I'm solving for x1, x2, x3, x4, c1, c2, c3, c4.
syms c1 c2 c3 c4 x1 x2 x3 x4;
eqn1 = c1 + c2 + c3 + c4 == 2;
eqn2 = c1*x1 + c2*x2 + c3*x3 + c4*x4 == 0;
eqn3 = c1*x1^2 + c2*x2^2 + c3*x3^2 + c4*x4^2 == 2/3;
eqn4 = c1*x1^3 + c2*x2^3 + c3*x3^3 + c4*x4^3 == 0;
eqn5 = c1*x1^4 + c2*x2^4 + c3*x3^4 + c4*x4^4 == 2/5;
eqn6 = c1*x1^5 + c2*x2^5 + c3*x3^5 + c4*x4^5 == 0;
eqn7 = c1*x1^6 + c2*x2^6 + c3*x3^6 + c4*x4^6 == 2/7;
eqn8 = c1*x1^7 + c2*x2^7 + c3*x3^7 + c4*x4^7 == 0;
From what I understand, MATLAB has fsolve, solve, and linsolve, but I'm uncertain how to use them.
You have a system of non-linear equations, so you can use fsolve to find a solution.
First of all, you need to create a function, say fcn, of a variable x, where x is a vector collecting all the unknowns. The function returns an output vector containing the residual of each equation at the current x.
You have eight variables, so your vector x will have eight elements. Let's rename your variables like this:
%x1 -> x(1)    %c1 -> x(5)
%x2 -> x(2)    %c2 -> x(6)
%x3 -> x(3)    %c3 -> x(7)
%x4 -> x(4)    %c4 -> x(8)
Your function will look like this:
function F = fcn(x)
F=[x(5) + x(6) + x(7) + x(8) - 2 ;
x(5)*x(1) + x(6)*x(2) + x(7)*x(3) + x(8)*x(4) ;
x(5)*x(1)^2 + x(6)*x(2)^2 + x(7)*x(3)^2 + x(8)*x(4)^2 - 2/3 ;
x(5)*x(1)^3 + x(6)*x(2)^3 + x(7)*x(3)^3 + x(8)*x(4)^3 ;
x(5)*x(1)^4 + x(6)*x(2)^4 + x(7)*x(3)^4 + x(8)*x(4)^4 - 2/5 ;
x(5)*x(1)^5 + x(6)*x(2)^5 + x(7)*x(3)^5 + x(8)*x(4)^5 ;
x(5)*x(1)^6 + x(6)*x(2)^6 + x(7)*x(3)^6 + x(8)*x(4)^6 - 2/7 ;
x(5)*x(1)^7 + x(6)*x(2)^7 + x(7)*x(3)^7 + x(8)*x(4)^7
];
end
You can evaluate your function with some initial value of x:
x0 = [1; 1; 1; 1; 1; 1; 1; 1];
F0 = fcn(x0);
Using x0 as the initial point, your function returns:
F0 =
2.0000
4.0000
3.3333
4.0000
3.6000
4.0000
3.7143
4.0000
Now you can run fsolve, which will try to find a vector x such that your function returns all zeros:
[x,fval] = fsolve(@fcn, x0);
You will get something like this:
x =
0.7224
0.7224
-0.1100
-0.7589
0.3599
0.3599
0.6794
0.5768
fval =
-0.0240
0.0075
0.0493
0.0183
-0.0126
-0.0036
-0.0733
-0.0097
As you can see, the function values are already quite close to zero, but you probably noticed that the algorithm stopped because it reached the limit on function evaluations stored in options.MaxFunEvals (800 by default). Another possible reason is the limit on iterations stored in MaxIter (400 by default).
Redefine these values using the options parameter:
options = optimset('MaxFunEvals',2000, 'MaxIter', 1000);
[x,fval] = fsolve(@fcn, x0, options);
Now your output is much better:
x =
0.7963
0.7963
-0.0049
-0.7987
0.2619
0.2619
0.9592
0.5165
fval =
-0.0005
-0.0000
-0.0050
0.0014
0.0208
-0.0001
-0.0181
-0.0007
Just play with different parameter values to reach the precision your problem requires.
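If the solver still stops early with these settings, you can also tighten the stopping tolerances in the same options structure. A minimal sketch, assuming the optimset-based interface used above:
% tighter tolerances in addition to larger evaluation/iteration limits
options = optimset('MaxFunEvals', 5000, 'MaxIter', 2000, ...
                   'TolFun', 1e-12, 'TolX', 1e-12);
[x, fval] = fsolve(@fcn, x0, options);
norm(fval)  % overall residual; smaller means a more accurate solution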
I have a non-linear objective function with non-linear constraints as below:
fun = @(x) x(11)^2 + x(12)^2 + x(13)^2 + x(8)^2 + x(15)^2 + x(16)^2 + x(17)^2 + x(18)^2 + ...
    x(19)^2 + x(20)^2 + x(21)^2 + x(14)^2;
A = [];
b = [];
Aeq = [];
beq = [];
lb = [0,0,0,0,0,0,0,-Inf,0,0,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf];
ub = [Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf];
x0(1:21) = 0;
nonlcon = @consts;
options = optimoptions('fmincon','Display','iter','Algorithm','interior-point');
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options);
function [c,ceq] = consts(x)
c = x(1) + x(2) + x(3) + x(4) + x(5) + x(6) - 1;
ceq(1) = x(7) + x(8) - 0;
ceq(2) = x(5)*x(9) + x(3)*x(10) + x(7) + x(11) - 0;
ceq(3) = x(7) + x(5)*x(9) + x(12) - 0;
ceq(4) = x(7) + x(3)*x(10) + x(13) - 0;
ceq(5) = x(10) + x(14) - 9.666666666666666;
ceq(6) = x(6)*x(9) + x(1)*x(7) + x(10) + x(15) - 17;
ceq(7) = x(10) + x(6)*x(9) + x(16) - 0;
ceq(8) = x(10) + x(1)*x(7) + x(17) - 2.211764705882353;
ceq(9) = x(9) + x(18) - 0;
ceq(10) = x(4)*x(10) + x(2)*x(7) + x(9) + x(19) - 0;
ceq(11) = x(9) + x(4)*x(10) + x(20) - 0;
ceq(12) = x(9) + x(2)*x(7) + x(21) - 0;
end
'fun' is the objective function and 'consts' defines the constraints.
I can solve this problem with 'interior-point'; however, I want to solve it with the Frank-Wolfe algorithm, and I cannot find where to define the constraints.
Based on the documentation:
function [X,fval,i]=frank_wolfe(f,X0,e,A,b,Aeq,beq,lb,ub)
%
% Yinxiao Li
%
% function function [X,fval,i]=frank_wolfe(f,X0,e,A,b,Aeq,beq,lb,ub)
%
% order = 3;
%
% Input: f cost function
% X0 starting feasible point
% e stopping criteria
% A defined in linprog
% b
% Aeq
% beq
% lb
% ub defined in linprog
%
% Output: X optimal point
% fval cost at the optimal point
% i iterations
X=X0;
syms x1 x2 x3 lamida;
df_dx1=diff(f,x1);
df_dx2=diff(f,x2);
df_dx3=diff(f,x3);
x=[100;100;100];
g_f=[100;100;100];
i=0;
while(abs(g_f'*(x-X))>e)%stopping criteria
g_f1=subs(df_dx1,{x1,x2,x3},X);
g_f2=subs(df_dx2,{x1,x2,x3},X);
g_f3=subs(df_dx3,{x1,x2,x3},X);
g_f=[double(g_f1);double(g_f2);double(g_f3)];
[x,feval]=linprog(g_f,A,b,Aeq,beq,lb,ub);
f0=subs(f,{x1,x2,x3},X+lamida*(x-X));
f1=inline(char(f0));
[lamida,fval]=fminbnd(f1,0,1);
X1=X;
X=X+lamida*(x-X);
clear lamida;
syms lamida;
i=i+1;
end
fval=subs(f,{x1,x2,x3},X);
I am not sure about:
1- Where do I define the constraints?
2- What should I provide to linprog, since as far as I know it is for linear programming?
3- Does Frank-Wolfe find a global optimum or a local optimum?
I am trying to implement my own function that gives the same results as MATLAB's spectrogram function.
So far I have come up with this function:
function out = manulaSpectogram(x, win, noverlap, nfft)
x = x(:);
n = length(x);
wlen = length(win);
nUnique = ceil((1+nfft)/2); % number of unique points
L = fix((n-noverlap)/(wlen-noverlap)); % number of signal frames
out = zeros(L, nUnique);
index = 1:wlen;
for i = 0:L-1
xw = win.*x(index);
X = fft(xw, nfft);
out(i+1, :) = X(1:nUnique);
index = index + (wlen - noverlap);
end
end
In my tests it works and gives the same results as the spectrogram function when nfft is greater than or equal to the window length.
% first test (nfft = window length):
A = [1,2,3,4,5,6];
window = 6;
overlap = 2;
nfft = 6;
s = spectrogram(A, hamming(window), overlap, nfft)'
s2 = manulaSpectogram(A, hamming(window), overlap, nfft)
% results:
s =
9.7300 + 0.0000i -5.2936 + 0.9205i 0.7279 - 0.3737i -0.1186 + 0.0000i
s2 =
9.7300 + 0.0000i -5.2936 - 0.9205i 0.7279 + 0.3737i -0.1186 + 0.0000i
% second test (nfft > window length):
A = [1,2,3,4,5,6];
window = 3;
overlap = 2;
nfft = 6;
s = spectrogram(A, hamming(window), overlap, nfft)'
s2 = manulaSpectogram(A, hamming(window), overlap, nfft)
% results:
s =
2.3200 + 0.0000i 0.9600 + 1.9399i -1.0400 + 1.5242i -1.6800 + 0.0000i
3.4800 + 0.0000i 1.5000 + 2.8752i -1.5000 + 2.3209i -2.5200 + 0.0000i
4.6400 + 0.0000i 2.0400 + 3.8105i -1.9600 + 3.1177i -3.3600 + 0.0000i
5.8000 + 0.0000i 2.5800 + 4.7458i -2.4200 + 3.9144i -4.2000 + 0.0000i
s2 =
2.3200 + 0.0000i 0.9600 - 1.9399i -1.0400 - 1.5242i -1.6800 + 0.0000i
3.4800 + 0.0000i 1.5000 - 2.8752i -1.5000 - 2.3209i -2.5200 + 0.0000i
4.6400 + 0.0000i 2.0400 - 3.8105i -1.9600 - 3.1177i -3.3600 + 0.0000i
5.8000 + 0.0000i 2.5800 - 4.7458i -2.4200 - 3.9144i -4.2000 + 0.0000i
When nfft is less than the window length, the results are different.
% third test (nfft < window length):
A = [1,2,3,4,5,6];
window = 6;
overlap = 2;
nfft = 3;
s = spectrogram(A, hamming(window), overlap, nfft)'
s2 = manulaSpectogram(A, hamming(window), overlap, nfft)
% results:
s =
9.7300 + 0.0000i 0.7279 - 0.3737i
s2 =
3.6121 + 0.0000i -1.6861 + 1.6807i
So how can I improve my function to get the same results even when nfft is less than the window length? How does MATLAB's spectrogram handle this case?
I am implementing my own function because spectrogram is part of a larger algorithm that I need to port from MATLAB to C#, so I would like to know what the spectrogram "black box" does.
I noticed that when the window size is greater than nfft, the data has to be transformed somehow. Eventually I found an internal MATLAB function that is probably called inside the original spectrogram function: it is named datawrap and it wraps the input data modulo nfft.
So in my function I had to transform each data segment the same way datawrap does before calling fft.
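A quick sanity check that summing buffered columns reproduces datawrap's wrapping (this assumes the Signal Processing Toolbox, which provides both datawrap and buffer):
xw = (1:8)';                % a segment longer than nfft
nfft = 3;
datawrap(xw, nfft)          % MATLAB's wrapping: element-wise sum of xw(1:3), xw(4:6), xw(7:8)
sum(buffer(xw, nfft), 2)    % same result: zero-pad to a multiple of nfft and sum the columns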
Improved function:
function out = manulaSpectogram(x, win, noverlap, nfft)
x = x(:);
n = length(x);
wlen = length(win);
nUnique = ceil((1+nfft)/2); % number of unique points
L = fix((n-noverlap)/(wlen-noverlap)); % number of signal frames
out = zeros(L, nUnique);
index = 1:wlen;
for i = 0:L-1
xw = win.*x(index);
% added transformation
if length(xw) > nfft
xw = sum(buffer(xw, nfft), 2);
end
% end of added transformation
X = fft(xw, nfft);
out(i+1, :) = X(1:nUnique);
index = index + (wlen - noverlap);
end
end
I believe it works properly because it now gives the same results as MATLAB's spectrogram function.
The window tapers the ends of each block towards zero, which really requires that the window length and the FFT block length are the same. With its default parameters MATLAB allows them to differ (nsc is not equal to nfft; see the MATLAB documentation for spectrogram's defaults). In my opinion an error message should be raised whenever the window length and nfft differ, and the default behaviour of MATLAB's spectrogram is wrong. Compare with LabVIEW, DASYLab, Hewlett-Packard, and http://www.schmid-werren.ch/hanspeter/publications/2012fftnoise.pdf
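If you want to enforce this in your own code, for example inside a function like manulaSpectogram above, a minimal guard (my own sketch, not something spectrogram does by default) could be:
% refuse a window/FFT length mismatch instead of silently wrapping the data
if numel(win) ~= nfft
    error('Window length (%d) must equal nfft (%d).', numel(win), nfft);
end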
How does one extract specific parts of an expression in the MATLAB/Octave symbolic package? In XCAS one can use indexing expressions, but I can't find anything similar in Octave/MATLAB.
For instance, with X = C*L*s**2 + C*R*s + 1, is there a way to get C*R*s by X(2) or the like?
It would be nice to do this with factors too. X = (alpha + s)*(beta**2 + s**2)*(C*R*s + 1), and have X(2) give (beta**2 + s**2).
Thanks!
children (MATLAB doc, Octave doc) does this, but the order of the returned terms will not necessarily match the order in which you wrote the expression. The order is also different in MATLAB and Octave.
Expanded Expression:
syms R L C s;
X1 = C*L*s^2 + C*R*s + 1;
partsX1 = children(X1);
In MATLAB:
>> X1
X1 =
C*L*s^2 + C*R*s + 1
>> partsX1
partsX1 =
[ C*R*s, C*L*s^2, 1]
In Octave:
octave:1> X1
X1 = (sym)
2
C⋅L⋅s + C⋅R⋅s + 1
octave:2> partsX1
partsX1 = (sym 1×3 matrix)
⎡ 2 ⎤
⎣1 C⋅L⋅s C⋅R⋅s⎦
Factorised Expression:
syms R C a beta s; %alpha is also a MATLAB function so don't shadow it with your variable
X2 = (a + s) * (beta^2 + s^2) * (C*R*s + 1);
partsX2 = children(X2);
In MATLAB:
>> X2
X2 =
(a + s)*(C*R*s + 1)*(beta^2 + s^2)
>> partsX2
partsX2 =
[ a + s, C*R*s + 1, beta^2 + s^2]
In Octave:
octave:3> X2
X2 = (sym)
⎛ 2 2⎞
(a + s)⋅⎝β + s ⎠⋅(C⋅R⋅s + 1)
octave:4> partsX2
partsX2 = (sym 1×3 matrix)
⎡ 2 2⎤
⎣C⋅R⋅s + 1 a + s β + s ⎦
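To get the analogue of X(2) from the question, just index into the returned array, keeping the ordering caveat above in mind. Note that, as far as I know, newer MATLAB releases return a cell array from children, in which case you need curly braces:
term = partsX1(1);    % C*R*s, with the MATLAB ordering shown above
% term = partsX1{1};  % use this form if children returns a cell array on your version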
This question is related to the post below:
Matlab: Nonlinear equation solver
With 8 variables x1-x8, I got great results. However, when I increase to 10 variables, the results aren't so good. Even when my guess is close to the actual values and I raise the maximum iterations to 100000, the results are still poor. Is there anything else I can do?
Here is the code:
function F = fcn(x)
F=[x(6) + x(7) + x(8) + x(9) + x(10)-2 ;
x(6)*x(1) + x(7)*x(2) + x(8)*x(3) + x(9)*x(4) + x(10)*x(5) ;
x(6)*x(1)^2 + x(7)*x(2)^2 + x(8)*x(3)^2 + x(9)*x(4)^2 + x(10)*x(5)-2/3 ;
x(6)*x(1)^3 + x(7)*x(2)^3 + x(8)*x(3)^3 + x(9)*x(4)^3 + x(10)*x(5) ;
x(6)*x(1)^4 + x(7)*x(2)^4 + x(8)*x(3)^4 + x(9)*x(4)^4 + x(10)*x(5)-2/5 ;
x(6)*x(1)^5 + x(7)*x(2)^5 + x(8)*x(3)^5 + x(9)*x(4)^5 + x(10)*x(5) ;
x(6)*x(1)^6 + x(7)*x(2)^6 + x(8)*x(3)^6 + x(9)*x(4)^6 + x(10)*x(5)-2/7 ;
x(6)*x(1)^7 + x(7)*x(2)^7 + x(8)*x(3)^7 + x(9)*x(4)^7 + x(10)*x(5) ;
x(6)*x(1)^8 + x(7)*x(2)^8 + x(8)*x(3)^8 + x(9)*x(4)^8 + x(10)*x(5)-2/9 ;
x(6)*x(1)^9 + x(7)*x(2)^9 + x(8)*x(3)^9 + x(9)*x(4)^9 + x(10)*x(5)
];
end
clc
clear all;
format long
x0 = [0.90; 0.53; 0; -0.53; -0.90; 0.23; 0.47; 0.56; 0.47; 0.23]; %Guess
F0 = fcn(x0);
[x,fval]=fsolve(@fcn, x0) %solve with the default options
options = optimset('MaxFunEvals',100000, 'MaxIter', 100000); %custom stopping criteria
[x,fval]=fsolve(@fcn, x0, options) %solve with the custom options
Here are the actual values I'm trying to get:
x1 = 0.906179
x2 = 0.538469
x3 = 0.000000
x4 = -0.53846
x5 = -0.906179
x6 = 0.236926
x7 = 0.478628
x8 = 0.568888
x9 = 0.478628
x10 = 0.236926
The result of optimization functions like fsolve depends very much on the initial point. A non-linear function like yours can have a lot of local minima, so one option is to randomize the initial point repeatedly and hope the optimization lands in a better minimum than before.
You can do it like this:
clear;
options = optimset('MaxFunEvals',2000, 'MaxIter', 1000, 'Display', 'off');
n = 200; %how many times to run fsolve with different initial points
z_min = 10000; %the current minimum Euclidean distance between fval and the zero vector
for i=1:n
x0 = rand(10, 1);
[x,fval]=fsolve(@fcn, x0, options);
z = norm(fval);
if (z < z_min)
z_min = z;
x_best = x;
f_best = fval;
display(['i = ', num2str(i), '; z_min = ', num2str(z_min)]);
display(['x = ', num2str(x_best')]);
display(['f = ', num2str(f_best')]);
fprintf('\n')
end
end
Change the maximum number of optimization loops n and watch the z value; it shows how close your function values are to the zero vector.
The best solution I've got so far:
x_best =
0.9062
-0.9062
-0.5385
0.5385
0.0000
0.2369
0.2369
0.4786
0.4786
0.5689
f_best =
1.0e-08 * %these are very small numbers :)
0
0.9722
0.9170
0.8740
0.8416
0.8183
0.8025
0.7929
0.7883
0.7878
For this solution z_min is 2.5382e-08.
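One more thing worth noting: rand only draws initial points from [0, 1], while several of the expected values are negative, so you may want to widen the sampling range, for example:
x0 = 2*rand(10, 1) - 1;   % uniform initial guesses in [-1, 1] instead of [0, 1]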
I have a general equation
t=tr+(ts-tr)/(1+(a*h)^n)^(1-1/n)
For h = 0, 1, 2, 3 I have t = 2.000, 1.6300, 1.2311, 1.1084. Therefore there are 4 equations with 4 unknowns: tr, ts, a, n.
I used the solve function in MATLAB:
s=solve('tr+(ts-tr)/(1+(a*0)^n)^(1-1/n)=2','tr+(ts-tr)/(1+(a*1)^n)^(1-1/n)=1.63','tr+(ts-tr)/(1+(a*2)^n)^(1-1/n)=1.2311','tr+(ts-tr)/(1+(a*3)^n)^(1-1/n)=1.1084')
and the error is:
??? Error using ==> mupadmex
Error in MuPAD command: Singularity [ln];
during evaluation of 'numeric::fsolve'
Error in ==> sym.sym>sym.mupadmexnout at 2018
out = mupadmex(fcn,args{:});
Error in ==> solve at 76
[symvars,R] = mupadmexnout('symobj::solvefull',eqns,vars);
What should I do?
The problem appears to be your use of the solve function, which only works well for simple equations; it is better to use fsolve. Since I am worried that I would be doing an assignment for you, I am only going to show you how to solve another example using fsolve.
Suppose that you want to solve
1 = x_1
1 = x_1 + x_2
-1 = x_1 + x_2 + x_3
-1 = x_1 + x_2 + x_3 + x_4
then what you first need to do is write these as equations that equal 0:
0 = x_1 - 1
0 = x_1 + x_2 - 1
0 = x_1 + x_2 + x_3 + 1
0 = x_1 + x_2 + x_3 + x_4 + 1
Then you need to write a function that takes in a vector x, whose components represent x_1, x_2, x_3 and x_4. The output of the function is also a vector, whose components are the right-hand sides of the above equations (see the function fun below). fsolve calls this function with successive guesses for x until it finds values that make the output (nearly) zero; we never run this function ourselves, which is why it sits below the top-level function.
Then you create a function handle to this function with fHandle = @fun. You can think of fHandle as another name for fun: evaluating fHandle([1; 2; 3; 4]) is the same as evaluating fun([1; 2; 3; 4]). After this you make an initial guess for the vector x, say xGuess = [1; 1; 1; 1]. Finally you pass fHandle and xGuess to fsolve.
Here is the code
function Solve4Eq4Unknown()
fHandle = @fun;
xGuess = ones(4,1);
xSolution = fsolve(fHandle, xGuess)
end
function y = fun(x)
y = zeros(4,1); % preallocation is not strictly necessary, but it is efficient
y(1) = x(1) - 1;
y(2) = x(1) + x(2) - 1;
y(3) = x(1) + x(2) + x(3) + 1;
y(4) = x(1) + x(2) + x(3) + x(4) + 1;
end
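Running the top-level function from the command line is enough to see the result. The exact solution of this small linear system is x = [1; 0; -2; 0], so fsolve should return values very close to that:
Solve4Eq4Unknown()   % prints xSolution, approximately [1; 0; -2; 0]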