I have a fairly complex optimization problem set up that I've solved through fmincon by calling it like this:
myfun = @(x5) 0.5 * (norm(C*x5 - d))^2 + 0.5 * (timeIntervalMeanGlobal * powerAbsMaxMaxGlobal * sum(x5(28:128),1))^2;
[x5, fval] = fmincon(myfun, initialGuess, -A, b, Aeq, beq, lb, []);
The components are far too long to print here, but here are the dimensions:
C: 49 x 128
x5: 128 x 1
d: 49 x 1
timeIntervalMeanGlobal, powerAbsMaxMaxGlobal: constants
initialGuess: 128 x 1
A: 44541 x 128
b: 44541 x 1
Aeq: 24 x 128
beq: 24 x 1
lb: 128 x 1
This works in code, but I don't get results that I'm completely happy with. I'd like to compare it with the built-in ga function in MATLAB, which is called in a similar way, but I get an error when I try to run it like this:
[x5, fval] = ga(myfun, nvars, -A, b, Aeq, beq, lb, []);
where nvars = 128. There's a long list of about 8 errors starting with
??? Error using ==> mtimes
Inner matrix dimensions must agree.
and ending with
Caused by:
Failure in user-supplied fitness function evaluation. GA cannot continue.
Can someone please instruct me on how to call ga properly, and give insight on why this error might occur with the ga call when the same code doesn't cause an error with fmincon? I've tried all the MATLAB help files and examples with a few different permutations of this but no better luck. Thanks.
UPDATE: I think I found the problem but I don't know how to fix it. The ga documentation says "The fitness function should accept a row vector of length nvars". In my case, myfun is the fitness function, but x5 is a column vector (so is lb). So while mathematically I know that C*x5 = d is the same as x5'*C' = d' even for non-square matrices, I can't formulate the problem that way for the ga solver. I tried - it makes it past the fitness function but then I get the error
The number of rows in A must be the same as the length of b.
Any thoughts on how to get this problem in the right format for the solver? Thanks!
Got it! I just had to rewrite the fitness function so it treats x5 as a row vector, even though it's a column vector in all the constraints:
myfun = @(x5) 0.5 * (norm(x5 * C' - d'))^2 + 0.5 * (timeIntervalMeanGlobal * powerAbsMaxMaxGlobal * sum(x5(28:128)))^2;
Phew!
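For reference, a minimal sketch of the full ga call with the row-vector fitness function (same variable names as above; the constraint matrices keep the same shapes they had in the fmincon call, since they have nvars columns either way):

% Fitness function written for a ROW vector x5, as ga requires
myfun = @(x5) 0.5 * (norm(x5 * C' - d'))^2 + ...
        0.5 * (timeIntervalMeanGlobal * powerAbsMaxMaxGlobal * sum(x5(28:128)))^2;

nvars = 128;
% Linear constraints are unchanged from the fmincon setup
[x5, fval] = ga(myfun, nvars, -A, b, Aeq, beq, lb, []);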
I am trying to solve for a variable in an equation (syms x); I've simplified the equation here. I want to store the value in P_9, a 1x1000 vector, by converting the symbolic result to a double, but I get the error below. solve is returning an empty (0x0) sym, which is where I think my error lies.
Please help me troubleshoot my code. Many thanks!
number = 1000;
P_9 = zeros(1,number);
A_t=0.67;
A_e = linspace(0,10,number);
for n=1:number
%% find p9
syms x
eqn = x + 5 == A_t/A_e(n);
solx = solve(eqn,x);
P_9(n) = double(solx);
end
Warning: Explicit solution could not be found.
In solve at 179
In HW4 at 74
In an assignment A(I) = B, the number of elements in B and I must be the same.
Error in HW4 (line 76)
P_9(n) = double(solx);
You most likely have an equation where x can't be isolated.
For example, it is impossible to isolate x in tan(x) + x == 1. So if you try to solve this equation analytically, MATLAB will tell you that x can't be isolated and therefore there is no explicit analytical solution.
So instead of using an analytical method to solve your equation, you need to use a numerical method; it's less "sexy", but this time you will actually be able to solve your equation.
Conveniently, MATLAB already includes a numerical solver: vpasolve.
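For example, a quick sketch on the tan(x) + x == 1 equation mentioned above:

syms x
vpasolve(tan(x) + x == 1, x)   % numeric root, approximately 0.4797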
So your code will look like:
for n=1:number
%% find p9
syms x
eqn = x + 5 == A_t/A_e(n);
solx = vpasolve(eqn,x);
P_9(n) = double(solx);
end
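A slightly more defensive version of the same loop (a sketch using the variable names from the question): declare the symbolic variable once, and guard against cases where no numeric root is found, e.g. A_e(1) = 0 makes the right-hand side infinite:

syms x                            % declare once, outside the loop
for n = 1:number
    eqn = x + 5 == A_t/A_e(n);
    solx = vpasolve(eqn, x);
    if isempty(solx)
        P_9(n) = NaN;             % no numeric root for this n (e.g. A_e(n) == 0)
    else
        P_9(n) = double(solx(1));
    end
end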
I have the equation 1 = ((π r^2)^n / n!) · e^(-π r^2)
I want to solve it using MATLAB. Is the following the correct code for doing this? The answer isn't clear to me.
n= 500;
A= 1000000;
d= n / A;
f= factorial( n );
solve (' 1 = ( d * pi * r^2 )^n / f . exp(- d * pi * r^2) ' , 'r')
The answer I get is:
Warning: The solutions are parametrized by the symbols:
k = Z_ intersect Dom::Interval([-(PI/2 -
Im(log(`fexp(-PI*d*r^2)`)/n)/2)/(PI*Re(1/n))], (PI/2 +
Im(log(`fexp(-PI*d*r^2)`)/n)/2)/(PI*Re(1/n)))
> In solve at 190
ans =
(fexp(-PI*d*r^2)^(1/n))^(1/2)/(pi^(1/2)*d^(1/2)*exp((pi*k*(2*i))/n)^(1/2))
-(fexp(-PI*d*r^2)^(1/n))^(1/2)/(pi^(1/2)*d^(1/2)*exp((pi*k*(2*i))/n)^(1/2))
You have several issues with your code.
1. First, you're evaluating some parts in floating-point. This isn't always bad as long as you know the solution will be exact. However, factorial(500) overflows to Inf. In fact, for factorial, anything bigger than 170 will overflow and any input bigger than 21 is potentially inexact because the result will be larger than flintmax. This calculation should be performed symbolically via sym/factorial:
n = sym(500);
f = factorial(n);
which returns an integer approximately equal to 1.22e1134 for f.
2. You're using a period ('.') to specify multiplication. In MuPAD, upon which most of the symbolic math functions are based, a period is shorthand for concatenation.
Additionally, as is stated in the R2015a documentation (and possibly earlier):
String inputs will be removed in a future release. Use syms to declare the variables instead, and pass them as a comma-separated list or vector.
If you had not used a string, I don't think that it would have been possible for your command to get misinterpreted and return such a confusing result. Here is how you could use solve with symbolic variables:
syms r;
n = sym(500);
A = sym(1000000);
d = n/A;
s = solve(1==(d*sym(pi)*r^2)^n/factorial(n)*exp(-d*sym(pi)*r^2),r)
which, after several minutes, returns a 1,000-by-1 vector of solutions, all of which are complex. As @BenVoigt suggests, you can try the 'Real' option for solve. However, in R2015a at least, the four solutions returned in terms of lambertw don't appear to actually be real.
A couple of things to note:
- MATLAB is not using the values of A, d, and f from your workspace.
- f . exp is not doing at all what you wanted, which was multiplication; it instead becomes an unknown function named fexp.
- Passing the additional option 'Real', true to solve gets rid of most of these extraneous conditions (see the sketch below this list).
- You probably should avoid calling the version of solve that accepts a string, and use the Symbolic Toolbox instead (syms 'r').
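A sketch of the same symbolic call with the 'Real' option added (per the R2015a behavior described above, this still may not produce genuinely real solutions):

syms r
n = sym(500);
A = sym(1000000);
d = n/A;
% restrict solve to solutions it considers real
s = solve(1 == (d*sym(pi)*r^2)^n/factorial(n)*exp(-d*sym(pi)*r^2), r, 'Real', true)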
I have a dataset comprising 30 independent variables, and I tried performing linear regression in MATLAB R2010b using the regress function.
I get a warning stating that my matrix X is rank deficient to within machine precision.
Now, the coefficients I get after executing this function don't match the experimental ones.
Can anyone please suggest how to perform the regression analysis for this equation involving 30 variables?
Following our discussion, the reason you are getting that warning is that you have what is known as an underdetermined system. Basically, you have a set of constraints with more variables to solve for than data available to pin them down. One example of an underdetermined system is something like:
x + y + z = 1
x + y + 2z = 3
There are an infinite number of combinations of (x,y,z) that can solve the above system. For example, (x, y, z) = (1, −2, 2), (2, −3, 2), and (3, −4, 2). What rank deficient means in your case is that there is more than one set of regression coefficients that would satisfy the governing equation that would describe the relationship between your input variables and output observations. This is probably why the output of regress isn't matching up with your ground truth regression coefficients. Though it isn't the same answer, do know that the output is one possible answer. By running through regress with your data, this is what I get if I define your observation matrix to be X and your output vector to be Y:
>> format long g;
>> B = regress(Y, X);
>> B
B =
0
0
28321.7264417536
0
35241.9719076362
899.386999172398
-95491.6154990829
-2879.96318251964
-31375.7038251919
5993.52959752106
0
18312.6649115112
0
0
8031.4391233753
27923.2569044728
7716.51932560781
-13621.1638587172
36721.8387047613
80622.0849069525
-114048.707780113
-70838.6034825939
-22843.7931997405
5345.06937207617
0
106542.307940305
-14178.0346010715
-20506.8096166108
-2498.51437396558
6783.3107243113
You can see that there are seven regression coefficients that are equal to 0, which corresponds to 30 - 23 = 7. We have 30 variables and 23 constraints to work with. Be advised that this is not the only possible solution. regress essentially computes the least-squares solution, i.e. the B for which the sum of squared residuals Y - X*B is smallest. This essentially simplifies to:
B = X^(+) * Y
where X^(+) is what is known as the pseudo-inverse of the matrix. MATLAB has this available, and it is called pinv. Therefore, if we did:
B = pinv(X)*Y
We get:
B =
44741.6923363563
32972.479220139
-31055.2846404536
-22897.9685877566
28888.7558524005
1146.70695371731
-4002.86163441217
9161.6908044046
-22704.9986509788
5526.10730457192
9161.69080479427
2607.08283489226
2591.21062004404
-31631.9969765197
-5357.85253691504
6025.47661106009
5519.89341411127
-7356.00479046122
-15411.5144034056
49827.6984426955
-26352.0537850382
-11144.2988973666
-14835.9087945295
-121.889618144655
-32355.2405829636
53712.1245333841
-1941.40823106236
-10929.3953469692
-3817.40117809984
2732.64066796307
You see that there are no zero coefficients because pinv returns the minimum L2-norm solution, which tends to "spread out" the coefficient values (for lack of a better term). You can verify that these are valid regression coefficients by doing:
>> Y2 = X*B
Y2 =
16.1491563400241
16.1264219600856
16.525331600049
17.3170318001845
16.7481541301999
17.3266932502295
16.5465094100486
16.5184456100487
16.8428701100165
17.0749421099829
16.7393450000517
17.2993993099419
17.3925811702017
17.3347117202356
17.3362798302375
17.3184486799219
17.1169638102517
17.2813552099096
16.8792925100727
17.2557945601102
17.501873690151
17.6490477001922
17.7733493802508
Similarly, if we used the regression coefficients from regress, so B = regress(Y,X); then doing Y2 = X*B, we get:
Y2 =
16.1491563399927
16.1264219599996
16.5253315999987
17.3170317999969
16.7481541299967
17.3266932499992
16.5465094099978
16.5184456099983
16.8428701099975
17.0749421099985
16.7393449999981
17.2993993099983
17.3925811699993
17.3347117199991
17.3362798299967
17.3184486799987
17.1169638100025
17.281355209999
16.8792925099983
17.2557945599979
17.5018736899983
17.6490476999977
17.7733493799981
There are some slight computational differences, which is to be expected. Similarly, we can also find the answer by using mldivide:
B = X \ Y
B =
0
0
28321.726441712
0
35241.9719075889
899.386999170666
-95491.6154989513
-2879.96318251572
-31375.7038251485
5993.52959751295
0
18312.6649114859
0
0
8031.43912336425
27923.2569044349
7716.51932559712
-13621.1638586983
36721.8387047123
80622.0849068411
-114048.707779954
-70838.6034824987
-22843.7931997086
5345.06937206919
0
106542.307940158
-14178.0346010521
-20506.8096165825
-2498.51437396236
6783.31072430201
You can see that this curiously matches up with what regress gives you. That's because \ is a smarter operator. Depending on how your matrix is structured, it finds the solution to the system by a different method. I'd like to refer you to the post by Amro that talks about what algorithms mldivide uses when examining the properties of the input matrix being operated on:
How to implement Matlab's mldivide (a.k.a. the backslash operator "\")
What you should take away from this answer is that you can certainly go ahead and use those regression coefficients and they will more or less give you the expected output for each value of Y with each set of inputs for X. However, be warned that those coefficients are not unique. This is apparent as you said that you have ground truth coefficients that don't match up with the output of regress. It isn't matching up because it generated another answer that satisfies the constraints you have provided.
There is more than one answer that can describe that relationship if you have an underdetermined system, as you have seen by my experiments shown above.
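If you want to check this yourself, here is a minimal sketch (X and Y are assumed to be your observation matrix and output vector, as above):

% Confirm the rank deficiency and that both coefficient sets fit Y equally well
rank(X)                          % less than size(X, 2) = 30 for a rank-deficient X
B1 = regress(Y, X);
B2 = pinv(X) * Y;
norm(Y - X*B1)                   % residual with the regress / backslash solution
norm(Y - X*B2)                   % residual with the minimum-norm solution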
I have a model with linear constraints and a nonlinear objective function, and I'm trying to use MATLAB's fmincon to solve it. Aineq is a 24x13 matrix, and Aeq is a 24x13 matrix as well. But when I insert this command:
>> [x , lambda] = fmincon(@MP_ObjF,Aineq,bineq,Aeq,beq);
I encounter this error:
Warning: Trust-region-reflective method does not currently solve this type of
problem, using active-set (line search) instead.
In fmincon at 439
??? Error using ==> fmincon at 692
Aeq must have 312 column(s).
What is probably wrong with it? Why should Aeq have 312 columns?!? I would appreciate any help. Thanks.
If you look at the documentation for fmincon (doc fmincon) you'll see an input called options. In this you can set the algorithm MATLAB uses to solve your minimization problem. If you run
Opt = optimset('fmincon');
Then you can modify the algorithm option using
Opt.Algorithm = 'active-set';
Just send Opt to fmincon and then MATLAB won't have this problem anymore. Take a look inside Opt and you'll find a ton of options you can change to modify the optimization routine.
As for the number of columns: if you're using linear constraints, then your input argument to MP_ObjF must be a column vector with n rows and 1 column. Then A must be m x n, where m is the number of constraints and n is the number of variables. This is so that the matrix multiplication is well defined.
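To make this concrete: fmincon's second argument is the initial point x0, so the call in the question effectively passes the 24x13 Aineq (312 elements) as the variable vector, which is where the "312 column(s)" message comes from. A minimal sketch of a corrected call, assuming MP_ObjF expects a 13-element column vector:

x0 = zeros(13, 1);                            % hypothetical initial guess, one entry per variable
opt = optimset('Algorithm', 'active-set');    % avoid the trust-region-reflective warning
[x, fval] = fmincon(@MP_ObjF, x0, Aineq, bineq, Aeq, beq, [], [], [], opt);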
I'm sorry if my first answer was ambiguous. Maybe it will help if I do an example, as I saw several suspicious things in your comments. Let's say we want to minimize x^2 + y^2 + (z-1)^2 subject to x + y + z = 1, 2x + 3y - 4z <= 5, and x, y, z >= -5. The solution is obviously (0,0,1)...
We first have to make our objective function,
fun = @(vec) vec(1)^2 + vec(2)^2 + (vec(3)-1)^2;
For fmincon to work, there can only be one input to the function, but that input can be a vector. So here x = vec(1) and so on... I think your comments are indicating that your objective function has multiple inputs. If you need to pass some parameters that aren't being optimized, there is documentation for this on the MathWorks site (http://www.mathworks.com/help/optim/ug/passing-extra-parameters.html)
Then we can set the optimization settings
opt = optimset('fmincon');
opt.Algorithm = 'active-set';
You may also have to modify the large-scale setting for the algorithm warning to go away; I can't remember...
Then we can set
Aeq = [1,1,1]; % equality constraint, if you had another eq constraint, it would be another row to Aeq
beq = 1; % equality constraint
A = [2,3,-4]; % inequality
b = 5; % inequality
lb = [-5;-5;-5]; % lower bound
x0 = [0.5;0.5;0]; % initial feasible guess, needs to be a column vector
[x,fval] = fmincon(fun,x0,A,b,Aeq,beq,lb,[],[],opt);
Then hopefully this finds x = [0;0;1]
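If you want to sanity-check the result, a quick sketch using the same variables as the example above:

disp(x)              % should be close to [0; 0; 1]
disp(Aeq*x - beq)    % equality constraint residual, approximately 0
disp(A*x - b)        % inequality slack, should be <= 0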
let
n0 =
nx*cos(a) + nz*cos(b)*sin(a) + ny*sin(a)*sin(b)
ny*cos(b) - nz*sin(b)
nz*cos(a)*cos(b) - nx*sin(a) + ny*cos(a)*sin(b)
in a and b, with the n's taking fixed (but, of course, not assigned) values.
if I do
[a,b]=solve(n0-[1 0 0]',a,b,'IgnoreAnalyticConstraints',true)
I get
Error using solve>assignOutputs (line 257)
3 variables does not match 2 outputs.
Error in solve (line 193)
varargout = assignOutputs(nargout,sol,sym(vars));
which makes me wonder: "3 variables"?
Then I try
>> [a,b,c]=solve(n0-[1 0 0]',a,b,'IgnoreAnalyticConstraints',true)
and this is the response:
a =
cos(a)/(cos(a)^2 + sin(a)^2)
b =
(sin(a)*sin(b))/((cos(a)^2 + sin(a)^2)*(cos(b)^2 + sin(b)^2))
c =
(cos(b)*sin(a))/((cos(a)^2 + sin(a)^2)*(cos(b)^2 + sin(b)^2))
What is it doing? What's in c? I suppose it's solving with respect to nx, ny, nz, but why? Every time I try to solve a problem with n+k equations in n variables I get strange errors, even if the rank of the system is just n.
That means even a=2, b=3, a+b=5 gives me problems.
How can I fix that?
I also cannot replicate the "Error in solve" error. What version of MATLAB are you using? Also, I think some of the error message is missing – always list the entire error message. In any case, in R2013a, solve does not find any solutions. Mathematica 9's Solve also does not find any.
I suspect the reason @DanielR and I can't exactly reproduce your issue in the second case is that you may have a mistake in one of your lines above – it should be:
[a,b,c] = solve(n0-[1 0 0]','IgnoreAnalyticConstraints',true)
that produces
a =
cos(a)/(cos(a)^2 + sin(a)^2)
b =
(sin(a)*sin(b))/((cos(a)^2 + sin(a)^2)*(cos(b)^2 + sin(b)^2))
c =
(cos(b)*sin(a))/((cos(a)^2 + sin(a)^2)*(cos(b)^2 + sin(b)^2))
What are the outputs a, b, and c (these simplify to cos(a), sin(a)*sin(b), and sin(a)*cos(b), by the way)? A big hint is that all of the solutions are in terms of your original variables a and b, but not nx, ny, or nz. When you don't specify which variables to solve for, solve picks them for you. If you instead return the solutions in structure form, the nature of the output is made clear:
s = solve(n0-[1 0 0]','IgnoreAnalyticConstraints',true)
s =
nx: [1x1 sym]
ny: [1x1 sym]
nz: [1x1 sym]
But I think that you probably want to solve for a and b as a function of nx, ny, and nz, not the other way around. You're not correct about using solve to find solutions to overdetermined systems. Even when you have more equations than unknowns, this is not always possible with nonlinear equations. If you can introduce some assumptions, add equations, or specify numerical values for any of the nx, ny, or nz variables, solve may be able to separate and invert the equations.
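For instance, a minimal sketch with hypothetical numeric values (nx = 0, ny = 0, nz = 1), for which solve has a chance of inverting the system for a and b:

% Illustrative sketch only: pick a hypothetical numeric direction for (nx, ny, nz)
syms a b
nx = 0; ny = 0; nz = 1;
n0 = [nx*cos(a) + nz*cos(b)*sin(a) + ny*sin(a)*sin(b);
      ny*cos(b) - nz*sin(b);
      nz*cos(a)*cos(b) - nx*sin(a) + ny*cos(a)*sin(b)];
[sa, sb] = solve(n0 == [1; 0; 0], [a, b])
% one expected solution branch: sa = pi/2, sb = 0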
And you shouldn't really use the term "rank" except for linear systems. In the case of the linear system example that you gave, solve works fine:
[a,b] = solve([a==2 b==3 a+b==5],a,b)
or
[a,b] = solve(a==2,b==3,a+b==5,a,b)
or
[a,b] = solve([1 0;0 1;1 1]*[a;b]==[2;3;5],a,b)
returns
Warning: 3 equations in 2 variables.
> In /Applications/MATLAB_R2013a.app/toolbox/symbolic/symbolic/symengine.p>symengine at 56
In mupadengine.mupadengine>mupadengine.evalin at 97
In mupadengine.mupadengine>mupadengine.feval at 150
In solve at 170
a =
2
b =
3