I have been tinkering with the MATLAB solve function for a while, but cannot figure out how it determines the order in which it outputs the symbolic variables.
Specifically, I have a system of equations that I want to solve simultaneously.
a = f1(a, b, c, d)
b = f2(a, b, c, d)
c = f3(a, b, c, d)
d = f4(a, b, c, d)
These equations are symbolic and contain other symbolic variables besides a, b, c, and d, so the solution outputs aren't numeric but symbolic.
For example, when I am solving for the equations of motion of an inverted spring pendulum, I have two equations that both depend on phiDDot and lenDDot (ddphi and ddlen in the code below). I use the solve function to solve for ddphi and ddlen separately using this call:
[eom2, eom1] = solve(Lag(1)==0, Lag(2)==0, ddphi, ddlen);
The solution for ddphi corresponds to the second element of the output, while ddlen corresponds to the first. I was wondering whether there is some way to tell MATLAB to output ddphi first and ddlen second, or at least a way to determine the output order. Not knowing the order of the variables becomes a big problem when I am solving for more than 4 variables and trying to solve the differential equations using ode45.
Any advice would be helpful!!
I believe that it's alphabetical based on the ASCII values of the variable names in your equations. As per the documentation for solve, sym/symvar is used to parse the equations in the case where you don't supply the names of output variables. The help for sym/symvar indicates that it returns variables in lexicographical order, i.e. alphabetical (symvar does the same, even though it doesn't say so, by making calls to setdiff). If you look at the actual code for solve.m (type edit solve in your command window) and examine the sub-function called assignOutputs (line 190 in R2012b) you'll see that it makes a call to sort and that there's a comment about lexicographical order.
In R2012b (and likely earlier) the documentation differs from that of R2013a in a way that seems relevant to your issue. In R2013a, this sentence is added:
If you explicitly specify independent variables vars, then the solver uses the same order to return the solutions.
I'm still running R2012b, so I can't confirm this different behavior.
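If the positional ordering itself is the concern, one workaround (a sketch reusing the Lag, ddphi, and ddlen symbols from the question) is to take a single struct output; solve names the struct fields after the variables, so no positional bookkeeping is needed:
% Single-output form: solve returns a struct with one field per unknown.
S = solve(Lag(1) == 0, Lag(2) == 0, ddphi, ddlen);
phiSol = S.ddphi;   % solution for ddphi, independent of output ordering
lenSol = S.ddlen;   % solution for ddlen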
I would like to find optimal hyperparameters for a specific function; I am using the bayesopt routine in MATLAB.
I can set the variables to optimize like the following:
a = optimizableVariable('a',[0,1],'Type','integer');
But I have coupled variables, i.e., variables whose values depend on the values of other variables, e.g., a = {0,1}, and b = {0,1} only applies if a = 1.
Meaning that b has an influence on the function only if a == 1.
I thought about creating a single variable that encompasses all the possibilities, i.e., c = 1 if a = 0, c = 2 if a = 1, b = 0, and c = 3 if a = 1, b = 1. The problem is that I am interested in optimizing continuous variables, so that approach no longer holds.
I tried something along the lines of
b = a * optimizableVariable('b',[0,1],'Type','integer');
But MATLAB threw an error.
Undefined operator '*' for input arguments of type 'optimizableVariable'.
After three months almost to the day, buried deep in the MATLAB documentation, the answer turned out to be conditional constraints on the variables:
https://www.mathworks.com/help/stats/constraints-in-bayesian-optimization.html#bvaw2ar
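For reference, a minimal sketch of what that looks like (myObjFcn is a placeholder objective that receives a one-row table X; the key piece is the 'ConditionalVariableFcn' name-value pair, which marks b as irrelevant whenever a is 0):
a = optimizableVariable('a',[0,1],'Type','integer');
b = optimizableVariable('b',[0,1]);   % continuous; only meaningful when a == 1
results = bayesopt(@myObjFcn, [a, b], 'ConditionalVariableFcn', @condvarfcn);

function Xnew = condvarfcn(X)
% When a == 0, b is irrelevant, so mark it as NaN (the objective must handle NaN b).
Xnew = X;
Xnew.b(Xnew.a == 0) = NaN;
end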
I am trying to compare two simple expressions using the MATLAB Symbolic Math Toolbox. For some reason, the code returns 0. Any idea?
syms a b c
A = (a/b)^c
B = a^c/b^c
isequal(A,B)
It seems like MATLAB has a hard time telling that two expressions are the same when (potentially) fractional exponents are involved.
So, one solution, as suggested by Mikhail, is to restrict the values of c to be integers only, although, as discussed in the Math.SE question jodag posted, there is nothing wrong with fractional exponents in this case.
Hence, since the restriction to integers is not necessary for the statement to be true, another solution is to use the simplify function on the expression for B, allowing it to run more simplification steps in order to reach a more fully simplified expression.
syms a b c
A = (a/b)^c
B = a^c/b^c
isequal(A,simplify(B,'Steps',4))
Four steps is actually the smallest number that worked for me, but that could vary across MATLAB versions, I'm assuming. To be safe, I would include more, but for really large expressions this could become computationally intensive, so some judgment is necessary. Note that you could also use the 'Seconds' option to limit the amount of time allowed for simplification.
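For instance, a time-limited call looks like the line below (same A and B as above; whether the comparison then succeeds depends on how far simplify gets within the budget):
isequal(A, simplify(B, 'Seconds', 10))   % cap simplification at roughly 10 seconds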
In general, what you wrote isn't true; under the right "assumptions" it becomes true. For example, assuming c is an integer, you can get MATLAB to expand A:
clc; clear all;
syms a
syms b
syms c integer
A = (a/b)^c;
B = simplify((a^c)/(b^c));
disp(isequal(A,B));
disp(A);
disp(B);
This prints:
1
a^c/b^c
a^c/b^c
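A closely related check, shown here only as a sketch (behavior may vary slightly across Symbolic Math Toolbox versions): the same integer assumption can be stated with assume and the comparison delegated to isAlways.
syms a b c
assume(c, 'integer')           % same assumption as above, set explicitly
isAlways((a/b)^c == a^c/b^c)   % expected to return logical 1 under this assumption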
I have 9 variables and a final rate; in this case I will call the variables (A, B, C, D, E, F, G, H, I) and the final rate Z. It is given that variables A, B, and one other variable from (C...I) together are good predictors of Z. I have the data in a spreadsheet for 12 years for all of these variables as well as the rate.
I need to find a formula so I can test this. I've been trying for hours to use MATLAB for multiple linear regression with the least-squares method, but I just can't seem to find the right information, get results that correlate, or be sure that my approach is proper. I'm also open to using other methods such as predictive distributions or kernels, but I do need to use the least-squares method and have a general formula I can show my work with.
I'm thinking of a formula such as Z = (mean(Z) - b1*A - b2*B - b3*C) + b1*A + b2*B + b3*C (where C stands for each of the remaining variables tested in turn), but I'm not sure how to do this in MATLAB or whether that is even the correct approach. Please also explain why you would use that formula.
Thanks
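For what it's worth, the least-squares fit described above is usually done in MATLAB with the backslash operator. A minimal sketch, assuming column vectors A, B, C, and Z of equal length, where C stands for whichever third predictor is being tested:
% Ordinary least squares for Z = b0 + b1*A + b2*B + b3*C + error
X    = [ones(size(Z)), A, B, C];   % design matrix with an intercept column
beta = X \ Z;                      % least-squares estimates [b0; b1; b2; b3]
Zhat = X * beta;                   % fitted values
res  = Z - Zhat;                   % residuals, useful for comparing candidate predictors
The same fit (with standard errors and R-squared) can also be obtained from fitlm in the Statistics and Machine Learning Toolbox.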
Suppose I have a CNF expression with variables (a, b, c, d, e, f, g). How would I go about using a SAT solver to find an assignment for (d, e, f) given that {a, b, c, g} = {1, 0, 0, 1} and {a, b, c, g} = {1, 1, 1, 1}? If it were one assumption, calling a SAT solver to find assignments for {d, e, f} would be straightforward (e.g., by adding unit clauses to the CNF). But what if I have multiple assumptions? Is this possible?
Here are the steps for what (I think) harold was trying to describe to you. You have some CNF formula F over the variables a, b, c, d, e, f and g.
Duplicate the formula, calling the duplicate G.
In G, replace the variable a with aa, b with bb, c with cc, and g with gg.
Add unit clauses to F so that (a,b,c,g) = (1,0,0,1).
Add unit clauses to G so that (aa,bb,cc,gg) = (1,1,1,1).
Concatenate the formulas F and G and feed the result into the SAT solver.
The solver will find a satisfying assignment consistent with both (a,b,c,g)'s and (aa,bb,cc,gg)'s preset values.
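A sketch of those steps using the usual DIMACS-style integer encoding (variables a..g numbered 1..7; the clauses in F below are placeholders, and the fresh indices 8..11 play the role of aa, bb, cc, gg):
% F: cell array of clauses over variables 1..7 = (a b c d e f g).
F = { [1 -2 5], [-3 6 7], [2 -4 -6] };     % placeholder clauses for illustration
map = 1:7;                                 % d, e, f (4, 5, 6) stay shared
map([1 2 3 7]) = 8:11;                     % a, b, c, g get copies aa, bb, cc, gg
G = cellfun(@(cl) sign(cl) .* map(abs(cl)), F, 'UniformOutput', false);
unitsF = { 1, -2, -3, 7 };                 % force (a, b, c, g)     = (1, 0, 0, 1)
unitsG = { 8, 9, 10, 11 };                 % force (aa, bb, cc, gg) = (1, 1, 1, 1)
cnf = [F, G, unitsF, unitsG];              % one CNF over 11 variables for the solver
Any satisfying assignment of cnf then gives values for d, e, f (variables 4, 5, 6) that are consistent with both sets of preset values.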
It is not quite clear whether you want a practical answer or an interesting theoretical answer. I will go for the practical one.
For each set of assumptions, call a SAT solver that supports solving under assumptions (example) on that set of assumptions. Do this sequentially on the same solver instance.
Pros:
You do not mix satisfiability of mutually exclusive sets of assumptions. If set of assumptions A is sat for a formula F and the other set A' is unsat for F, each call to the solver tells you if those assumptions are sat/unsat.
Learned clauses from the first call may stick around for the second call. The intermediate learned clauses talk about the same variables. (Note: If you have a disjoint formula F & G where F is over variables X, G is over variables Y and X and Y share no variables, resolution -- the inference rule used in CDCL -- cannot derive clauses mixing F and G. There is no obvious gain of mixing the two together instead of splitting them apart unless one instance is much easier to prove unsat and stop early.)
Cons:
If instance A is hard to solve in practice but A' is trivial, you might get stuck on A.
It is not parallel, so if you have many more than two instances that you want to solve ASAP, you'll need additional mechanisms.
I know this is a bit of an obvious answer, but it is worth trying. If that fails, you can try fancier things like solving w.r.t. the assumptions A union A', and only if that is unsat, falling back on this strategy of A then A'. That won't help for your example, as (a,b,c,g) = (1,0,0,1) and (a,b,c,g) = (1,1,1,1) are mutually exclusive.
I'm having trouble understanding and applying the nlinfit function in MATLAB. So, let's say I'm given the vectors
x = [1, 2, 3, 4, 5]
y = [2.3, 2.1, 1.7, .95, .70]
and I'm asked to fit this data to an exponential form (I don't know if the numbers will work out, I made them up) where y = A*e^(B*x) + C (A, B, and C are constants).
My understanding is that nlinfit takes 4 arguments: the two vectors, a model function, which in this case should be the equation I have above, and then beta0, which I don't understand at all. My question is: how do you implement the model function in nlinfit, how do you find beta0 (when only working with the 2 vectors you want to plot/fit), and how should it be implemented? Can someone show me an example so that I can apply this function to any fit? I suspect I'll be using this a lot in the future and really want to learn it.
Check out the second example in the docs: http://www.mathworks.com/help/stats/nlinfit.html
Basically you pass a function handle as your modelfun argument. Either put a function in a file and pass its name with an @ in front, or make an anonymous function like this:
nlinfit(x, y, @(b,x) b(1).*exp(b(2).*x) + b(3), beta0)
You'll notice that in the above I have packed all your parameters into a single vector b. The first argument of your model function must be a vector of all the coefficients you are trying to solve for (i.e. A, B, and C in your case) and the second must be x.
As woodchips has said, beta0 is your starting point, i.e. your best guess (it doesn't have to be great) for the A, B, and C parameters; something like [1 1 1] or rand(3,1). It is very problem-specific, though, so you should play around with a few. Just remember that this is a local search and can therefore get stuck in local optima, so your starting point can actually be quite important.
beta0 is your initial guess at the parameters. The better your guess, the more likely you will see convergence to a viable solution. nlinfit is no more than an optimization. It has to start somewhere.
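Putting the pieces together, a minimal runnable sketch with the data from the question (requires the Statistics and Machine Learning Toolbox; the starting guess below is arbitrary and may need adjusting for other data):
x = [1, 2, 3, 4, 5];
y = [2.3, 2.1, 1.7, 0.95, 0.70];
modelfun = @(b, x) b(1) .* exp(b(2) .* x) + b(3);   % y = A*exp(B*x) + C, b = [A B C]
beta0 = [1, -0.5, 0];                               % rough initial guess for [A, B, C]
beta  = nlinfit(x, y, modelfun, beta0);             % fitted [A, B, C]
xfine = linspace(min(x), max(x), 100);
plot(x, y, 'o', xfine, modelfun(beta, xfine), '-'); % data vs. fitted curve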