I have a system of 2 equations in 2 unknowns that I want to solve using MATLAB but don't know exactly how to program. I've been given some information about a gamma distribution (mean of 1.86, 90% interval between 1.61 and 2.11) and ultimately want to get the mean and variance. I know that I could use the normal approximation but I'd rather solve for A and B, the shape and scale parameters of the gamma distribution, and find the mean and variance that way. In pseudo-MATLAB code I would want to solve this:
gamcdf(2.11, A, B) - gamcdf(1.61, A, B) = 0.90;
A*B = 1.86;
How would you go about solving this? I have the symbolic math toolbox if that helps.
The mean is A*B. So can you perhaps solve for A in terms of the mean (mu) and B?
A = mu/B
Of course, this does no good unless you know B. Or does it?
Look at your first expression. Can you substitute?
gamcdf(2.11, mu/B, B) - gamcdf(1.61, mu/B, B) = 0.90
Does this get you any closer? Perhaps. There will be no useful symbolic solution available, except in terms of the incomplete gamma function itself. How do you solve a single equation numerically in one unknown in MATLAB? Use fzero.
Of course, fzero looks for a zero value. But by subtracting 0.90, that is resolved.
Can we define a function that fzero can use? Use a function handle.
>> mu = 1.86;
>> gamfun = @(B) gamcdf(2.11, mu/B, B) - gamcdf(1.61, mu/B, B) - 0.90;
So try it. But before we do that, I always recommend plotting things.
>> ezplot(gamfun)
Hmm. That plot suggests that it might be difficult to find a zero of your function. If you do try it, you will find that good starting values for fzero are necessary here.
Sorry about my first try. Better starting values for fzero, plus some more plotting does give a gamma distribution that yields the desired shape.
>> B = fzero(gamfun,[.0000001,.1])
B =
0.0124760672290871
>> A = mu/B
A =
149.085442218805
>> ezplot(@(x) gampdf(x,A,B))
In fact, this is a very "normal", i.e., Gaussian, looking curve.
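Since the original question asked for the mean and variance, a quick follow-up (not part of the original answer): for a gamma distribution with shape A and scale B the mean is A*B and the variance is A*B^2, so
>> m = A*B     % equals mu = 1.86 by construction
>> v = A*B^2   % roughly 0.0232 with the values found above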
I have the following matrix
R=(A-C)*inv(A+B-C-C')*(A-C');
where A and B are n-by-n matrices. I want to find the n-by-n matrix C such that the determinant of R is minimized, so:
C=arg min (det(R));
Is there any function in MATLAB that can handle this problem?
It seems like you are trying to find the minimum of an unconstrained multivariable function. This can probably be achieved with fminunc:
fun = @(x)x(1)*exp(-(x(1)^2 + x(2)^2)) + (x(1)^2 + x(2)^2)/20;
x0 = [1,2];
[x,fval] = fminunc(fun,x0)
Note that there are no examples in the documentation where a matrix is used; this is probably because horrendous performance can be expected when trying to solve this problem for a matrix of any nontrivial size. (This is not because of MATLAB, but because of the nature of the problem.)
It is also good to realize that this method does not (and cannot) guarantee a global optimum, only a local one.
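As a rough sketch of how one might wire up the determinant objective for fminunc (the size n and the random A and B below are placeholders chosen for illustration, not from the question; C is flattened into a vector so fminunc can optimize over it):
n = 4;                          % example size (assumption)
A = randn(n); B = randn(n);     % example data (assumption)
detR = @(c) det((A - reshape(c,n,n)) * inv(A + B - reshape(c,n,n) - reshape(c,n,n)') * (A - reshape(c,n,n)'));
c0 = zeros(n*n, 1);             % starting guess for the vectorized C
[cOpt, fval] = fminunc(detR, c0);
C = reshape(cOpt, n, n);        % reshape the optimizer's output back into an n-by-n matrix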
I do just about everything in Matlab but I have yet to work out a good way to replicate Mathematica's FindInstance function in Matlab. As an example, with Mathematica, I can enter:
FindInstance[x + y == 1 && x > 0 && y > 0, {x, y}]
And it will give me:
{{x -> 1/2, y -> 1/2}}
When no solution exists, it will give me an empty Out. I use this often in my work to check whether or not a solution to a system of inequalities exists -- I don't really care about a particular solution.
It seems like there should be a way to replicate this in Matlab with Solve. There are sections in the help file on solving a set of inequalities for a parametrized solution with conditions. There's another section on spitting out just one solution using PrincipalValue, but this seems to just select from a finite solution set, rather than coming up with one that meets the parameters.
Can anybody come up with a way to replicate the FindInstance functionality in Matlab?
Building on what jlandercy said, you can certainly use MATLAB's linprog function, which is MATLAB's linear programming solver. A linear program in the MATLAB universe can be formulated like so:
You seek a solution x in R^n that minimizes the objective function f^T*x, subject to a set of inequality constraints A*x <= b, equality constraints Aeq*x = beq, and lower and upper bounds on each component of x. For your feasibility check you don't care which solution you get, so you can simply minimize f(x,y) = x + y subject to x + y = 1, x > 0 and y > 0.
Because MATLAB only supports "less than or equal to" inequalities, you'll need to take the negative of the two positivity constraints. In addition, MATLAB doesn't support strict inequalities, so instead you enforce each variable to be at least a small positive threshold, say epsilon = 1e-4. Therefore, your formulation becomes: minimize x + y subject to x + y = 1, -x <= -1e-4 and -y <= -1e-4.
Note that we don't have any upper or lower bounds as those conditions are already satisfied in the equality and inequality constraints. All you have to do now is plug this problem into linprog. linprog accepts syntax in the following way:
x = linprog(f,A,b,Aeq,beq);
f is a vector of coefficients for the objective function, A is a matrix of coefficients for the inequality constraints, b is a vector with the right-hand side of each inequality constraint, and Aeq and beq play the same roles for the equality constraints. x is the solution of the linear program. Putting your problem into this matrix form, each variable in MATLAB syntax becomes:
f = [1; 1];
A = [-1 0; 0 -1];
b = [-1e-4; -1e-4];
Aeq = [1 1];
beq = 1;
As such:
x = linprog(f, A, b, Aeq, beq);
We get:
Optimization terminated.
x =
0.5000
0.5000
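Since the original goal was only to check whether a solution exists, a small hedged addition: you can also ask linprog for its exit flag and return an empty result when the constraints are infeasible, which mimics FindInstance's empty output (an exit flag of 1 means a solution was found):
[x, fval, exitflag] = linprog(f, A, b, Aeq, beq);
if exitflag ~= 1
    x = [];   % no feasible point found, so mimic FindInstance's empty output
end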
If linear programming is not what you're looking for, consider looking at MATLAB's MuPAD interface: http://www.mathworks.com/help/symbolic/mupad_ug/solve-algebraic-equations-and-inequalities.html - This more or less mimics what you see in Mathematica if you're more comfortable with that.
Good luck!
MATLAB is not a symbolic solver the way Mathematica is, so you will not get exact solutions but numeric approximations. Anyway, if you are about to solve a linear program (simplex) such as the one in your example, you should use the linprog function.
I have a code that needs to evaluate the arc length equation below:
syms x
a = 10; b = 10; c = 10; d = 10;
fun = 4*a*x^3+3*b*x^2+2*c*x+d
int((1+(fun)^2)^.5)
but all that returns is below:
ans = int(((40*x^3 + 30*x^2 + 20*x + 10)^2 + 1)^(1/2), x)
Why won't MATLAB evaluate this integral? I added a line underneath to check whether it would evaluate int(x), and that returned the desired result.
Problems involving square roots of functions can be tricky to integrate. I am not sure whether a closed-form integral exists here, but if you look up the integral of the square root of a second-order polynomial you will see that even that one is already quite a mess. If you expanded the function inside the square root, you would have a sixth-order polynomial. Even if this integral could be found analytically, it may be too complex to be useful.
Anyway, if you think about it, would anyone really become any wiser by finding the analytical solution of this? If not, a numerical solution should be sufficient.
EDIT
As thewaywewalk said in the comments, a general rule for calculating these kinds of integrals would be valuable, but knowing the primitive function of this particular integrand is probably overkill (if a closed form could even be found).
Instead, define the integrand as an anonymous function
fun = @(x) sqrt((4*a*x.^3+3*b*x.^2+2*c*x+d).^2+1);
and use integral to evaluate it numerically over some range, e.g.
integral(fun,0,100);
to compute the arc length over the closed interval [0, 100].
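Putting this together with the coefficients from the question, a minimal runnable version might look like the following (the limits 0 and 100 are just the example range above):
a = 10; b = 10; c = 10; d = 10;                               % coefficients from the question
fun = @(x) sqrt((4*a*x.^3 + 3*b*x.^2 + 2*c*x + d).^2 + 1);    % arc-length integrand
L = integral(fun, 0, 100)                                     % numeric arc length over [0, 100]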
My project requires me to use MATLAB to create a symbolic equation with a square wave inside.
I tried to write it like this but to no avail:
syms t;
a=square(t);
It returns the error:
Input arguments must be 'double'.
What can I do to solve this problem? Thanks in advance for any help offered.
Here are a couple of general options using the floor and sign functions:
f=@(A,T,x0,x) A*sign(sin((2*pi*(x-x0))/T));
f=@(A,T,x0,x) A*(-1).^(floor(2*(x-x0)/T));
So for example using the floor function:
syms x
sqr=2*floor(x)-floor(2*x)+1;
ezplot(sqr, [-2, 2])
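If you specifically need a symbolic expression (as in the question), a hedged example using the sign-based form with A = 1, T = 2*pi and x0 = 0 would be:
syms t
sqr_sym = sign(sin(t));           % symbolic square wave with period 2*pi
ezplot(sqr_sym, [-2*pi, 2*pi])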
Here is something to get you started. Recall that we can express a square wave as a Fourier Series expansion. I won't bother you with the details, but you can represent any periodic function as a summation of cosines and sines (à la @RTL). Without going into the derivation, the closed-form equation for a square wave of frequency f, with a peak-to-peak amplitude of 2 (i.e. it goes from -1 to 1), is x(t) = (4/pi) * sum_{k=1..inf} sin(2*pi*(2*k-1)*f*t) / (2*k-1). Recall that the frequency is the number of cycles per second. Therefore, f = 1 means that our square wave repeats every second.
Basically, what you have to do is code up the equation above... but how in the world would you do that? Welcome to the world of the Symbolic Math Toolbox. What we will need to do beforehand is declare what our frequency is. Let's assume f = 1 for now. With the Symbolic Math Toolbox, you can define mathematical variables within MATLAB. After that, MATLAB has a whole suite of tools that you can use to evaluate functions that rely on these variables. A good example would be defining a closed-form expression for a function f(x). You can then use diff to differentiate it and see what the derivative is. Try it yourself:
syms x;
f = x^4;
df = diff(f);
syms denotes that you are declaring anything coming after the statement to be a mathematical variable. In this case, x is just that. df should now give you 4x^3. Cool eh? In any case, let's get back to our problem at hand. We see that there are in fact two variables in the periodic square function that need to be defined: t and k. Once we do this, we need to create our function that is inside the summation first. We can do this by:
syms t k;
f = 1; %//Define frequency here
funcSum = (sin(2*pi*(2*k - 1)*f*t) / (2*k - 1));
That settles that problem... now how do we encapsulate this into an infinite sum!? The sum command in MATLAB assumes that we have a finite array to sum over. If you want to symbolically sum over a function, we must use the symsum function. We usually call it like this:
funcOut = symsum(func, v, start, finish);
func is the function we wish to sum over. v is the summation variable that we wish to use to index in the sum. In our case, that's k. start is the beginning of the sum, which is 1 in our case, and finish is where we wish to finish up our summation. In our case, that's infinity, and so MATLAB has a special keyword called Inf to denote that. Therefore:
xsquare = (4/pi) * symsum(funcSum, k, 1, Inf);
xsquare now contains your representation of a square wave defined in terms of the Symbolic Math Toolbox. Now, if you want to plot your square wave to see if we have this right, we can do the following. Let's go between -3 <= t <= 3. As such, you would do something like this:
tVector = -3 : 0.01 : 3; %// Choose a step size of 0.01
yout = subs(xsquare, t, tVector);
You will notice though that there will be some values that are NaN. The reason why is because right at a multiple of the period (T = 1, 2, 3, ...), the behaviour is undefined as the derivative right at these points is undefined. As such, we can fill this in using either 1 or -1. Let's just choose 1 for now. Also, because the Fourier Series is generally a complex-valued function, and the square-wave is purely real, the output of this function will actually give you a complex-valued vector. As such, simply chop off the complex parts to get the real parts only:
yout = real(double(yout)); %// To cast back to double.
yout(isnan(yout)) = 1;
plot(tVector, yout);
You'll get a plot of a square wave that alternates between -1 and 1.
You could also do this the ezplot way by doing: ezplot(xsquare). However, you'll see that at the points where the wave repeats itself, we get NaN values and so there is a disconnect between the high peak and low peak.
Note:
Natan's solution is much more elegant. I was still writing this post by the time he put something up. Either way, I wanted to give a more signal processing perspective to how to do this. Go Fourier!
A Fourier series for the square wave of unit amplitude is:
alpha + 2/Pi*sum(sin( n * Pi*alpha)/n*cos(n*theta),n=1..infinity)
Here is a handy trick:
cos(n*theta) = Re( exp( I * n * theta))
and
1/n*exp(I*n*theta) = I*anti-derivative(exp(I*n*theta),theta)
Put it all together: pull the anti-derivative ( or integral ) operator out of the sum, and you get a geometric series. Then integrate and finally take the real part.
Result:
squarewave=
alpha+ 1/Pi*Re(I*ln((1-exp(I*(theta+Pi*alpha)))/(1-exp(I*(theta-Pi*alpha)))))
I tried it in MAPLE and it works great! (probably not very practical though)
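For completeness, a hedged MATLAB translation of that closed form (here alpha is the duty cycle and theta the phase; the plotting range and alpha = 0.5 are my own choices, not part of the original answer):
sq = @(theta, alpha) alpha + (1/pi)*real(1i*log((1 - exp(1i*(theta + pi*alpha))) ./ ...
                                                (1 - exp(1i*(theta - pi*alpha)))));
theta = linspace(-2*pi, 2*pi, 2000);
plot(theta, sq(theta, 0.5))      % alpha = 0.5 gives a unit-amplitude wave alternating between 0 and 1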
What is the least computational time consuming way to solve in Matlab the equation:
exp(ax)-ax+c=0
where a and c are constants and x is the value I'm trying to find?
Currently I am using the built-in solver function, and I know the solution is single valued, but it is just taking longer than I would like.
Just wanting something to run more quickly is insufficient for that to happen.
And, sorry, but if fzero is not fast enough then you won't do much better for a general root finding tool.
If you aren't using fzero, then why not? After all, that IS the built-in solver you did not name. (BE EXPLICIT! Otherwise we must guess.) Perhaps you are using solve, from the Symbolic Math Toolbox. It will be slower, since it is a symbolic tool.
Having said the above, I might point out that you might be able to improve by recognizing that this is really a problem with a single parameter, c. That is, transform the problem to solving
exp(y) - y + c = 0
where
y = ax
Once you know the value of y, divide by a to get x.
Of course, this way of looking at the problem makes it obvious that you have made an incorrect statement, that the solution is single valued. There are TWO solutions for any value of c less than -1. When c = -1, the solution is unique, and for c greater than -1, no solutions exist in real numbers. (If you allow complex results, then there will be solutions there too.)
So if you MUST solve the above problem frequently and fzero was inadequate, then I would consider a spline model, where I had precomputed solutions to the problem for a sufficient number of distinct values of c. Interpolate that spline model to get a predicted value of y for any c.
If I needed more accuracy, I might take a single Newton step from that point.
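A hedged sketch of that spline idea for the transformed equation exp(y) - y + c = 0 (the grid limits and the choice of the negative-branch root are my own assumptions, not part of the answer):
cgrid = linspace(-10, -1.01, 200);                               % values of c to precompute (assumption)
ygrid = arrayfun(@(c) fzero(@(y) exp(y) - y + c, [c 0]), cgrid); % negative-branch root lies in (c, 0) for c < -1
pp = spline(cgrid, ygrid);                                       % spline model y(c)

cq = -2;                                                         % query value of c
y  = ppval(pp, cq);                                              % interpolated root
y  = y - (exp(y) - y + cq)/(exp(y) - 1);                         % one Newton step for extra accuracy
% then x = y/a for your particular value of a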
In the event that you can use the Lambert W function, then solve actually does give us a solution for the general problem. (As you see, I am just guessing what you are trying to solve this with, and what are your goals. Explicit questions help the person trying to help you.)
solve('exp(y) - y + c')
ans =
c - lambertw(0, -exp(c))
The zero first argument to lambertw yields the negative solution. In fact, we can use lambertw to give us both the positive and negative real solutions for any c no larger than -1.
X = @(c) c - lambertw([0 -1],-exp(c));
X(-1.1)
ans =
-0.48318 0.41622
X(-2)
ans =
-1.8414 1.1462
Solving your equation symbolically:
syms a c x;
fx0 = solve(exp(a*x)-a*x+c==0,x)
which results in
fx0 =
(c - lambertw(0, -exp(c)))/a
As @woodchips pointed out, the Lambert W function has two primary branches, W0 and W−1. The solution given is with respect to the upper (or principal) branch, denoted W0; your equation actually has an infinite number of complex solutions across the other branches Wk (the W0 and W−1 solutions are real if c is in (−∞, −1]). In Matlab, lambertw is only implemented for symbolic inputs and is thus a very slow method of solving your equation if you're interested in numerical (double precision) solutions.
If you wish to solve such equations numerically in an efficient manner, you might look at Corless, et al. 1996. But, as long as your parameter c is in (−∞, −1], i.e., -exp(c) is in [−1/e, 0), and you're interested in the W0 branch, you can use the Matlab code that I wrote to answer a similar question at Math.StackExchange. This code should be much more efficient than using a naïve approach with fzero.
If your values of c are not in (−∞, −1], or you want the solution corresponding to a different branch, then your solution may be complex-valued and you won't be able to use the simple code I linked to above. In that case, you can more fully implement the function by reading the Corless, et al. 1996 paper, or you can try converting the Lambert W to a Wright ω function: W0(z) = ω(log(z)), W−1(z) = ω(log(z)−2πi). In your case, using Matlab's wrightOmega, the W0 branch corresponds to:
fx0 =
(c - wrightOmega(log(-exp(c))))/a
and the W−1 branch to:
fxm1 =
(c - wrightOmega(log(-exp(c))-2*sym(pi)*1i))/a
If c is real, then the above reduces to
fx0 =
(c - wrightOmega(c+sym(pi)*1i))/a
and
fxm1 =
(c - wrightOmega(c-sym(pi)*1i))/a
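As a hedged sanity check (this assumes the Symbolic Math Toolbox and evaluates symbolically, so it is slow), the two branches for a = 1, c = -2 should reproduce the two real roots shown in the lambertw example above:
a = 1; c = -2;
x0  = double((c - wrightOmega(sym(c) + sym(pi)*1i))/a)   % W0 branch, approximately -1.8414
xm1 = double((c - wrightOmega(sym(c) - sym(pi)*1i))/a)   % W-1 branch, approximately 1.1462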
Matlab's wrightOmega function is also symbolic only, but I have written a double precision implementation (based on Lawrence, et al. 2012) that you can find on my GitHub here and that is 3+ orders of magnitude faster than evaluating the function symbolically. As your problem is technically in terms of a Lambert W, it may be more efficient, and possibly more numerically accurate, to implement that more complicated function for the regime of interest (this is due to the log transformation and the extra evaluation of a complex log). But feel free to test.