Integration with parameters with Maple

I would like to compute (formally) some integrals which are just rational fractions and which depend on 3 parameters. It works if I set two parameters to trivial values; otherwise I must stop the computation after 5 minutes. Can anyone help me make it work?
Here is my worksheet:
restart;
omega(x,y):= 1/(1+x^(2)+y^(2))*<2*x,2*y,x^(2)+y^(2)-1>:
Omega(x,y, a,b, l):= simplify(omega(evalc(Re((l*(x+I*y)+a+Ib)/(1-(a-I*b)*(x+I*y))) ),evalc(`&Im`((l*(x+I*y)+a+I*b)/(1-(a-I*b)*(x+I*y)))) )):
assume(0 < l);
simplify(int(int(Omega(x, y, a, b, l)[1]*(diff(Omega(x, y, a, b, l)[1], x)), x = -infinity .. infinity), y = -infinity .. infinity));
Warning, computation interrupted
simplify(int(int(Omega(x, y, 0, 1, l)[3]*(diff(Omega(x, y, 0, 1, l)[1], x)), x = -infinity .. infinity), y = -infinity .. infinity));
Warning, computation interrupted
simplify(int(int(Omega(x, y, 0, 0, l)[3]*(diff(Omega(x, y, 0, 0, l)[1], x)), x = -infinity .. infinity), y = -infinity .. infinity));
-2*Pi/l

You have Ib where you may intend I*b.
You have &Im where you may intend Im.
You place an assumption on l but place no helpful assumptions on a and b. Are both a and b to be taken as purely real? I have used such assumptions below.
Below, I utilize assuming rather than assume, as I find it more convenient and it doesn't tend to lead to a muddle of mixed instances of unassumed and assumed names which ostensibly appear equal.
The syntax you used for assigning omega and Omega can produce operators (the apparent intention) when used as 2D Math input, albeit via a disambiguation popup. But here we have plaintext source, and such syntax used as 1D Maple Notation code makes remember-table assignments instead of operators. Below I use a syntax valid in both 1D and 2D modes for assigning operators to those two names.
The following results each took anywhere from a few seconds to about a minute on 64-bit Linux running Maple 17 on an Intel i5.
restart;
omega:=(x,y)->1/(1+x^(2)+y^(2))*<2*x,2*y,x^(2)+y^(2)-1>:
Omega:=(x,y,a,b,l)->simplify(omega(evalc(Re((l*(x+I*y)+a+I*b)/
(1-(a-I*b)*(x+I*y)))),
evalc(Im((l*(x+I*y)+a+I*b)/
(1-(a-I*b)*(x+I*y)))))):
T31:=simplify(Int(Int(Omega(x,y,a,b,l)[3]
*(diff(Omega(x,y,a,b,l)[1],x)),
x=-infinity..infinity),y=-infinity..infinity),
size) assuming real, l>0:
simplify(value(subs(b=0,T31))) assuming real, l>0;
-2*Pi*(a^2+l)/(a^2+l^2)
simplify(value(T31)) assuming real, l>0;
-2*(a^4 + a^2*l^2 - b^4 + b^2*l^2 + a^2*l - b^2*l + l^3)*Pi/(a^4 + 2*a^2*b^2 + 2*a^2*l^2 + b^4 + 2*b^2*l^2 + l^4)
T11:=simplify(Int(Int(Omega(x,y,a,b,l)[1]
*(diff(Omega(x,y,a,b,l)[1],x)),
x=-infinity..infinity),y=-infinity..infinity),
size) assuming real, l>0:
simplify(value(subs(b=0,T11))) assuming real, l>0;
0
simplify(value(T11)) assuming real, l>0;
0

Matlab integral over function of symbolic matrix

In an attempt to speed up for loops (or eliminate them altogether), I've been trying to pass matrices into functions. I have to use sine and cosine as well. However, when I attempt to find the integral of a matrix whose elements are composed of sines and cosines, it doesn't work and I can't seem to find a way to make it do so.
I have a matrix SI that is composed of sines and cosines with respect to a variable that I have defined using the Symbolic Math Toolbox. As such, it would actually be even better if I could just pass the SI matrix and receive a matrix of values that is the integral of the sine/cosine function at every location in this matrix. I would essentially get a square matrix back. I am not sure if I phrased that very well, but I have the following code below that I have started with.
I = [1 2; 3 4];
J = [5 6; 7 8];
syms o;
j = o*J;
SI = sin(I + j);
%SI(1,1) = sin(5*o + 1)
integral(@(o) o.*SI(1,1), 0, 1);
Ideally, I would want to solve integral(@(o) o*SI, 0, 1) and get a matrix of values. What should I do here?
Given that A, B and C are all N x N matrices, let's assume for the moment that they're all 2 x 2 matrices to make the example more succinct. Let's also define o as a mathematical symbol, based on your comments in your question above.
syms o;
A = [1 2; 3 4];
B = [5 6; 7 8];
C = [9 10; 11 12];
Let's also define your function f according to your comments:
f = o*sin(A + o*B + C)
We thus get:
f =
[ o*sin(5*o + 10), o*sin(6*o + 12)]
[ o*sin(7*o + 14), o*sin(8*o + 16)]
Remember, for each element in f, we take the corresponding elements in A, B and C and add them together. For the first row and first column of each matrix, we have 1, 5 and 9, so A + o*B + C for the first row, first column equates to 1 + 5*o + 9 = 5*o + 10.
Now if you want to integrate, just use the int command. This will find the exact integral, provided that the integral is solvable in closed form. int can also handle matrices, so it will integrate each element in the matrix. You can call it like so:
out = int(f,a,b);
This will integrate f element by element from the lower bound a to the upper bound b. Supposing our limits were from 0 to 1 as you said, we therefore write:
out = int(f,0,1);
We thus get:
out =
[ sin(15)/25 - sin(10)/25 - cos(15)/5, sin(18)/36 - sin(12)/36 - cos(18)/6]
[ sin(21)/49 - sin(14)/49 - cos(21)/7, sin(24)/64 - sin(16)/64 - cos(24)/8]
Bear in mind that out is defined in the symbolic math toolbox. If you want the actual numerical values, you need to cast the answer to double. Therefore:
finalOut = double(out);
We thus get:
finalOut =
0.1997 -0.1160
0.0751 -0.0627
Obviously, this can generalize for any size M x N matrices, so long as they all share the same dimensions.
Caveat
sin, cos, tan and the other related functions have their units in radians. If you wish for the degrees equivalent, append a d at the end of the function (i.e. sind, cosd, tand, etc.)
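For instance (a quick illustration, my addition, not part of the original answer):
sin(pi/2)   % radians: returns 1
sind(90)    % degrees: also returns 1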
I believe this is the answer you're after. Good luck!

how to tell Maple two operators don't commute when expanding in a Taylor polynomial

Let's start with something that works:
restart:
with(Physics):
Setup(noncommutativeprefix = {A, B}):
expand((A+B)^2);
gives
A^2+A*B+B*A+B^2
Maple recognizes that A and B don't commute. Now, let's expand their sum in a Taylor series, and expand that:
restart:
with(Physics):
Setup(noncommutativeprefix = {A, B}):
S := convert(taylor(exp((A+B)*delta), delta = 0, 3), polynom);
gives
S := 1 + (A+B)*delta + (1/2)*(A+B)^2*delta^2
and then
expand(S);
gives
1 + delta*A + delta*B + (1/2)*A^2*delta^2 + A*B*delta^2 + (1/2)*B^2*delta^2
Maple no longer recognizes that A and B don't commute. Clearly(?) I don't know how to use Maple properly. How do I get Maple to recognize that A and B don't commute in this context? There is discussion of this here: http://www.mapleprimes.com/questions/95808-Noncommutative-Operators, in the Maple help, and elsewhere, I'm sure.
I should add (obviously) that the following works, but it gets ugly. There must be a better way:
restart;
unassign(`&*`); define(`&*`, multilinear, zero = 0, identity = 1, flat);
constants := constants, lambda;
No := 3;
S := convert(taylor(exp((A+B)*delta), delta = 0, No), polynom);
S := 1 + (A+B)*delta + (1/2)*(A+B)^2*delta^2
S := subs((A+B)^2 = `&*`(A+B, A+B), (A+B)^3 = `&*`(`&*`(A+B, A+B), A+B), (A+B)^4 = `&*`(`&*`(`&*`(A+B, A+B), A+B), A+B), S);
S := 1 + (A+B)*delta + (1/2)*(A &* A + A &* B + B &* A + B &* B)*delta^2
simplify(S);
1 + delta*A + delta*B + (1/2)*delta^2*(A &* A) + (1/2)*delta^2*(A &* B) + (1/2)*delta^2*(B &* A) + (1/2)*delta^2*(B &* B)
definemore(`&*`, `&*`(A, A) = A^2, `&*`(B, B) = B^2, `&*`(A, B) = AB, `&*`(B, A) = BA);
simplify(S);
1 + delta*A + delta*B + (1/2)*A^2*delta^2 + (1/2)*B^2*delta^2 + (1/2)*AB*delta^2 + (1/2)*BA*delta^2
I'm now using Maple 17.
Edit: Here is a continuation of the above question, now with Edgardo's feedback:
I am trying to perform the following calculation, using Gtaylor:
with(Physics);
Setup(noncommutativeprefix = {A, B});
exp3 := convert(Gtaylor(exp((a-I*b)*delta*B), delta = 0, No), polynom);
exp5 := convert(Gtaylor(exp((a-I*b)*delta*A), delta = 0, No), polynom);
expansion := coeff(simplify(subs(delta = lambda, exp1*exp2*exp1*exp3*exp5*exp3)), lambda, No-1);
Not all the code is included; exp5 and exp3 are examples of what all the other exp's look like. No is set to 5, and a and b are fractions. This code works (I haven't confirmed with independent code, but let's assume it does), but it takes a -very- long time. Is there any way to speed it up?
In brief: a) use Physics:-Gtaylor, not taylor, and b) before proceeding, update your Physics package with the latest version, available for download at the Maplesoft Maple Physics: Research & Development webpage.
In detail: Physics is a relatively new package. The taylor command predates Physics, and uses * and ^ operators that assume commutativity. A significant number of developments happen every year towards making the Maple library more aware of the presence of noncommutative objects within algebraic expressions, so that their products, powers, simplification, expansion and combination rules, etc., happen as expected. A relevant command in this process is Physics:-Check, which will tell you, among other things, whether products of noncommutative objects are ill-formed, i.e. expressed using a commutative * operator. Try it with the output of taylor (not Physics:-Gtaylor) and you will see.
Regarding updating Physics: bug fixes and new Physics and Physics-related developments are integrated into the R&D version of the package every week.
Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Weird result of CPLEX "cplexmilp" function in MATLAB

According to my previous question, I want to optimize an objective function using binary integer linear programming (all variables are binary) as follows:
Minimize f = (c1*x1) + (c2*x2) + MAX((c3*x3),(c4*x4)) + (c5*x5)
Subject to: some equality and inequality constraints
For the MAX operator, I used an auxiliary variable x6 and added the constraints x6 >= (c3*x3) and x6 >= (c4*x4) to the problem, so the problem turns into:
Minimize f = (c1*x1) + (c2*x2) + x6 + (c5*x5), with added constraints.
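To make that linearization concrete, here is a minimal sketch of how the two extra rows can be encoded for cplexmilp (the coefficient values c1..c5 are made up purely for illustration):
c = [2 3 5 4 1];                 % hypothetical cost coefficients
f = [c(1); c(2); 0; 0; c(5); 1]; % x6 carries the former MAX term
% x6 >= c3*x3 and x6 >= c4*x4, rewritten as Aineq*x <= bineq:
Aineq = [0 0 c(3) 0    0 -1;     % c3*x3 - x6 <= 0
         0 0 0    c(4) 0 -1];    % c4*x4 - x6 <= 0
bineq = [0; 0];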
I used CPLEX API for MATLAB to optimize the objective function.
Because all variables are binary except one (x6, which is continuous) and the coefficients have double values, the problem turns into a mixed integer linear program, so I used the cplexmilp function with this configuration:
Variable types: ctype='BBBBBC' ( B:Binary, C:Continuous)
Lower Bounds: lb=[0 0 0 0 0 0]
Upper Bounds: ub=[0 0 0 0 0 inf]
Function Call:
[fval] = cplexmilp(f, Aineq, bineq, Aeq, beq,[],[],[],lb,ub,ctype)
but sometimes in the results I see that x3 and x4 have continuous values (between 0 and 1) and x3+x4=1.
So my questions are:
Can anyone tell me what's wrong with x3 and x4?
Is there a way to avoid the auxiliary variable and solve this optimization problem with cplexbilp?
Thanks in advance
[UPDATE]:
One part of my code had logical errors, which I have fixed. Now, in all cases where x3 and x4 are not binary, we have x3*(1-x3) < 10^(-5), x4*(1-x4) < 10^(-5) and x3+x4=1, so @David Nehme was right (according to his useful comment), but my second question still remains!
David's solution shows you why your formulation has become linearized but non-binary. You could also try printing out the problem in LP or MPS format to see all the resulting constraints.
You asked about a formulation that continues to be purely binary. Here's one way to do that:
Transforming this to a purely binary formulation
Here's a way to keep the problem with Max() also binary. It involves additional auxiliary variables, but it is relatively straightforward once you apply the standard if-then IP tricks.
First, let's list out the four possible cases in a simple table, and see what values the max() term can take. These cases are mutually exclusive.
x3 | x4 | max(c3*x3, c4*x4)
-------------------------------
 0 |  0 | 0
 1 |  0 | c3
 0 |  1 | c4
 1 |  1 | max(c3, c4), a constant
Now, let C34 be the max(c3, c4). Note that C34 is a number, not a variable in the problem. We need this for the new Objective function.
Introducing new binary variables
For each of the four cases above, let's introduce an auxiliary BINARY variable. For clarity, call them y0, y3, y4, y34.
Only one of the cases in the table above can hold, so we add:
y0 + y3 + y4 + y34 = 1
yi are BINARY
Now, all that remains is to add linking constraints that ensure:
If x3=0 AND x4=0 then y0=1
If x3=1 AND x4=0 then y3=1
If x3=0 AND x4=1 then y4=1
If x3=1 AND x4=1 then y34=1
We can ensure that by adding a pair of linear constraints for each of the conditions above.
2 y0 <= (1- x3) + (1 -x4)
(1-x3) + (1-x4) <= y0 + 1
2 y3 <= x3 + (1-x4)
x3+(1-x4) <= y3 + 1
2 y4 <= x4 + (1-x3)
x4+(1-x3) <= y4 + 1
2 y34 <= x3 + x4
x3+x4 <= y34 + 1
The new objective function now becomes:
Minimize f = (c1*x1) + (c2*x2) + (c5*x5) + 0*y0 + c3*y3 + c4*y4 + C34*y34
Notice that we don't have the Max() term anymore in the objective function. And all x and y variables are binary. All your original constraints plus the new ones above (8+1 = 9 of them) should be included. Once you do that, you can use cplexbilp because it is a pure BILP problem.
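As a sketch of how those rows could be entered in MATLAB (the variable ordering [x1 x2 x3 x4 x5 y0 y3 y4 y34] is my assumption; each row is the corresponding constraint above with all variables moved to the left-hand side):
%        x1 x2 x3  x4 x5 y0  y3  y4  y34
Aineq = [0  0  1   1  0  2   0   0   0;    % 2*y0 <= (1-x3)+(1-x4)
         0  0 -1  -1  0 -1   0   0   0;    % (1-x3)+(1-x4) <= y0+1
         0  0 -1   1  0  0   2   0   0;    % 2*y3 <= x3+(1-x4)
         0  0  1  -1  0  0  -1   0   0;    % x3+(1-x4) <= y3+1
         0  0  1  -1  0  0   0   2   0;    % 2*y4 <= x4+(1-x3)
         0  0 -1   1  0  0   0  -1   0;    % x4+(1-x3) <= y4+1
         0  0 -1  -1  0  0   0   0   2;    % 2*y34 <= x3+x4
         0  0  1   1  0  0   0   0  -1];   % x3+x4 <= y34+1
bineq = [2; -1; 1; 0; 1; 0; 0; 1];
Aeq   = [0 0 0 0 0 1 1 1 1];               % y0+y3+y4+y34 = 1
beq   = 1;
ctype = 'BBBBBBBBB';                       % all nine variables binary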
Hope that helps.
In your case, the auxiliary variable x6 is needed because the "MAX" function is not linear (it has a discontinuous gradient where c3*x3 == c4*x4). By adding the additional variable and constraints you are creating a linear version of the problem with a solution that is equivalent to your original nonlinear problem. The trade-off is that if c3 or c4 have a value other than 0 or 1, then your problem is not a pure binary problem. That is a very good trade-off, especially if you are using a good MIP solver.
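As a practical aside (my addition, not part of the answer above): MIP solvers accept integer variables that are within a small integrality tolerance of an integer, which is why values like 0.99999 can show up. Given the solution vector x returned by cplexmilp, a simple post-processing sketch is:
tol = 1e-5;             % assumed tolerance, matching the 10^(-5) observed above
bin = 1:5;              % indices of the binary variables
assert(all(abs(x(bin) - round(x(bin))) < tol), 'solution not near-binary');
x(bin) = round(x(bin)); % snap near-binary values back to exactly 0 or 1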

What is the Haskell / hmatrix equivalent of the MATLAB pos function?

I'm translating some MATLAB code to Haskell using the hmatrix library. It's going well, but
I'm stumbling on the pos function, because I don't know what it does or what its Haskell equivalent would be.
The MATLAB code looks like this:
[U,S,V] = svd(Y,0);
diagS = diag(S);
...
A = U * diag(pos(diagS-tau)) * V';
E = sign(Y) .* pos( abs(Y) - lambda*tau );
M = D - A - E;
My Haskell translation so far:
(u,s,v) = svd y
diagS = diag s
a = u `multiply` (diagS - tau) `multiply` v
This actually type checks ok, but of course, I'm missing the "pos" call, and it throws the error:
inconsistent dimensions in matrix product (3,3) x (4,4)
So I'm guessing pos does something with matrix size? Googling "matlab pos function" didn't turn up anything useful, so any pointers are very much appreciated! (Obviously I don't know much MATLAB)
Incidentally this is for the TILT algorithm to recover low rank textures from a noisy, warped image. I'm very excited about it, even if the math is way beyond me!
Looks like the pos function is defined in a different MATLAB file:
function P = pos(A)
P = A .* double( A > 0 );
I can't quite decipher what this is doing. Assuming that boolean values cast to doubles with true == 1.0 and false == 0.0,
does it turn negative values to zero and leave positive values unchanged?
It looks as though pos finds the positive part of a matrix. You could implement this directly with mapMatrix
pos :: (Storable a, Num a, Ord a) => Matrix a -> Matrix a
pos = mapMatrix go where
  go x | x > 0     = x
       | otherwise = 0
Note, though, that Matlab makes no distinction between Matrix and Vector, unlike Haskell.
But it's worth analyzing that Matlab fragment more. Per http://www.mathworks.com/help/matlab/ref/svd.html the first line computes the "economy-sized" Singular Value Decomposition of Y, i.e. three matrices such that
U * S * V' = Y
where, assuming Y is m x n, U is m x n, S is n x n and diagonal, and V is n x n. Further, both U and V should be orthonormal. In linear algebraic terms this separates the linear transformation Y into two "rotation" components and the central singular-value scaling component.
Since S is diagonal, we extract that diagonal as a vector using diag(S) and then subtract the term tau, which must also be a vector. This might produce a diagonal containing negative values, which cannot be properly interpreted as singular values, so pos is there to trim out the negative entries, setting them to 0. We then use diag to convert the resulting vector back into a diagonal matrix and multiply the pieces back together to get A, a modified form of Y.
Note that we can skip some steps in Haskell as svd (and its "economy-sized" partner thinSVD) return vectors of singular values instead of mostly 0'd diagonal matrices.
(u, s, v) = thinSVD y
-- note the trans here, that was the ' in Matlab
a = u `multiply` diag (mapVector (max 0) s) `multiply` trans v
Above, mapVector maps max 0 over the Vector of singular values s (fmap won't do, since hmatrix's Storable Vector is not a Functor), and then diag (from Numeric.Container) reinflates the Vector into a Matrix prior to the multiplys. With a little thought it's easy to see that max 0 is just pos applied to a single element.
(A > 0) returns a matrix marking the positions of elements of A which are larger than zero.
For example, if you have
A = [ -1 2 -3 4
5 6 -7 -8 ]
then B = (A > 0) returns
B = [ 0 1 0 1
1 1 0 0]
Note that we have ones corresponding to elements of A which are larger than zero, and 0 otherwise.
Now if you multiply this elementwise with A using the .* notation, then you are multiplying each element of A that is larger than zero by 1, and each remaining element by zero. That is, A .* B means
[ -1*0 2*1 -3*0 4*1
5*1 6*1 -7*0 -8*0 ]
giving finally,
[ 0 2 0 4
5 6 0 0 ]
So you need to write your own function that will return positive values intact, and negative values set to zero.
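As an aside (my addition, not from the original answers), MATLAB's element-wise max against a scalar does the same job in one line:
pos = @(A) max(A, 0);        % keeps positive entries, zeroes out the rest
pos([-1 2 -3 4; 5 6 -7 -8])  % returns [0 2 0 4; 5 6 0 0]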
Also, note that u and v do not match in dimension for a general SVD decomposition, so you would actually need to re-diagonalize pos(diagS - tau), so that u * diag(pos(diagS - tau)) conforms to v.

Solving for variables in an over-parameterised system

I am trying to write a Matlab program that accepts variables for a system from the user, but there are more variables than system parameters. To be specific, six variables in three equations:
w - d - M = 0
l - d - T = 0
N - T + M = 0
This could be represented in matrix form as A*x=0 where
A = [1 0 0 -1 0 -1;
0 1 0 -1 -1 0;
0 0 1 0 -1 1];
x = [w l N d T M]';
I would like to be able to solve this system given a known subset of the variables. For example, if the user gives d, T, M, then the system is trivially solved for the other three variables. If the user supplies w, N, M, then it becomes a solvable 3-DOF system. And so on. (If the user over- or under-specifies the system then an error may of course result.)
Given any one of these combinations it's simple to (a priori) use matrix algebra to calculate the unknown quantities. But I don't know how to solve the general case, aside from using the symbolic toolbox (which I prefer not to do for compatibility reasons).
When I started with this approach I thought this step would be easy, but my linear algebra is rusty; am I missing something simple?
First, let x be a vector with NaN for the unknown values. This allows you to use ISNAN to find the indices of the unknowns. If you calculate A*x using only the user-specified terms, that gives you a column of constants b. Take those constants to the right-hand side of the equation, and you have a system of the form A(:,~idx)*z = -b.
A = [1 0 0 -1 0 -1;
0 1 0 -1 -1 0;
0 0 1 0 -1 1];
x = [NaN NaN NaN 1 1 1]'; % knowns filled in, NaN marks the unknowns
idx = ~isnan(x);
b = A(:,idx)*x(idx); % user provided constants
z = A(:,~idx)\(-b); % solution of A(:,~idx)*z = -b
x(~idx) = z;
With input x = [NaN NaN NaN 1 1 1]', for instance, you get the result [2 2 0 1 1 1]'. This uses MLDIVIDE, I'm not well versed enough in linear algebra to know whether PINV or something else would be better.
Given the linear system
A = [1 0 0 -1 0 -1;
0 1 0 -1 -1 0;
0 0 1 0 -1 1];
A*x = 0
Where the elements of x are identified as:
x = [w l N d T M]';
Now, suppose that {d,T,M} have known, fixed values. What we need are the indices of these elements in x. We've chosen the 4th, 5th and 6th elements of x to be knowns.
known_idx = [4 5 6];
unknown_idx = setdiff(1:6,known_idx);
Now, let me pick some arbitrary numbers for those known variables.
xknown = [1; -3; 7.5];
We will partition A into two submatrices, corresponding to the known and unknown variables.
Aknown = A(:,known_idx);
Aunknown = A(:,unknown_idx);
Now, move the known values to the right-hand side of the equality, and solve. See that Aunknown is a 3x3 matrix, so the problem is (hopefully) well posed.
xunknown = Aunknown\(-Aknown*xknown)
xunknown =
8.5
-2
-10.5
Combine it all into the final solution.
x = zeros(6,1);
x(known_idx) = xknown;
x(unknown_idx) = xunknown;
x =
8.5
-2
-10.5
1
-3
7.5
Note that I've expanded this all out into a few lines to show what is happening more clearly. But I could have done it all in just a line or two of code had I wanted to be parsimonious.
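For instance, with known_idx and xknown in hand, the whole computation collapses to something like this (a sketch using the names defined above):
x = zeros(6,1);
x(known_idx)   = xknown;
x(unknown_idx) = A(:,unknown_idx) \ (-A(:,known_idx)*xknown);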
Finally, see that had I chosen some other set of variables to be the knowns, such as {l,d,T}, then the resulting system would be singular. So you must watch for that event. A test on the rank of Aunknown might be useful to weed out the problems. Or you might choose to employ pinv to build the solution.
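A sketch of such a guard (my addition):
Aunknown = A(:,unknown_idx);
if rank(Aunknown) < numel(unknown_idx)
    error('This choice of knowns leaves a singular system; consider pinv.');
end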
The system of equations is fixed? What if you store the variables present in your three equations in a list per equation:
(w, d, M)
(l, d, T)
(N, T, M)
Then you get the user input and you can calculate the number of variables given in each equation:
User input: w, N, M
Given variables:
(w, d, M) -> 2
(l, d, T) -> 0
(N, T, M) -> 2
This would trivially give you d from the first equation and T from the third; substituting these into the second equation then yields l. More generally, any equation with two of its three variables given can be solved immediately, and each newly solved variable may unlock further equations, so you always know which equation system you have to solve.
It's basically your own simple symbolic solver for a single system of equations.
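A minimal sketch of that bookkeeping in MATLAB (the index lists and sample input are my own illustration):
% variables per equation, as index lists into x = [w l N d T M]'
eqVars = {[1 4 6], [2 4 5], [3 5 6]}; % w-d-M=0, l-d-T=0, N-T+M=0
known  = [1 3 6];                     % user supplied w, N, M
for k = 1:numel(eqVars)
    nGiven = numel(intersect(eqVars{k}, known));
    fprintf('equation %d: %d of 3 variables known\n', k, nGiven);
end
% any equation with 2 of its 3 variables known is immediately solvable for
% the third; re-run the count after each solve to propagate the results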