I would like to solve a linear system of the form A*X = B', where B' is the transpose of B. A is a square N-by-N matrix and B is N-by-M. In LAPACK/LAPACKE, the function LAPACKE_dgesv is used to solve systems of the form A*X = B, where B is treated as a set of right-hand-side vectors. Is it possible to solve a system of the form A*X = B' without having to create a copy Z = B' by re-ordering the values of B and then solving A*X = Z?
To the very best of my knowledge, LAPACK offers no such functionality. You have to perform the transpose of B outside the calls to LAPACK.
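For what it's worth, here is a minimal C sketch of that approach: transpose B into a scratch array Z, then make a single LAPACKE_dgesv call. The concrete sizes are made up, and B is taken as M-by-N here so that Z = B' is N-by-M and A*X = Z is well posed.

#include <stdio.h>
#include <lapacke.h>

int main(void)
{
    const lapack_int n = 3, m = 2;

    double A[] = { 4, 1, 0,
                   1, 3, 1,
                   0, 1, 2 };   /* row-major, n x n */
    double B[] = { 1, 2, 3,
                   4, 5, 6 };   /* row-major, m x n */

    /* Explicit out-of-place transpose: Z (n x m) = B'. */
    double Z[3 * 2];
    for (lapack_int i = 0; i < n; ++i)
        for (lapack_int j = 0; j < m; ++j)
            Z[i * m + j] = B[j * n + i];

    /* Solve A*X = Z; the solution X overwrites Z. */
    lapack_int ipiv[3];
    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, n, m, A, n, ipiv, Z, m);
    if (info != 0) {
        fprintf(stderr, "dgesv failed: info = %d\n", (int)info);
        return 1;
    }

    for (lapack_int i = 0; i < n; ++i) {
        for (lapack_int j = 0; j < m; ++j)
            printf("%8.4f ", Z[i * m + j]);
        printf("\n");
    }
    return 0;
}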
I have a symbolic matrix that depends on a complex parameter q. Let the matrix be A(q) and b a column vector. I would like to simultaneously solve the equations
A*b==0; b'*b==1;
using the solve command (preferably its numerical variant, vpasolve). The unknowns are both b and q. I am not quite sure about the syntax for doing this and would appreciate any help. My main difficulty is that the equations are partially given in matrix form and the unknown is a vector.
Do I have to resort to fsolve to achieve this? Or is there a way without defining a function?
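A minimal sketch of one way to set this up with the Symbolic Math Toolbox, since vpasolve accepts vectors of equations and of unknowns; the 2-by-2 size and the concrete A(q) below are made-up placeholders:

% Solve A(q)*b == 0 together with b.'*b == 1.
syms q
b = sym('b', [2 1]);                    % unknown column vector [b1; b2]
A = [q 1; 1 q];                         % placeholder for the actual A(q)

eqns = [A*b == zeros(2,1); b.'*b == 1]; % matrix equation expands element-wise
S = vpasolve(eqns, [b; q]);             % unknowns: the entries of b, plus q

disp([S.b1; S.b2; S.q])                 % solution fields are named after the variables

If b'*b == 1 is meant as the Hermitian inner product (b complex), replace b.'*b with b'*b.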
I came across an equation that I want to solve in this paper (I'm not sure whether it can be read by the public). It is about minimizing equation 9 (see the attached file) subject to the constraint in equation 10.
n is a set of 3-dimensional vectors (so n can be expressed as a P-by-3 array), and c is a vector of length K (a 1-by-K array). I' (a P-by-L array) and l (a 3-by-L array) are both known. I need to find the set of n and c.
The paper stated that:
'We used MATLAB implementation of the trust region reflective quadratic programming for optimization.'
I don't know how this can be done. I am not sure whether it refers to quadprog or simply to a direct use of fmincon. In either case, I have no idea how to write the objective and the constraints in the form the function call expects. It would be great if someone could show me how to rewrite the equation and use 'trust region reflective quadratic programming' for the optimization, or suggest other efficient ways to solve that equation.
Thanks.
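Without access to equations 9 and 10, only the mechanics can be sketched. Below is a minimal, generic quadprog call with the trust-region-reflective algorithm selected; H, f, lb, and ub are made-up placeholders you would derive from the paper's equations. Note that trust-region-reflective quadprog handles bound constraints or linear equality constraints, but not both at once.

% Generic sketch: minimize 0.5*x'*H*x + f'*x subject to lb <= x <= ub.
H  = [2 0; 0 2];                 % made-up positive-definite quadratic term
f  = [-2; -5];                   % made-up linear term
lb = [0; 0];                     % made-up bounds
ub = [1; 1];

opts = optimoptions('quadprog', 'Algorithm', 'trust-region-reflective');
x0 = [0.5; 0.5];                 % feasible start point
x = quadprog(H, f, [], [], [], [], lb, ub, x0, opts);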
I have a linear program of the form min(f*x) s.t. A1*x < d1; A2*x < d2. The form with one constraint matrix is implemented in MATLAB's linprog command. What command can I use to solve a linear program with two constraints?
I could of course create a block-diagonal matrix and double the size of the variable x, but if there is a more efficient way I would like to use it, because the matrices are quite large.
Possibly I don't understand the question right, but can't you combine the matrices A1 and A2 as A = [A1; A2]?
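A minimal sketch of that with made-up data; stacking the blocks expresses A1*x <= d1 and A2*x <= d2 in a single linprog call:

f  = [1; 2];                     % made-up cost vector
A1 = [1 1];  d1 = 3;             % made-up constraint blocks
A2 = [-1 2]; d2 = 2;

x = linprog(f, [A1; A2], [d1; d2]);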
You may be interested in the Dantzig-Wolfe decomposition algorithm for solving linear programs. It takes advantage of this block-diagonal structure. However, I don't think there is an out-of-the-box implementation of it in commercial software.
I have a linear system of equations AX = B to solve in MATLAB. What I know is that A is sparse, positive definite and symmetric. I know the command x = A \ b works, yet I am not sure MATLAB takes full advantage of A's good properties so as to maximize efficiency. Is there any way to specify the algorithm used to solve it, for example the Conjugate Gradient algorithm, in MATLAB?
If your matrix is sparse, you can use MATLAB's iterative solvers, for example bicg for a biconjugate gradients method. Since your A is symmetric positive definite, pcg (preconditioned conjugate gradients) is the natural choice.
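A minimal sketch; gallery('poisson') just stands in for your sparse SPD matrix, and the tolerance and iteration limit are made up:

A = gallery('poisson', 30);          % sparse, symmetric positive definite
b = ones(size(A,1), 1);

L = ichol(A);                        % incomplete Cholesky preconditioner
[x, flag, relres, iter] = pcg(A, b, 1e-8, 500, L, L');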
MATLAB's mldivide operator does indeed take advantage of properties of A. See the documentation for details - expand the "Algorithm" section.
I am working in MuPAD in order to have a symbolic tool for finding the solution of an equation, but I am working with matrices.
Consider this:
// Return the 2x2 block decomposition of A (dimensions assumed even).
blck := A -> matrix([
  [A[1..linalg::matdim(A)[1]/2, 1..linalg::matdim(A)[2]/2],
   A[1..linalg::matdim(A)[1]/2, linalg::matdim(A)[2]/2+1..linalg::matdim(A)[2]]],
  [A[linalg::matdim(A)[1]/2+1..linalg::matdim(A)[1], 1..linalg::matdim(A)[2]/2],
   A[linalg::matdim(A)[1]/2+1..linalg::matdim(A)[1], linalg::matdim(A)[2]/2+1..linalg::matdim(A)[2]]]
])
This function enables me to have a block representation of a matrix, and it works. Now consider this function:
myfun := A -> matrix([
  [blck(A)[1,1]*blck(A)[2,2]*blck(A)[2,1], blck(A)[1,1]],
  [blck(A)[1,1], blck(A)[1,1]]
])
This manipulates a matrix a little and returns a matrix whose components are combinations of the blocks. The problem is that, since I cannot tell MuPAD that A and its components are matrices and not reals, MuPAD shows me the matrix products in a different order.
For example, consider
myfun(matrix([[A11,A12],[A21,A22]]))
The first component of the returned matrix, element (1,1), is A11*A21*A22, which is incorrect since A11, A12, A21, A22 are matrices: the factors have been commuted.
How can I tell MuPAD that A11, A12, A21 and A22 are matrices, so that MuPAD will expand the products correctly?
You can have matrices inside matrices in MuPAD, as long as you explicitly put them in there. Just telling the system to treat A1*A2 as non-commutative is more difficult and not well supported. You could go full-blown and create your own datatype, implementing the arithmetic accordingly, but that's not necessarily easy if you still want simplifications to happen.
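A minimal sketch of the "explicitly put them in there" route, with made-up entry names; products of genuine matrix objects are carried out as real (non-commutative) matrix products, so the block order is respected, although the result is fully expanded:

// Build the blocks as explicit 2x2 symbolic matrices.
A11 := matrix([[a11, a12], [a21, a22]]):
A21 := matrix([[c11, c12], [c21, c22]]):
A22 := matrix([[d11, d12], [d21, d22]]):

// The factor order of blck(A)[1,1]*blck(A)[2,2]*blck(A)[2,1] matters here:
A11 * A22 * A21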