Say I have the following matrix equation
X - B*X*C = D
Where,
X: 3 by 5, to be solved;
B: 3 by 3;
C: 5 by 5;
D: 3 by 5;
Is there any convenient method that I can use for solving the system? fsolve?
If B or C is invertible, you can check the Matrix Cookbook; section 5.1.10 deals with similar settings:
X * inv(C) - B * X = D * inv(C)
can be translated to
x = inv( kron( eye, -B ) + kron( inv(C)', eye ) ) * d
where x and d are the column-stacked (vectorized) versions of X and D * inv(C), respectively.
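As a minimal sketch (assuming C is invertible, with small random matrices of the sizes given in the question), the vectorized system can be built and checked like this:
n = 3; m = 5;
B = rand(n); C = rand(m); D = rand(n, m);
Ci = inv(C);
M = kron(eye(m), -B) + kron(Ci', eye(n));  % coefficient matrix of the vectorized system
d = reshape(D * Ci, [], 1);                % vec(D * inv(C))
x = M \ d;                                 % solve the (n*m)-by-(n*m) linear system
X = reshape(x, n, m);
norm(X - B*X*C - D)                        % residual of the original equation, close to zero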
You can use MATLAB's dlyap function:
X = dlyap(B,C,D)
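Note that dlyap with three arguments solves a Sylvester equation of the form A*X*B - X + C = 0, so dlyap(B,C,D) targets B*X*C - X + D = 0, which is exactly X - B*X*C = D rearranged.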
I am working on solving a system of boolean equations. Specifically, I am trying to find the values of bit vectors S1...S3 and/or C1...C3 such that their XOR results are given in the table below (in hexadecimal values). Any ideas?
So we have six 32-digit sequences we want to determine, for a total of 192 unknown hexadecimal digits. We can focus on just the first digits of each to illustrate how we could try to solve for the others.
Let's call the first digits of S1, S2 and S3 a, b and c, respectively. Let's also call the first digits of C1, C2 and C3 x, y and z, respectively. Then we have the following equations, where * stands for XOR:
a * x = E    b * x = A    c * x = 7
a * y = 2    b * y = 6    c * y = B
a * z = 1    b * z = 5    c * z = 8
Let us note some properties of XOR. First, it is associative: A XOR (B XOR C) always equals (A XOR B) XOR C. Second, it is commutative: A XOR B always equals B XOR A. Also, A XOR A is always the "false" vector (0) for any A, and A XOR 0 is always A. These facts let us do algebra: solve for one variable, substitute it into the remaining equations, and simplify. We can solve for c first, substitute it in, and simplify to get:
a * x = E    b * x = A    z * x = F
a * y = 2    b * y = 6    z * y = 3
a * z = 1    b * z = 5    c = 8 * z
We can do z next:
a * x = E    b * x = A    y * x = C
a * y = 2    b * y = 6    z = 3 * y
a * y = 2    b * y = 6    c = 8 * z
We found that a couple of our equations are redundant. That was to be expected, as long as the system is consistent, since we started with nine equations in six unknowns. Let's continue with y:
a * x = E    b * x = A    y = C * x
a * x = E    b * x = A    z = 3 * y
a * x = E    b * x = A    c = 8 * z
Now we find even more unhelpful equations, and we are in a bit of trouble: we only have five distinct equalities and six unknowns. This means that our system is underspecified and will have multiple solutions. We can pick one or list them all. Let us continue with x:
a * b = 4    x = b * A    y = C * x
a * b = 4    x = b * A    z = 3 * y
a * b = 4    x = b * A    c = 8 * z
What this means is that we have one solution for every solution of the equation a * b = 4. How many solutions are there? Well, there must be 16, one for every value of a. Here is a table:
a: 0 1 2 3 4 5 6 7 8 9 A B C D E F
b: 4 5 6 7 0 1 2 3 C D E F 8 9 A B
We can fill in the rest of the table using the other equations we determined. Each row will then be a solution.
You can continue this process for each of the 32 positions in the hexadecimal sequences. For each position, one of the following will happen:
there are multiple solutions, as we found for the first position;
there is a unique solution, if you end up with six independent equations;
there are no solutions, if one of the equations becomes a contradiction upon substitution (e.g., A = 3).
If you find a position that has no solutions, then the whole system has no solutions. Otherwise, the total number of solutions is the product, over all 32 positions, of the number of solutions at each position.
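As a small illustration, here is a sketch in MATLAB/Octave that uses the equations derived above for the first position: enumerate a, back-substitute, and check all nine original equations with bitxor.
% Right-hand sides: entry (i,j) is (i-th of a,b,c) XOR (j-th of x,y,z)
rhs = [hex2dec('E') hex2dec('2') hex2dec('1');
       hex2dec('A') hex2dec('6') hex2dec('5');
       hex2dec('7') hex2dec('B') hex2dec('8')];
sols = zeros(0, 6);
for a = 0:15
    b = bitxor(a, 4);                 % from a XOR b = 4
    x = bitxor(b, hex2dec('A'));      % from x = b XOR A
    y = bitxor(hex2dec('C'), x);      % from y = C XOR x
    z = bitxor(3, y);                 % from z = 3 XOR y
    c = bitxor(8, z);                 % from c = 8 XOR z
    S = [a b c]; T = [x y z];
    M = zeros(3, 3);
    for i = 1:3
        for j = 1:3
            M(i, j) = bitxor(S(i), T(j));
        end
    end
    if isequal(M, rhs)
        sols(end+1, :) = [a b c x y z];
    end
end
sols    % 16 rows, one per solution [a b c x y z] for the first position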
I have three variables: A, B, C.
I want to write a set of linear equations such that
X = 1 if at least 2 of A, B, C are ones.
X = 0 if only one of A, B, C is one.
X = 0 if all of them are zero.
A, B and C are binary (0, 1).
Kindly suggest a linear equation for this.
Thank you.
To implement
A+B+C ≥ 2 => X=1
A+B+C ≤ 1 => X=0
we can write simply:
2X ≤ A+B+C ≤ 2X+1
A,B,C,X ∈ {0,1}
In practice, you may have to formulate the sandwich equation as two inequalities.
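That is, 2X ≤ A + B + C and A + B + C ≤ 2X + 1 (equivalently, A + B + C − 2X ≥ 0 and A + B + C − 2X ≤ 1). As a quick sanity check, here is a small brute-force sketch in MATLAB/Octave that enumerates all eight combinations of A, B, C and confirms the only feasible X is the intended majority value:
ok = true;
for A = 0:1
    for B = 0:1
        for C = 0:1
            s = A + B + C;
            % X values in {0,1} satisfying 2X <= s <= 2X+1
            feasible = find(arrayfun(@(X) 2*X <= s && s <= 2*X + 1, 0:1)) - 1;
            wanted   = double(s >= 2);
            ok = ok && isequal(feasible, wanted);
        end
    end
end
ok   % prints 1 (true): the formulation forces X to the majority value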
X >= A + B - 1
X >= B + C - 1
X >= A + C - 1
X <= A + B
X <= B + C
X <= A + C
As mentioned in a comment, the A = B = C = 0 case needs no extra handling: the upper bounds already force X = 0 there, as required (e.g., X ≤ A + B = 0).
Question: How to use "fmincon" to solve the following minimization matrix problem?
I am trying to find the f such that
a * ( b - ( inv(a) * inv(inv(a) + transpose(c)*inv(f)*c) * (inv(a)*d + transpose(c) * inv(f) * e) ) )^2
is minimized subject to:
f > 0
++++Variables:
a is (8x8) matrix which is known.
b is (8x1) column vector which is known.
c is a (1x8) row vector which is known.
d is an (8x1) column vector which is known.
e is (1x1) scalar which is known.
and
f is a scalar and is unknown.
Thanks to @m7913d, I solved the problem via lsqnonlin:
clc;
clear all;
% random inputs a, b, c, d and e
a = rand(8,8)'*rand(8,8);
b = 2*rand(8,1) - 1;
c = 2*rand(1,8) - 1;
d = 2*rand(8,1) - 1;
e = 2*rand(1,1) - 1;
% residual vector whose sum of squares is minimized
fun = @(f) a * ( b - ( inv(a) * inv(inv(a)+c'*inv(f)*c) * (inv(a)*d+c' * inv(f) * e) ) );
% lsqnonlin(fun, x0, lb, ub): start at 0.1, keep f >= 0
f = lsqnonlin(fun,0.1,0,+inf)
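Since the question asked about fmincon specifically, the same problem can also be posed that way. Here is a minimal sketch reusing fun from above; fmincon needs a scalar objective, so we sum the squared residual:
obj = @(f) sum(fun(f).^2);                     % scalar objective for fmincon
% fmincon(obj, x0, A, b, Aeq, beq, lb, ub): no linear constraints, bound f >= 0
f2 = fmincon(obj, 0.1, [], [], [], [], 0, inf)
Strictly enforcing f > 0 may require a small positive lower bound instead of 0.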
I know that MATLAB works better when most or everything is vectorized. I have two sets of vectors, X and T. For every vector x in X I want to compute the squared distances || x - t_k ||^2 to every center t_k in T, because I want to compute
f^*(x) = sum_k c_k exp( - || x - t_k ||^2 ),
which is easy to express with linear algebra operations (a dot product) for a single x, as in the code below. I am hoping I can speed this up by operating on all the vectors at once instead of computing each f(x) in a for loop; ideally the whole thing would be vectorized so I can compute f^*(x) for every x in X in one go.
I've been thinking about this for some time now, but there doesn't seem to be a nice way for a function to take two sets of vectors and compute the norm between each pair of them without me writing the for loop explicitly.
i.e. I've implemented the trivial code:
function [ f ] = f_start( x, c, t )
% Computes f^*(x) = sum_k c_k exp( - || x - t_k ||^2 )
% Inputs:
%   x = data point (D x 1)
%   c = weights (K x 1)
%   t = centers (D x K)
% Outputs:
%   f = f^*(x) = sum_k c_k exp( - || x - t_k ||^2 )
[~, K] = size(t);
f = 0;
for k = 1:K
    c_k = c(k);
    t_k = t(:, k);
    norm_squared = norm(x - t_k, 2)^2;
    f = f + c_k * exp( -1 * norm_squared );
end
end
but I was hoping there was a less naive way to do this!
I think you want pdist2 (Statistics Toolbox):
X = [1 2 3;
4 5 6];
T = [1 2 3;
1 2 4;
7 8 9];
result = pdist2(X,T);
gives
result =
0 1.0000 10.3923
5.1962 4.6904 5.1962
Equivalently, if you don't have that toolbox, use bsxfun as follows:
result = squeeze(sqrt(sum(bsxfun(@minus, X, permute(T, [3 2 1])).^2, 2)));
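To get the original f^*(x) from these distances, here is a short sketch assuming X is N-by-D (one point per row), T is K-by-D (one center per row) and c is K-by-1; note this is the transpose of the layout used in the question's function:
D2 = pdist2(X, T).^2;   % N-by-K matrix of squared distances || x_n - t_k ||^2
f  = exp(-D2) * c;      % N-by-1 vector, f(n) = sum_k c_k exp( - || x_n - t_k ||^2 )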
Another method just for kicks
X = [1 2 3;
4 5 6].';
T = [1 2 3;
1 2 4;
7 8 9].';
tT = repmat(T,[1,size(X,2)]);
tX = reshape(repmat(X,[size(T,2),1]),size(tT));
res=reshape(sqrt(sum((tT-tX).^2)).',[size(T,2),size(X,2)]).'
The formula I have to translate to Octave/Matlab goes something like this:
\sum (v_i - m) (v_i - m)^T
I have a matrix, and I need to take each row, subtract m from it and then multiply it with its own transpose. I wrote the inner part as a function:
function w = str(v, m)
y = v - m
w = y * transpose(y)
end
My matrix is like this
xx = [1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5]
Now I have no idea how to apply this function to each row in a matrix and then sum them up to a new matrix. Maybe someone can help me here.
EDIT: The result is not the dot product. I'm looking for v * v^T, which has a matrix as result!
Probably you need this (with each row of A being one of your vectors v_i):
X = bsxfun( @minus, A, m );   % subtract m from every row
Y = X' * X;                   % equals sum_i (v_i - m) * (v_i - m)'
Suppose the matrix is A, then the solution is
total = sum(sum((A-m).*(A-m),2));
A.*A is an element-wise multiplication, hence sum(A.*A,2) returns a column vector, with each element being the dot product of the corresponding row of A with itself.
If m is a vector, then it is slightly more complicated.
[p,~]=size(A);
total = sum(sum((A-repmat(m,p,1)).*(A-repmat(m,p,1)),2));
Cheers.
In the end, I wrote this:
function w = str(v, m)
    y = v - m;
    w = y' * y;
end
y = zeros(5,5);
for i = 1:12
    y = y + str(A(i,:), m);
end
Surely not the most elegant way to do this, but it seems to work.
You can subtract the mean using bsxfun
>> v_m = bsxfun( @minus, v, m );
For the sum of the outer products of all the mean-subtracted vectors you can use bsxfun again (this assumes each vector is a column of v_m):
>> op = bsxfun( @times, permute( v_m, [3 1 2]), permute( v_m, [1 3 2] ) );
>> op = sum( op, 3 );
There are two ways to solve this issue:
Assume A is your matrix:
sum(diag(A' * A))
will do the job. However, it is slightly more efficient with the following:
sum((A .* A)(:))
Note that indexing the result of an expression, as in (A .* A)(:), works in Octave but not in MATLAB; there you can write sum(A(:).^2) instead.
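As a quick sanity check (a sketch with a small random matrix), both expressions give the sum of the squared entries of A:
A  = rand(4, 3);
t1 = sum(diag(A' * A));     % trace of A' * A
t2 = sum(sum(A .* A));      % MATLAB-safe equivalent of sum((A .* A)(:))
abs(t1 - t2) < 1e-12        % should print 1 (true)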