System of Boolean Equations

I am working on solving a system of boolean equations. Specifically, I am trying to find the values of bit vectors S1...S3 and/or C1...C3 such that their XOR results are given in the table below (in hexadecimal values). Any ideas?

So we have six 32-digit sequences we want to determine, for a total of 192 unknown hexadecimal digits. Since XOR acts on each hexadecimal digit independently, we can focus on just the first digit of each sequence to illustrate the approach; the remaining positions are solved the same way.
Let's call the first digits of S1, S2 and S3 a, b and c, respectively. Let's also call the first digits of C1, C2 and C3 x, y and z, respectively. Then we have the following equations, where * stands for XOR:
a * x = E b * x = A c * x = 7
a * y = 2 b * y = 6 c * y = B
a * z = 1 b * z = 5 c * z = 8
Let us note some properties of XOR. First, it is associative: A XOR (B XOR C) is always equal to (A XOR B) XOR C. Second, it is commutative: A XOR B is always equal to B XOR A. Also, A XOR A is always the "false" vector (0) for any A, and A XOR 0 is always A. These facts let us do algebra: we can XOR both sides of an equation by the same value to isolate a variable and then substitute the result elsewhere. We can solve for c first: XORing both sides of c * z = 8 by z gives c = 8 * z, and substituting this into the other two equations involving c and simplifying, we get:
a * x = E b * x = A z * x = F
a * y = 2 b * y = 6 z * y = 3
a * z = 1 b * z = 5 c = 8 * z
We can do z next:
a * x = E b * x = A y * x = C
a * y = 2 b * y = 6 z = 3 * y
a * y = 2 b * y = 6 c = 8 * z
We found that a couple of our equations are redundant. That was to be expected, as long as the system is consistent, since we started with nine equations in six unknowns. Let's continue with y:
a * x = E b * x = A y = C * x
a * x = E b * x = A z = 3 * y
a * x = E b * x = A c = 8 * z
We find now that even more of our equations are unhelpful. We are left with only five distinct equalities in six unknowns, which means our system is underdetermined and will have multiple solutions. We can pick one or list them all. Let us continue with x:
a * b = 4 x = b * A y = C * x
a * b = 4 x = b * A z = 3 * y
a * b = 4 x = b * A c = 8 * z
What this means is that we have one solution for every solution of the equation a * b = 4. How many solutions are there? Well, there must be 16, one for every value of a. Here is a table:
a: 0 1 2 3 4 5 6 7 8 9 A B C D E F
b: 4 5 6 7 0 1 2 3 C D E F 8 9 A B
We can fill in the rest of the table using the other equations we determined. Each row will then be a solution.
You can continue this process for each of the 32 places in the hexadecimal sequences. For each position, one of three things will happen:
there are multiple solutions, as we found for the first position
there is a unique solution, if you end up with six useful equations
there are no solutions, if you find one of the equations becomes a contradiction upon substitution (e.g., A = 3).
If you find a position that has no solutions, then the whole system has no solutions. Otherwise, the total number of solutions is the product of the numbers of solutions found for each of the 32 positions.
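A minimal brute-force sketch of this procedure in MATLAB (my own illustration: the matrix T holds the nine target digits for one position, laid out like the equations above with rows for x, y, z and columns for a, b, c, and the variable names are made up):
% Brute-force search for one hexadecimal position.
% T(i,j) is the target for (digit of S_j) XOR (digit of C_i).
T = [14 10  7;   % E A 7  (row for x)
      2  6 11;   % 2 6 B  (row for y)
      1  5  8];  % 1 5 8  (row for z)
solutions = [];                      % each row will be [a b c x y z]
for x = 0:15                         % try every value of the free digit x
    a = bitxor(T(1,1), x);           % a = E XOR x
    b = bitxor(T(1,2), x);           % b = A XOR x
    c = bitxor(T(1,3), x);           % c = 7 XOR x
    y = bitxor(T(2,1), a);           % y = 2 XOR a
    z = bitxor(T(3,1), a);           % z = 1 XOR a
    ok = bitxor(b, y) == T(2,2) && bitxor(c, y) == T(2,3) && ...
         bitxor(b, z) == T(3,2) && bitxor(c, z) == T(3,3);
    if ok                            % keep only consistent assignments
        solutions(end+1, :) = [a b c x y z]; %#ok<AGROW>
    end
end
for k = 1:size(solutions, 1)
    fprintf('%s\n', dec2hex(solutions(k, :)).');  % one solution per line, in hex
end
For this position it prints 16 lines, extending the table above to all six digits; a position whose equations are inconsistent prints nothing.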

Related

How to Compute f(x) for a=1 and various values of the parameter b on the interval 0<x<3

I tried the code below with no luck
f = 0;
a = 1;
b = [1 2 3.5];
for x = 0:3
f = (a * b * x).^(b-1)*exp(-a*x.^b);
end
disp (f);
Assuming the domain is x and the comparison parameter is b, we can loop through the values of b to create three distinct vectors of function values to plot. Here the value of b is swapped on each iteration of the for-loop. The result, f, ends up with 3 rows and 4 columns, where the columns correspond to the x-values (the domain) and the rows correspond to the values of the parameter b.
x = (0:3);
a = 1;
B = [1 2 3.5];
for Parameter_Index = 1:length(B)
    b = B(Parameter_Index);
    f(Parameter_Index,:) = (a.*b.*x).^(b-1).*exp(-a.*x.^b);
end
plot(x,f(1,:),x,f(2,:),x,f(3,:));
xlabel("x"); ylabel("f(x)");
legend("B = " + num2str(B(1)),"B = " + num2str(B(2)),"B = " + num2str(B(3)));
Ran using MATLAB R2019b

Least Squares Method to fit parameters

I am asked to use the least squares method to fit the parameters α and β in y = α*exp(-β*x),
given the points:
x = [1 2 3 4 5 6 7]
y = [9 6 4 2 4 6 9]
I am having trouble determining what my matrix should look like. I know I should take the natural logarithm of both sides of the function in order to get rid of the exponential, and also obtain the natural logarithm of the y-values, which are:
ln_y = [2.19 1.79 1.39 0.69 1.39 1.79 2.19]
However what should my matrix look like, because what I am left with is
ln(y) = ln(α) - β*x?
So the -β column consists of ones and the x column will be my x values, but what should the α column contain?
This is what I assume I should get:
A = [1 1 1 1 1 1 1; 1 2 3 4 5 6 7]
Am I thinking correctly?
The first thing we can do is to take the natural logarithm ln (log in Matlab) of both sides of the equation:
y = α * e^(-β * x)
becomes:
ln(y) = ln(α * e^(-β * x))
// Law of logarithms
ln(x * y) = ln(x) + ln(y)
// thus:
ln(y) = ln(α) + ln(e^(-β * x))
Simplifying:
ln(y) = -β * x + ln(α)
Now we have ln(y) as a linear function of x and the problem reduces to finding the linear regression in the least square sense. Let's define lny = log(y), and A = ln(α) and we can rewrite the problem as
lny = -β * x + A
Where
x = [1 2 3 4 5 6 7]
lny = [2.19 1.79 1.39 0.69 1.39 1.79 2.19]
For each x_i in x we can write out lny as follows (one equation per data point):
lny(x1) = A - β * x1
lny(x2) = A - β * x2
...
lny(xn) = A - β * xn
In matrix form
LNY = X * [A β]'
Or,
X * [A β]' = LNY
// let Coefs = [A β]'
Coefs = X \ LNY // the least-squares solution, since X is 7-by-2 and has no ordinary inverse
In Matlab
x = [1 2 3 4 5 6 7];
y = [9 6 4 2 4 6 9];
lny = log(y);
X = [ones(length(y), 1), -x']; % design matrix
coefs = X\lny'
% A = coefs(1) and β = coefs(2)
% ln(α) = A thus α = exp(A)
alpha = exp(coefs(1));
beta = coefs(2)
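A quick visual check of that fit (a sketch; xf is just a finer grid I introduce for plotting):
xf = linspace(min(x), max(x), 100);
figure; hold on; box on
plot(x, y, 'ks')                        % the data points
plot(xf, alpha*exp(-beta*xf), 'r--')    % the fitted curve y = alpha*exp(-beta*x)
hold off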
You almost had it. The second row should be -x.
x = [1 2 3 4 5 6 7]
y = [9 6 4 2 4 6 9]
logy = log(y)
n = length(x);
A = [ones(1,n); -x]
c = logy/A; %Solve for coefficients
alpha = exp(c(1))
beta = c(2);
In this example, deriving the least squares estimator is a good idea, and the other answers take this approach.
There is also a quick and dirty approach that is flexible and handy:
just do it numerically. You can use fminsearch to get the job done.
% MATLAB R2019a
x = [1 2 3 4 5 6 7];
y = [9 6 4 2 4 6 9];
% Create anonymous function (your supposed model to fit)
fh = @(params) params(1).*exp(-params(2).*x);
% Create anonymous function for Least Squares Error with single input
SSEh = @(params) sum((fh(params)-y).^2); % Sum of Squared Error (SSE)
p0 = [1 0.5]; % Initial guess for [alpha, beta]
[p, SSE] = fminsearch(SSEh,p0);
alpha = p(1); % 5.7143
beta = p(2); % 1.2366e-08 (AKA zero)
It is always a good idea to plot the results as a sanity check (I screw up often and this saves me time and time again).
yhath = @(params,xval) params(1).*exp(-params(2).*xval);
Xrng = min(x)-1:.2:max(x)+1;
figure, hold on, box on
plot(Xrng,yhath(p,Xrng),'r--','DisplayName','Fit')
plot(x,y,'ks','DisplayName','Data')
legend('show')
A Note on Nonlinearity:
This works fine with linear models due to convexity. If your error function is nonlinear but convex, such as the Sum of Squared Error (SSE), then this also returns the global optimum.
Note that a non-convex error function would require multiple start points to try to capture the different local optima, and even taking the best of those carries no guarantee of global optimality (a minimal sketch follows below). Adding constraints to the solution would require penalty functions or switching to a constrained solver, since fminsearch solves the unconstrained problem (unless you penalize it properly).
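As a minimal multistart sketch under that caveat, reusing SSEh from above (the number of starts and the spread of the random initial guesses are arbitrary choices of mine):
nStarts = 20;
bestSSE = Inf;
pBest = [];
for k = 1:nStarts
    p0k = rand(1,2).*[10 5];             % random initial guess for [alpha, beta]
    [pk, SSEk] = fminsearch(SSEh, p0k);  % local, unconstrained search
    if SSEk < bestSSE                    % keep the best result seen so far
        bestSSE = SSEk;
        pBest = pk;
    end
end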
Easy to Modify:
It is easy to modify the model and the error function. For example, if you wanted to minimize the sum of the absolute error instead, it is straightforward using abs.
% Create anonymous function for Least Absolute Error with single input
SAEh = @(params) sum(abs(fh(params)-y)); % Sum of Absolute Error
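The call itself is unchanged; only the objective handle differs (sketch):
[pAbs, SAE] = fminsearch(SAEh, p0);  % fit by minimizing the sum of absolute error instead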

Dimensions of arrays do not agree

I have to solve this matrix equation in MATLAB
(A + p(1)*E) * V(1) = B, and find V(1)
B dimension is 280 x 4
A dimension is 280 x 280
E dimension is 280 x 280
p dimension is 15 x 1
I have tried this
L=inv((A + p(1)*E));
V(1) = B*L;
but I get this error
Error using ==> mtimes
Inner matrix dimensions must agree.
Do you know what goes wrong, or another way to solve it?
Thanks in advance
As the error says, you can only multiply two matrices whose inner dimensions agree, e.g.:
Q(l x m) * P(m x n) = R(l x n)
So when you try to multiply
B(280 x 4) * L(280 x 280)
The error comes up.
This is because the algebra is also not right; the product should be
V(280 x 4) = L(280 x 280) * B(280 x 4)
since the product of matrices is not commutative. The correct algebra here is
(A + p(1)*E) * V = B
V = (A + p(1)*E)^-1 * B = L * B
with L = inv(A + p(1)*E) as computed in your code.
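A sketch of the corrected MATLAB, using the backslash operator rather than an explicit inverse (the cell array Vall is only one possible way to store a solution for every entry of p, in case that is what V(1) was meant to index):
V = (A + p(1)*E) \ B;           % solves (A + p(1)*E)*V = B, so V is 280 x 4
Vall = cell(length(p), 1);      % optionally, one solution per value of p
for k = 1:length(p)
    Vall{k} = (A + p(k)*E) \ B;
end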

Inverse of a spark RowMatrix

I am trying to invert a Spark RowMatrix. The function I am using is below.
def computeInverse(matrix: RowMatrix): BlockMatrix = {
  val numCoefficients = matrix.numCols.toInt
  val svd = matrix.computeSVD(numCoefficients, computeU = true)
  val indexed_U = new IndexedRowMatrix(svd.U.rows.zipWithIndex.map(r => new IndexedRow(r._2, r._1)))
  val invS = DenseMatrix.diag(new DenseVector(svd.s.toArray.map(x => if(x == 0) 0 else math.pow(x,-1))))
  val V_inv = svd.V.multiply(invS)
  val inverse = indexed_U.multiply(V_inv.transpose)
  inverse.toBlockMatrix.transpose
}
The logic I am implementing uses the SVD. An outline of the process:
U, Σ, V = svd(A)
A = U * Σ * V.transpose
A.inverse = (U * Σ * V.transpose).inverse
= (V.transpose).inverse * Σ.inverse * U.inverse
Now U and V are orthogonal matrices. Therefore,
M.transpose = M.inverse, i.e. M * M.transpose = I (the identity)
Applying the above,
A.inverse = V * Σ.inverse * U.transpose
Let V * Σ.inverse be X
A.inverse = X * U.transpose
Now, A * B = ((A * B).transpose).transpose
= (B.transpose * A.transpose).transpose
Applying the same here, so that the distributed row matrix U stays on the left and is multiplied by a local matrix, rather than having to be transposed itself:
A.inverse = X * U.transpose
= (U.transpose.transpose * X.transpose).transpose
= (U * X.transpose).transpose
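The algebra itself can be sanity-checked on a small dense matrix, for example in MATLAB (the example matrix is made up and invertible; this checks the math, not the Spark code):
A = [1 2 3; 4 5 6; 7 8 10];   % made-up invertible example
[U, S, V] = svd(A);
invS = diag(1 ./ diag(S));    % Sigma^-1 (all singular values are nonzero here)
Ainv = V * invS * U';         % A^-1 = V * Sigma^-1 * U'
X = V * invS;                 % the "X" from the derivation above
norm(Ainv - inv(A))           % ~0
norm((U * X')' - Ainv)        % the transposed form used above, also ~0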
The problem is with the input row matrix. For example
1, 2, 3
4, 5, 6
7, 8, 9
10,11,12
the inverse from the above code snippet and the one computed with Python numpy are different. I am unable to find out why that is. Is it because of some underlying assumption made during the SVD calculation? Any help will be greatly appreciated. Thanks.
The above code works properly. The reason I was getting a different result was that I built the RowMatrix from an RDD[Vector]. In Spark, the values are laid out column-wise to form the matrix, whereas numpy converts an array to a matrix row-wise:
Array(1,2,3,4,5,6,7,8,9)
In Spark
1 4 7
2 5 8
3 6 9
In python, it is interpreted as
1 2 3
4 5 6
7 8 9
So, the test case was failing :|
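Incidentally, the column-wise layout shown for Spark is the same convention MATLAB uses, so the difference can be seen in one line (numpy's reshape fills row-wise by default):
reshape(1:9, 3, 3)   % fills columns first: [1 4 7; 2 5 8; 3 6 9]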

How Can I Solve Matrix Equation in Matlab

Say I have the following matrix equation
X - B*X*C = D
Where,
X: 3 by 5, to be solved;
B: 3 by 3;
C: 5 by 5;
D: 3 by 5;
Is there any convenient method that I can use for solving the system? fsolve?
In case B or C is invertible, you can check the Matrix Cookbook; section 5.1.10 deals with similar settings:
X * inv(C) - B * X = D * inv(C)
Can be translated to
x = inv( kron( eye, -B ) + kron( inv(C)', eye ) ) * d
where x and d are the vector-stacks (vec) of X and D * inv(C), respectively.
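A sketch of that idea in MATLAB, using the vectorization identity vec(B*X*C) = kron(C.', B)*vec(X) directly so that C does not even need to be inverted (the matrices here are random stand-ins with the sizes from the question):
B = rand(3,3); C = rand(5,5); D = rand(3,5);  % stand-in data with the question's sizes
M = eye(3*5) - kron(C.', B);                  % from vec(X) - kron(C.',B)*vec(X) = vec(D)
X = reshape(M \ D(:), 3, 5);                  % solve for vec(X), then reshape back
norm(X - B*X*C - D)                           % residual; should be ~0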
You can use MATLAB's dlyap function:
X = dlyap(B,C,D)
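dlyap ships with the Control System Toolbox; with the stand-in data above, a quick consistency check against the kron-based solution would look like this (sketch):
X2 = dlyap(B, C, D);  % per the answer above, solves X2 - B*X2*C = D
norm(X - X2)          % should be ~0 when both approaches apply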