I've been struggling with boolean simplification in class, so I've taken to practising some more at home. I found a list of questions, but they don't come with answers or worked solutions. This is one I'm stuck on; if you could answer it, clearly showing each step, I'd much appreciate it:
Q=A.B.(~B+C)+B.C+B
I tried looking for a calculator to give me the answer, so I could then work out how to get there, but I'm lost.
(I'm new to this)
Edit: ~B = NOT B
I've never done this, so I'm using this site to help me.
A.B.(B' + C) = A.(B.B' + B.C) = A.(0 + B.C) = A.(B.C)
So the expression is now A.(B.C) + B.C + B.
Not sure about this, but I'm guessing A.(B.C) + (B.C) = (A + 1).(B.C).
Since A + 1 = 1, this equals 1.(B.C) = B.C.
So the expression is now B.C + B.
Finally, B.C + B = (C + 1).B = 1.B = B, so the whole expression simplifies to Q = B.
Let's be lazy and use sympy, a Python library for symbolic computation.
>>> from sympy import *
>>> from sympy.logic import simplify_logic
>>> a, b, c = symbols('a, b, c')
>>> expr = a & b & (~b | c) | b & c | b # A.B.(~B+C)+B.C+B
>>> simplify_logic(expr)
b
There are two ways to go about simplifying such a formula:
Applying simplifications,
Brute force
Let's look at brute force first. The following is a dense truth table (for a better-looking table, see the Wolfram Alpha link at the end), enumerating all possible values of a, b and c, alongside the values of the expression and its sub-expressions. A short Python check follows the table.
a b c -- a & b & (~b | c) | b & c | b = Q
0 0 0 0 0 10 1 0 0 0 0 0 = 0
0 0 1 0 0 10 1 1 0 0 1 0 = 0
0 1 0 0 1 01 0 0 1 0 0 1 = 1
0 1 1 0 1 01 1 1 1 1 1 1 = 1
1 0 0 1 0 10 1 0 0 0 0 0 = 0
1 0 1 1 0 10 1 1 0 0 1 0 = 0
1 1 0 1 1 01 1 0 1 0 0 1 = 1
1 1 1 1 1 01 1 1 1 1 1 1 = 1
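If you want to reproduce the table programmatically, here is a minimal Python sketch (nothing beyond the standard library's itertools assumed):

from itertools import product

# Brute force: evaluate Q = a.b.(~b + c) + b.c + b for all eight assignments
for a, b, c in product([False, True], repeat=3):
    q = (a and b and (not b or c)) or (b and c) or b
    print(int(a), int(b), int(c), '->', int(q))  # the Q column equals b on every row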
You can also think of the expression as a tree, which will depend on the precedence rules (e.g. usually AND binds stronger than OR, see also this question on math.se).
So the expression:
a & b & (~b | c) | b & c | b
is a disjunction of three terms:
a & b & (~b | c)
b & c
b
You can try to reason about the individual terms, knowing that only one has to be true (as this is a disjunction).
The last two can only be true if b is true: the third term is b itself, and b & c requires both b and c to be true. For the first, this is a bit harder to see, but look closely: it is a conjunction (terms joined by AND), so all of its operands must be true for the whole term to be true. Hence a and b must be true; in particular, b must be true.
In summary: for the whole expression to be true, in all three top-level cases b must be true, so the expression is false whenever b is false. Conversely, whenever b is true, the third term (b itself) makes the whole disjunction true. So it simplifies to just b.
Explore more on Wolfram Alpha:
https://www.wolframalpha.com/input/?i=a+%26+b+%26+(~b+%7C+c)+%7C+b+%26+c+%7C+b
A.B.(~B+C) + B.C + B = A.B.~B + A.B.C + B.C + B ; Distribution
= A.B.C + B.C + B ; Because B.~B = 0
= B.C + B ; Because A.B.C <= B.C
= B ; Because B.C <= B
(Here X <= Y means X implies Y; in that case X + Y = Y, which is the absorption law.)
I've got coordinates of 4 points in 2D that form a rectangle and their coordinates after a perspective transformation has been applied.
The perspective transformation is calculated in homogeneous coordinates and defined by a 3x3 matrix M. If the matrix is not known, how can I calculate it from the given points?
The calculation for one point would be:
| M11 M12 M13 | | P1.x | | w*P1'.x |
| M21 M22 M23 | * | P1.y | = | w*P1'.y |
| M31 M32 M33 | | 1 | | w*1 |
To calculate all points simultaneously I write them together in one matrix A and analogously for the transformed points in a matrix B:
| P1.x P2.x P3.x P4.x |
A = | P1.y P2.y P3.y P4.y |
| 1 1 1 1 |
So the equation is M*A=B and this can be solved for M in MATLAB by M = B/A or M = (A'\B')'.
But it's not that easy. I know the coordinates of the points after the transformation, but I don't know the exact B, because of the factor w, which is not necessarily 1 after a homogeneous transformation: in homogeneous coordinates every scalar multiple of a vector is the same point, and I don't know which multiple I'll get.
To take account of these unknown factors I write the equation as M*A=B*W
where W is a diagonal matrix with the factors w1...w4 for every point in B on the diagonal. So A and B are now completely known and I have to solve this equation for M and W.
If I could rearrange the equation into the form x*A=B or A*x=B, where x would be something like M*W, I could solve it, and knowing the solution for M*W might already be enough. However, despite trying every possible rearrangement, I didn't manage to do so. Then it hit me that bundling them into a single unknown (M*W) is not possible anyway, since one is a 3x3 matrix and the other a 4x4 matrix. And here I'm stuck.
Also, M*A=B*W does not have a single solution for M, because every multiple of M is the same transformation. Writing this as a system of linear equations, one could simply fix one of the entries of M to get a single solution. Furthermore, there might be inputs that have no solution for M at all, but let's not worry about that for now.
What I'm actually trying to achieve is some kind of vector graphics editing program where the user can drag the corners of a shape's bounding box to transform it, while internally the transformation matrix is calculated.
And actually I need this in JavaScript, but if I can't even solve this in MATLAB I'm completely stuck.
OpenCV has a neat function that does this called getPerspectiveTransform. The source code for this function is available on github with this description:
/* Calculates coefficients of perspective transformation
* which maps (xi,yi) to (ui,vi), (i=1,2,3,4):
*
* c00*xi + c01*yi + c02
* ui = ---------------------
* c20*xi + c21*yi + c22
*
* c10*xi + c11*yi + c12
* vi = ---------------------
* c20*xi + c21*yi + c22
*
* Coefficients are calculated by solving linear system:
* / x0 y0 1 0 0 0 -x0*u0 -y0*u0 \ /c00\ /u0\
* | x1 y1 1 0 0 0 -x1*u1 -y1*u1 | |c01| |u1|
* | x2 y2 1 0 0 0 -x2*u2 -y2*u2 | |c02| |u2|
* | x3 y3 1 0 0 0 -x3*u3 -y3*u3 |.|c10|=|u3|,
* | 0 0 0 x0 y0 1 -x0*v0 -y0*v0 | |c11| |v0|
* | 0 0 0 x1 y1 1 -x1*v1 -y1*v1 | |c12| |v1|
* | 0 0 0 x2 y2 1 -x2*v2 -y2*v2 | |c20| |v2|
* \ 0 0 0 x3 y3 1 -x3*v3 -y3*v3 / \c21/ \v3/
*
* where:
* cij - matrix coefficients, c22 = 1
*/
This system of equations is smaller as it avoids solving for W and M33 (called c22 by OpenCV). So how does it work? The linear system can be recreated by the following steps:
Start with the equation for one point:
| c00 c01 c02 | | xi | | w*ui |
| c10 c11 c12 | * | yi | = | w*vi |
| c20 c21 c22 | | 1 | | w*1 |
Convert this to a system of equations, solve for ui and vi, and eliminate w. You get the formulas for the perspective transformation:
c00*xi + c01*yi + c02
ui = ---------------------
c20*xi + c21*yi + c22
c10*xi + c11*yi + c12
vi = ---------------------
c20*xi + c21*yi + c22
Multiply both sides with the denominator:
(c20*xi + c21*yi + c22) * ui = c00*xi + c01*yi + c02
(c20*xi + c21*yi + c22) * vi = c10*xi + c11*yi + c12
Distribute ui and vi:
c20*xi*ui + c21*yi*ui + c22*ui = c00*xi + c01*yi + c02
c20*xi*vi + c21*yi*vi + c22*vi = c10*xi + c11*yi + c12
Assume c22 = 1:
c20*xi*ui + c21*yi*ui + ui = c00*xi + c01*yi + c02
c20*xi*vi + c21*yi*vi + vi = c10*xi + c11*yi + c12
Collect all cij on the left hand side:
c00*xi + c01*yi + c02 - c20*xi*ui - c21*yi*ui = ui
c10*xi + c11*yi + c12 - c20*xi*vi - c21*yi*vi = vi
And finally convert to matrix form for four pairs of points:
/ x0 y0 1 0 0 0 -x0*u0 -y0*u0 \ /c00\ /u0\
| x1 y1 1 0 0 0 -x1*u1 -y1*u1 | |c01| |u1|
| x2 y2 1 0 0 0 -x2*u2 -y2*u2 | |c02| |u2|
| x3 y3 1 0 0 0 -x3*u3 -y3*u3 |.|c10|=|u3|
| 0 0 0 x0 y0 1 -x0*v0 -y0*v0 | |c11| |v0|
| 0 0 0 x1 y1 1 -x1*v1 -y1*v1 | |c12| |v1|
| 0 0 0 x2 y2 1 -x2*v2 -y2*v2 | |c20| |v2|
\ 0 0 0 x3 y3 1 -x3*v3 -y3*v3 / \c21/ \v3/
This is now in the form of Ax=b and the solution can be obtained with x = A\b. Remember that c22 = 1.
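For reference, here is a minimal numpy sketch of the same 8x8 system (the function name and argument layout are my own; src and dst are assumed to be 4x2 arrays of (x, y) and (u, v) points):

import numpy as np

def perspective_matrix(src, dst):
    # Build the 8x8 system for c00..c21, assuming c22 = 1
    A = np.zeros((8, 8))
    b = np.zeros(8)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        A[i]     = [x, y, 1, 0, 0, 0, -x*u, -y*u]
        A[i + 4] = [0, 0, 0, x, y, 1, -x*v, -y*v]
        b[i]     = u
        b[i + 4] = v
    c = np.linalg.solve(A, b)
    return np.append(c, 1).reshape(3, 3)  # re-attach c22 = 1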
It should have been an easy question. So how do I get M*A=B*W into a solvable form? It's just matrix multiplication, so we can write it out as a system of linear equations, for example: M11*A11 + M12*A21 + M13*A31 = B11*W11 + B12*W21 + B13*W31 + B14*W41. And every system of linear equations can be written in the form Ax=b, or, to avoid confusion with the variables already used in my question, N*x=y. That's all.
An example according to my question: I generate some input data with a known M and W:
M = [
1 2 3;
4 5 6;
7 8 1
];
A = [
0 0 1 1;
0 1 0 1;
1 1 1 1
];
W = [
4 0 0 0;
0 3 0 0;
0 0 2 0;
0 0 0 1
];
B = M*A*(W^-1);
Then I forget about M and W, meaning I now have 13 unknowns to solve for (the nine entries of M and the four factors w1...w4). I rewrite M*A=B*W as a system of linear equations, and from there into the form N*x=y. In N, every column holds the coefficients of one variable:
N = [
A(1,1) A(2,1) A(3,1) 0 0 0 0 0 0 -B(1,1) 0 0 0;
0 0 0 A(1,1) A(2,1) A(3,1) 0 0 0 -B(2,1) 0 0 0;
0 0 0 0 0 0 A(1,1) A(2,1) A(3,1) -B(3,1) 0 0 0;
A(1,2) A(2,2) A(3,2) 0 0 0 0 0 0 0 -B(1,2) 0 0;
0 0 0 A(1,2) A(2,2) A(3,2) 0 0 0 0 -B(2,2) 0 0;
0 0 0 0 0 0 A(1,2) A(2,2) A(3,2) 0 -B(3,2) 0 0;
A(1,3) A(2,3) A(3,3) 0 0 0 0 0 0 0 0 -B(1,3) 0;
0 0 0 A(1,3) A(2,3) A(3,3) 0 0 0 0 0 -B(2,3) 0;
0 0 0 0 0 0 A(1,3) A(2,3) A(3,3) 0 0 -B(3,3) 0;
A(1,4) A(2,4) A(3,4) 0 0 0 0 0 0 0 0 0 -B(1,4);
0 0 0 A(1,4) A(2,4) A(3,4) 0 0 0 0 0 0 -B(2,4);
0 0 0 0 0 0 A(1,4) A(2,4) A(3,4) 0 0 0 -B(3,4);
0 0 0 0 0 0 0 0 1 0 0 0 0
];
And y is:
y = [ 0; 0; 0; 0; 0; 0; 0; 0; 0; 0; 0; 0; 1 ];
Notice the equation described by the last row of N, whose right-hand side is 1 according to y. That's what I mentioned in my question: you have to fix one of the entries of M to get a single solution. (We can do this because every multiple of M is the same transformation.) With this equation I'm saying M33 should be 1.
We solve this for x:
x = N\y
and get:
x = [ 1.00000; 2.00000; 3.00000; 4.00000; 5.00000; 6.00000; 7.00000; 8.00000; 1.00000; 4.00000; 3.00000; 2.00000; 1.00000 ]
which are the solutions for [ M11, M12, M13, M21, M22, M23, M31, M32, M33, w1, w2, w3, w4 ]
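For reference, the same system can also be built programmatically. A minimal numpy sketch (the function name is my own; A and B are the 3x4 matrices from the example above, as numpy arrays):

import numpy as np

def solve_m(A, B):
    # Build the 13x13 system N*x = y described above (0-based indexing)
    N = np.zeros((13, 13))
    y = np.zeros(13)
    for p in range(4):                   # one point per column of A and B
        for r in range(3):               # one equation per row of M
            row = 3*p + r
            N[row, 3*r:3*r+3] = A[:, p]  # coefficients of M(r+1, 1..3)
            N[row, 9 + p] = -B[r, p]     # coefficient of w(p+1)
    N[12, 8] = 1.0                       # last row fixes M33 = 1
    y[12] = 1.0
    x = np.linalg.solve(N, y)
    return x[:9].reshape(3, 3), x[9:]    # M and the factors w1...w4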
W is not needed once M has been calculated. For a generic point (x, y), the corresponding w comes out while solving for x' and y':
| M11 M12 M13 | | x | | w * x' |
| M21 M22 M23 | * | y | = | w * y' |
| M31 M32 M33 | | 1 | | w * 1 |
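In code, that means applying M to the homogeneous point and dividing by the resulting w. A minimal numpy sketch (the function name is my own):

import numpy as np

def apply_transform(M, x, y):
    p = M @ np.array([x, y, 1.0])    # p = [w*x', w*y', w]
    return p[0] / p[2], p[1] / p[2]  # divide out w to get (x', y')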
When solving this in JavaScript I could use the Numeric JavaScript library which has the needed function solve to solve Ax=b.
I am trying to solve the problem shown at http://postimg.org/image/4bmfha8m7/
I am having trouble implementing the weight matrix for the 36 inputs.
I have a 3 neuron hidden layer.
I use the backpropagation algorithm to learn.
What I have tried so far is:
% Sigmoid Function Definition
function [result] = sigmoid(x)
result = 1.0 ./ (1.0 + exp(-x));
end
% Inputs
input = [1 1 0 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0 0 0 0 1 1 1 0 0 1 1 1 0 0 1 1 1 0;
0 0 0 0 1 0 1 1 1 0 0 1 1 1 0 0 0 0 0 1 0 1 1 1 0 0 1 1 1 0 0 0 0 0 1;
0 0 0 0 0 0 1 1 1 1 0 1 1 1 1 0 1 1 1 1 0 1 1 1 1 0 1 1 1 1 0 0 0 0 0;
0 0 0 0 1 0 1 1 1 0 0 1 1 1 0 0 1 1 1 0 0 1 1 1 0 0 1 1 1 0 0 0 0 0 1];
% Desired outputs
output = [1;1;1;1];
% Initializing the bias (Bias or threshold are the same thing, essential for learning, to translate the curve)
% Also, the first column of the weight matrix is the weight of the bias values
bias = [-1 -1 -1 -1];
% Learning coefficient
coeff = 1.0;
% Number of learning iterations
iterations = 100;
disp('No. Of Learning Iterations = ');
disp(iterations);
% Initial weights (only rows 1-4 are used: rows 1-3 feed the hidden
% neurons, row 4 the output neuron; column 1 holds the bias weight)
weights = ones(4,36);
% Main Algorithm Begins
for i = 1:iterations
out = zeros(4,1);
numIn = length (input(:,1));
for j = 1:numIn
% 1st neuron in the hidden layer
H1 = bias(1,1)*weights(1,1) + input(j,1)*weights(1,2) + input(j,2)*weights(1,3) + input(j,3)*weights(1,4) + input(j,4)*weights(1,5) + input(j,5)*weights(1,6) + input(j,6)*weights(1,7) ...
+ input(j,7)*weights(1,8) + input(j,8)*weights(1,9) + input(j,9)*weights(1,10) + input(j,10)*weights(1,11) + input(j,11)*weights(1,12) + input(j,12)*weights(1,13) ...
+ input(j,13)*weights(1,14) + input(j,14)*weights(1,15) + input(j,15)*weights(1,16) + input(j,16)*weights(1,17) + input(j,17)*weights(1,18) + input(j,18)*weights(1,19) ...
+ input(j,19)*weights(1,20) + input(j,20)*weights(1,21) + input(j,21)*weights(1,22) + input(j,22)*weights(1,23) + input(j,23)*weights(1,24) + input(j,24)*weights(1,25) ...
+ input(j,25)*weights(1,26) + input(j,26)*weights(1,27) + input(j,27)*weights(1,28) + input(j,28)*weights(1,29) + input(j,29)*weights(1,30) + input(j,30)*weights(1,31) ...
+ input(j,31)*weights(1,32) + input(j,32)*weights(1,33) + input(j,33)*weights(1,34) + input(j,34)*weights(1,35) + input(j,35)*weights(1,36);
x2(1) = sigmoid(H1);
% 2nd neuron in the hidden layer
H2 = bias(1,2)*weights(2,1) + input(j,1)*weights(2,2) + input(j,2)*weights(2,3) + input(j,3)*weights(2,4) + input(j,4)*weights(2,5) + input(j,5)*weights(2,6) + input(j,6)*weights(2,7) ...
+ input(j,7)*weights(2,8) + input(j,8)*weights(2,9) + input(j,9)*weights(2,10) + input(j,10)*weights(2,11) + input(j,11)*weights(2,12) + input(j,12)*weights(2,13) ...
+ input(j,13)*weights(2,14) + input(j,14)*weights(2,15) + input(j,15)*weights(2,16) + input(j,16)*weights(2,17) + input(j,17)*weights(2,18) + input(j,18)*weights(2,19) ...
+ input(j,19)*weights(2,20) + input(j,20)*weights(2,21) + input(j,21)*weights(2,22) + input(j,22)*weights(2,23) + input(j,23)*weights(2,24) + input(j,24)*weights(2,25) ...
+ input(j,25)*weights(2,26) + input(j,26)*weights(2,27) + input(j,27)*weights(2,28) + input(j,28)*weights(2,29) + input(j,29)*weights(2,30) + input(j,30)*weights(2,31) ...
+ input(j,31)*weights(2,32) + input(j,32)*weights(2,33) + input(j,33)*weights(2,34) + input(j,34)*weights(2,35) + input(j,35)*weights(2,36);
x2(2) = sigmoid(H2);
% 3rd neuron in the hidden layer
H3 = bias(1,3)*weights(3,1) + input(j,1)*weights(3,2) + input(j,2)*weights(3,3) + input(j,3)*weights(3,4) + input(j,4)*weights(3,5) + input(j,5)*weights(3,6) + input(j,6)*weights(3,7) ...
+ input(j,7)*weights(3,8) + input(j,8)*weights(3,9) + input(j,9)*weights(3,10) + input(j,10)*weights(3,11) + input(j,11)*weights(3,12) + input(j,12)*weights(3,13) ...
+ input(j,13)*weights(3,14) + input(j,14)*weights(3,15) + input(j,15)*weights(3,16) + input(j,16)*weights(3,17) + input(j,17)*weights(3,18) + input(j,18)*weights(3,19) ...
+ input(j,19)*weights(3,20) + input(j,20)*weights(3,21) + input(j,21)*weights(3,22) + input(j,22)*weights(3,23) + input(j,23)*weights(3,24) + input(j,24)*weights(3,25) ...
+ input(j,25)*weights(3,26) + input(j,26)*weights(3,27) + input(j,27)*weights(3,28) + input(j,28)*weights(3,29) + input(j,29)*weights(3,30) + input(j,30)*weights(3,31) ...
+ input(j,31)*weights(3,32) + input(j,32)*weights(3,33) + input(j,33)*weights(3,34) + input(j,34)*weights(3,35) + input(j,35)*weights(3,36);
x2(3) = sigmoid(H3);
% Output layer
x3_1 = bias(1,4)*weights(4,1) + x2(1)*weights(4,2) + x2(2)*weights(4,3) + x2(3)*weights(4,4);
out(j) = sigmoid(x3_1);
% Adjust delta values of weights
% For the output layer: delta(wi) = xi*delta, where
% delta = actual output*(1 - actual output)*(desired output - actual output)
delta3_1 = out(j)*(1-out(j))*(output(j)-out(j));
% Propagate the delta backwards into the hidden layer
% (using the output-layer weights in row 4)
delta2_1 = x2(1)*(1-x2(1))*weights(4,2)*delta3_1;
delta2_2 = x2(2)*(1-x2(2))*weights(4,3)*delta3_1;
delta2_3 = x2(3)*(1-x2(3))*weights(4,4)*delta3_1;
% Add weight changes to original weights and then use the new weights.
% delta weight = coeff*x*delta
% Bias weights (column 1)
weights(1,1) = weights(1,1) + coeff*bias(1,1)*delta2_1;
weights(2,1) = weights(2,1) + coeff*bias(1,2)*delta2_2;
weights(3,1) = weights(3,1) + coeff*bias(1,3)*delta2_3;
weights(4,1) = weights(4,1) + coeff*bias(1,4)*delta3_1;
% Hidden-layer weights: one per input pixel (columns 2 to 36)
for k = 2:36
weights(1,k) = weights(1,k) + coeff*input(j,k-1)*delta2_1;
weights(2,k) = weights(2,k) + coeff*input(j,k-1)*delta2_2;
weights(3,k) = weights(3,k) + coeff*input(j,k-1)*delta2_3;
end
% Output-layer weights: one per hidden activation (columns 2 to 4)
for k = 2:4
weights(4,k) = weights(4,k) + coeff*x2(k-1)*delta3_1;
end
end
end
disp('For the Input');
disp(input);
disp('Output Is');
disp(out);
disp('Test Case: For the Input');
input = [1 1 0 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0 0 0 0 1 1 1 0 0 1 1 1 0 0 1 1 1 0];
For me the problem is the labelling: I can't see where you have the output.
Output (1,1,1,1)? What do you mean? Perhaps I'm missing something, but there are two usual ways of labelling a multiclass classification: either directly with a label (0 for A, 1 for B, 2 for C, ...) that you expand afterwards, or directly expanded (one-hot), e.g. A = [1,0,0,0], so the four classes together are [1,0,0,0; 0,1,0,0; 0,0,1,0; 0,0,0,1].
The way you write the operations makes it very easy to make mistakes. Take a look at MATLAB/Octave matrix operations; they are very powerful and could simplify everything a lot.
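To illustrate both points, here is a hedged numpy sketch (not the OP's MATLAB; the shapes, names and stand-in data are my own) with one-hot labels and a vectorized forward pass, using four output neurons to match the four classes:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.random.randint(0, 2, (4, 35))   # 4 patterns of 35 pixels (stand-in data)
T = np.eye(4)                          # one-hot labels: A=[1,0,0,0], B=[0,1,0,0], ...

W1 = np.random.randn(3, 36) * 0.1      # hidden layer: 3 neurons, bias + 35 inputs
W2 = np.random.randn(4, 4) * 0.1       # output layer: 4 neurons, bias + 3 hidden

Xb = np.hstack([-np.ones((4, 1)), X])  # prepend the bias input (-1)
H = sigmoid(Xb @ W1.T)                 # all hidden activations at once
Hb = np.hstack([-np.ones((4, 1)), H])
Y = sigmoid(Hb @ W2.T)                 # all outputs at once, one row per pattern
E = T - Y                              # per-pattern error that backprop would use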