I have a module that takes three inputs, each of which is three bits wide.
output = f(inputA, inputB, inputC)
The output depends on the values of the three inputs but does not depend on their order.
e.g. f(inputA, inputB, inputC) = f(inputB, inputC, inputA), and likewise for every other permutation of the inputs.
The solution should work well for both FPGAs and ASICs. Currently I am implementing it without taking advantage of the symmetry, but I assume that explicitly forcing the synthesizer to consider the symmetry will result in a better implementation.
I am planning on implementing this using a hash, h, of the three inputs that does not depend on their order. I can then do:
hash <= h(inputA, inputB, inputC);
output <= VALUE0 when hash = 0 else
          VALUE1 when hash = 1 else
          .....
My question is what should I use for the hash function?
My thoughts so far:
If each input is 3 bits wide there are 2^9 = 512 possible input combinations, but only 120 distinct ones once the symmetry is taken into account, so theoretically a hash that is 7 bits wide should suffice. In practice it may need to be longer.
Each bit of the hash is a function of the input bits and must respect the symmetry of the three inputs. The bits of the hash should be independent of one another. But I'm not sure how to generate these functions.
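To sanity-check the counting, here is a quick offline sketch (plain Python, not synthesizable; the rank table and h() helper are just hypothetical names) that enumerates the 120 order-independent combinations and shows that a sorted-then-ranked hash fits in 7 bits:

from itertools import combinations_with_replacement, product

# Every order-independent combination of three 3-bit inputs is a multiset
# of size 3 drawn from the values 0..7.
multisets = list(combinations_with_replacement(range(8), 3))
print(len(multisets))  # 120, so a 7-bit index (0..127) is enough

# Assign each multiset a rank; sorting the inputs makes the lookup
# independent of their order.
rank = {m: i for i, m in enumerate(multisets)}

def h(a, b, c):
    return rank[tuple(sorted((a, b, c)))]

# Check that all 512 ordered inputs map onto 120 distinct 7-bit values.
assert len({h(a, b, c) for a, b, c in product(range(8), repeat=3)}) == 120
assert max(h(a, b, c) for a, b, c in product(range(8), repeat=3)) <= 127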
As mentioned in your question, you could sort and concatenate your inputs.
In pseudo code:
if (A < B)
    swap(A, B);
if (B < C)
    swap(B, C);
if (A < B)
    swap(A, B);
As a block diagram (image not reproduced here): three "conditional swap" blocks arranged as the sorting network above.
The 6-in/6-out function needed for a "conditional swap" block:
A3x = A3 B3 ;
A2x = A3 B3' B2 + A3' A2 B3 + A2 B2 ;
A1x = A2 B3' B2' B1 + A3' A2' A1 B2 + A3 A2 B2' B1
+ A2' A1 B3 B2 + A3 B3' B1 + A3' A1 B3 + A1 B1;
B3x = B3 + A3 ;
B2x = A3' B2 + A2 B3' + B3 B2 + A3 A2 ;
B1x = A3' A2' B1 + A1 B3' B2' + A2' B3 B1 + A3 A1 B2'
+ A3' B2 B1 + A2 A1 B3' + A3' B3 B1 + A3 A1 B3'
+ B3 B2 B1 + A3 A2 A1 ;
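As a cross-check, the equations can be verified exhaustively. The following sketch (plain Python, purely for verification, not synthesizable) evaluates all 64 pairs of 3-bit inputs and confirms that, with these equations, Ax comes out as the smaller of the two values and Bx as the larger, so the block really is a compare-and-swap:

from itertools import product

def conditional_swap(A, B):
    # Split into bits; index 3 is the MSB and index 1 the LSB, matching A3..A1 above.
    A3, A2, A1 = (A >> 2) & 1, (A >> 1) & 1, A & 1
    B3, B2, B1 = (B >> 2) & 1, (B >> 1) & 1, B & 1
    nA3, nA2, nA1 = 1 - A3, 1 - A2, 1 - A1
    nB3, nB2, nB1 = 1 - B3, 1 - B2, 1 - B1

    A3x = A3 & B3
    A2x = (A3 & nB3 & B2) | (nA3 & A2 & B3) | (A2 & B2)
    A1x = ((A2 & nB3 & nB2 & B1) | (nA3 & nA2 & A1 & B2) | (A3 & A2 & nB2 & B1)
           | (nA2 & A1 & B3 & B2) | (A3 & nB3 & B1) | (nA3 & A1 & B3) | (A1 & B1))
    B3x = B3 | A3
    B2x = (nA3 & B2) | (A2 & nB3) | (B3 & B2) | (A3 & A2)
    B1x = ((nA3 & nA2 & B1) | (A1 & nB3 & nB2) | (nA2 & B3 & B1) | (A3 & A1 & nB2)
           | (nA3 & B2 & B1) | (A2 & A1 & nB3) | (nA3 & B3 & B1) | (A3 & A1 & nB3)
           | (B3 & B2 & B1) | (A3 & A2 & A1))
    return (A3x << 2) | (A2x << 1) | A1x, (B3x << 2) | (B2x << 1) | B1x

for A, B in product(range(8), repeat=2):
    lo, hi = conditional_swap(A, B)
    assert (lo, hi) == (min(A, B), max(A, B)), (A, B, lo, hi)
print("all 64 cases OK")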
I have to admit that this solution is not exactly "cheap", and it results in a 9-bit hash rather than a 7-bit one. Therefore, a look-up table might in fact be the best solution.
Suppose I have vectors z1, z2, z3, z4 and b, and matrices D1, D2, D3, D4.
I want to construct:
b1 = D2*z2 + D3*z3 + D4*z4 - b
b2 = D1*z1 + D3*z3 + D4*z4 - b
b3 = D1*z1 + D2*z2 + D4*z4 - b
b4 = D1*z1 + D2*z2 + D3*z3 - b
I planned to store my z vectors and D matrices in cell arrays and extract them to build the b_i in a for loop, e.g.
for i = 1:3
    b(i) = D{i+1}*z{i+1} + D{i}*z{i};
end
Of course this fails, because each b_i must leave out its own D{i}*z{i} term, yet the loop includes it at every step. Can you please help me to accomplish my task?
You can do it like this (no recursion, and each product D{ii}*z{ii} is computed only once).
pairs = zeros(size(D{1},1), 4);
for ii = 4:-1:1
    pairs(:,ii) = D{ii}*z{ii};
end
Once you have all of the products, you can take the sum (here b_vec is your vector b):
all_sum = sum(pairs, 2) - b_vec;   % D1*z1 + D2*z2 + D3*z3 + D4*z4 - b
To get the proper b_i you only need to subtract pairs(:,ii) from the sum:
for ii = 4:-1:1
    b{ii} = all_sum - pairs(:,ii);   % pairs is a matrix, so index it with (:,ii), not {ii}
end
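The same "compute every product once, then subtract the one you don't want" pattern carries over to other environments as well. A small NumPy sketch (random data, hypothetical sizes, just to illustrate and check the identity b_i = all_sum - pairs(:,i)):

import numpy as np

rng = np.random.default_rng(0)
n = 5                                    # hypothetical vector length
D = [rng.standard_normal((n, n)) for _ in range(4)]
z = [rng.standard_normal(n) for _ in range(4)]
b = rng.standard_normal(n)

pairs = np.column_stack([D[i] @ z[i] for i in range(4)])  # each product computed once
all_sum = pairs.sum(axis=1) - b

for i in range(4):
    b_i = all_sum - pairs[:, i]          # drop the i-th term from the full sum
    direct = sum(D[j] @ z[j] for j in range(4) if j != i) - b
    assert np.allclose(b_i, direct)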
I am working through Andrew Ng's new deep learning Coursera course.
We are implementing the following code:
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
    np.random.seed(1)

    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    ### START CODE HERE ### (approx. 4 lines)  # Steps 1-4 below correspond to the Steps 1-4 described above.
    D1 = np.random.rand(*A1.shape)  # Step 1: initialize matrix D1 = np.random.rand(..., ...)
    D1 = (D1 < 0.5)                 # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
    A1 = A1 * D1                    # Step 3: shut down some neurons of A1
    A1 = A1 / keep_prob             # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    ### START CODE HERE ### (approx. 4 lines)
    D2 = np.random.rand(*A2.shape)  # Step 1: initialize matrix D2 = np.random.rand(..., ...)
    D2 = (D2 < 0.5)                 # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
    A2 = A2 * D2                    # Step 3: shut down some neurons of A2
    A2 = A2 / keep_prob             # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)

    return A3, cache
Calling:
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
My output was:
A3 = [[ 0.36974721 0.49683389 0.04565099 0.49683389 0.36974721]]
The expected output should be:
A3 = [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
Only one number is different. Any ideas why?
I think it is because of the way I shaped D1 and D2.
I think it is because you put D1 = (D1 < 0.5) and D2 = (D2 < 0.5).
You need to use keep_prob instead of 0.5, i.e. D1 = (D1 < keep_prob) and D2 = (D2 < keep_prob).
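For reference, here are the two masking steps with the hard-coded 0.5 replaced by the keep_prob argument (only the threshold changes; D2/A2 are handled the same way):

D1 = np.random.rand(*A1.shape)  # Step 1: random matrix with the same shape as A1
D1 = (D1 < keep_prob)           # Step 2: keep each neuron with probability keep_prob
A1 = A1 * D1                    # Step 3: shut down the dropped neurons
A1 = A1 / keep_prob             # Step 4: rescale so the expected activation is unchanged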
A = rand(4,2);
B = rand(4,3);
Now after performing some operations on B (roots, derivatives, etc.) we get a new matrix B1, whose dimensions satisfy size(B1) = size(B).
The operation I want to perform is
B.' * ( A - B1.')
That is, when each element of B.' multiplies an element of A, the corresponding element of B1 should be subtracted from that element of A before the multiplication.
The final dimensions should be the same as those of the usual product B.' * A.
Note: the dimensions of the initialized matrices change at each run, so no manual (hard-coded) operations.
EXAMPLE
Lets say we have
A = 2x2
[ x1, x2 ]
[ y1, y2 ]
and
B = 2x1
[a1]
[b1]
and
B1 = 2x1
[a11]
[b11]
So during a simple multiplication of B.' * A
[(a1 * x1 + b1 * y1), (a1 * x2 + b1 * y2)]
I want to subtract B1 such that
[ (a1 * (x1-a11) + b1 * (y1-b11)), (a1 * (x2-a11) + b1 * (y2-b11))]
Example inputs of different size:
INPUTS
B =
[ a1 b1;
a2 b2;
a3 b3;
a4 b4]
A =
[ x11 x12 x13;
x21 x22 x23;
x31 x32 x33;
x41 x42 x43]
B1 =
[a10 b10;
a20 b20;
a30 b30;
a40 b40]
Result =
[b1(x11-b10)+b2(x21-b20)+b3(x31-b30)+b4(x41-b40) b1(x12-b10)+b2(x22-b20)+b3(x32-b30)+b4(x42-b40) b1(x13-b10)+b2(x23-b20)+b3(x33-b30)+b4(x43-b40);
a1(x11-a10)+a2(x21-a20)+a3(x31-a30)+a4(x41-a40) a1(x12-a10)+a2(x22-a20)+a3(x32-a30)+a4(x42-a40) a1(x13-a10)+a2(x23-a20)+a3(x33-a30)+a4(x43-a40)]
I assumed that size(B,2) >= size(A,2):
A = rand(4,2);
B = rand(4,3);
B1 = rand(size(B)).*B;
res = B' * ( A - B1(:,1:size(A,2)))
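If you instead want to match the worked example exactly (where the subtracted column of B1 follows the column of B, not the column of A), note that result(j,k) = sum_i B(i,j)*(A(i,k) - B1(i,j)) = (B.'*A)(j,k) - sum_i B(i,j)*B1(i,j), so the whole thing is one matrix product plus a per-row correction. A NumPy sketch of that reading (my interpretation of the example above, not necessarily what was intended):

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 3))
B = rng.random((4, 2))
B1 = rng.random((4, 2))

# result[j, k] = sum_i B[i, j] * (A[i, k] - B1[i, j])
result = B.T @ A - (B * B1).sum(axis=0)[:, None]

# brute-force check against the elementwise definition
check = np.array([[sum(B[i, j] * (A[i, k] - B1[i, j]) for i in range(4))
                   for k in range(3)] for j in range(2)])
assert np.allclose(result, check)

The same identity can be written in MATLAB using sum(B.*B1,1) and implicit expansion.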
Does MATLAB support such a multiplication?
I searched a lot and found this:
>> X = @(a1,a2,a3,a4)[a1 a2;a3 a4];
>> Y = @(b1,b2,b3,b4)[b1 b2;b3 b4];
>> % Something like ==> X*Y
But this just evaluates the expression with numeric values and does not give me a parametric (symbolic) result. Does MATLAB support such a multiplication?
Maybe more of a long comment than an answer, but are you looking for symbolic variables? It requires the Symbolic Math Toolbox.
Example:
clc
clear
syms a1 a2 a3 a4 b1 b2 b3 b4
A = [a1 a2;a3 a4]
B = [b1 b2;b3 b4]
C = (A*B)
C =
[ a1*b1 + a2*b3, a1*b2 + a2*b4]
[ a3*b1 + a4*b3, a3*b2 + a4*b4]
Is this what you mean by "parametric matrix"?
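If the Symbolic Math Toolbox is not available, the same parametric product can also be formed outside MATLAB, for example with Python's SymPy (shown only as an alternative sketch; the toolbox example above is the direct answer):

import sympy as sp

a1, a2, a3, a4, b1, b2, b3, b4 = sp.symbols('a1 a2 a3 a4 b1 b2 b3 b4')
A = sp.Matrix([[a1, a2], [a3, a4]])
B = sp.Matrix([[b1, b2], [b3, b4]])
C = A * B
print(C)
# Matrix([[a1*b1 + a2*b3, a1*b2 + a2*b4], [a3*b1 + a4*b3, a3*b2 + a4*b4]])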
So I'm wondering how to get pre-defined variables (the function's input arguments) into the equations themselves.
Here's the code.
function [A, B, C] = A_B_C_problem_generalized(lambda_1, lambda_2, mu_1, mu_2, gamma_1, gamma_2)
clear
clc
syms a1 a2 a4 b1 b2 b4 c1 c2 c4
[a1, a2, a4, b1, b2, b4, c1, c2, c4] = ...
solve('a1 + a4 = lambda_1 + lambda_2', ...
'a1*a4 - a2^2 = lambda_1 * lambda_2', ...
'b1 + b4 = mu_1 + mu_2', ...
'b1*b4 - b2^2 = mu_1 * mu_2', ...
'c1 + c4 = gamma_1 + gamma_2', ...
'c1*c4 - c2^2 = gamma_1 * gamma_2', ...
'c1 = a1 + b1', ...
'c2 = a2 + b2', ...
'c4 = a4 + b4');
...
How could I go about doing this? The lambdas, mus, and gammas are supposed to be numbers you put in.
Ah, the correct approach is not to use strings. Instead, convert it into a root-finding problem, which the solver does on its own if you move everything onto one side of each equation...
syms a1 a2 a4 b1 b2 b4 c1 c2 c4
[a1, a2, a4, b1, b2, b4, c1, c2, c4] = ...
solve(a1 + a4 - (lambda_1 + lambda_2), ...
a1*a4 - a2^2 - (lambda_1*lambda_2), ...
b1 + b4 - (mu_1 + mu_2), ...
b1*b4 - b2^2 - (mu_1*mu_2), ...
c1 + c4 - (gamma_1 + gamma_2), ...
c1*c4 - c2^2 - (gamma_1*gamma_2), ...
c1 - a1 - b1, ...
c2 - a2 - b2, ...
c4 - a4 - b4)
is the way to go.