Adding elements of 3 lists in NetLogo

I have three lists: [a1 a2 a3 a4], [b1 b2 b3 b4], [c1 c2 c3 c4].
I want to create a new list by adding the corresponding elements of each list.
Result: [a1+b1+c1 a2+b2+c2 a3+b3+c3 a4+b4+c4]

Check out map. With three input lists, it applies the reporter to the corresponding items of each list:
(map [ [a b c] -> a + b + c ] list1 list2 list3)
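For example, with concrete numbers (purely as an illustration, not part of the original answer):
show (map [ [a b c] -> a + b + c ] [1 2 3 4] [10 20 30 40] [100 200 300 400])
;; prints [111 222 333 444]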

Related

Big Step Subtraction

I'm trying to define some big-step operations in Coq. First I did the addition:
Big Step Addition
Plus : forall a1 a2 i1 i2 sigma n,
a1 =[sigma]=> i1 ->
a2 =[sigma]=> i2 ->
n = i1 + i2 ->
a1 +' a2 =[sigma]=> n
And now I want to do the subtraction:
Big Step Subtraction
But I'm not sure how to implement the condition "if i1 >= i2".
Some help would be greatly appreciated!
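One common approach (just a sketch, following the pattern of the Plus rule above and assuming the values are naturals; the constructor name Sub and the -' notation are placeholders) is to turn the condition into an extra hypothesis:
Sub : forall a1 a2 i1 i2 sigma n,
a1 =[sigma]=> i1 ->
a2 =[sigma]=> i2 ->
i2 <= i1 ->           (* encodes the side condition "if i1 >= i2" *)
n = i1 - i2 ->
a1 -' a2 =[sigma]=> n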

Why are there problems using map in an IF statement?

Why won't the following run?
if (map [ [a b c] -> a + b = c ] [1] [2] [3]) [show 100]
The following produces 'true' as an output:
show (map [ [a b c] -> a + b = c ] [1] [2] [3])
so I expected the first statement above to be the same as:
if true [show 100]
(P.S. in my full version the lists are longer but are collapsed into a single true/false using reduce.)
Thanks.
To elaborate on ThomasC's comment, map always produces a list, even if that list has only one element, while if expects a single true/false value, not a list. So
(map [ [a b c] -> a + b = c ] [1] [2] [3])
does produce [true] rather than true. Thus
(map [ [a b c] -> a + b = c ] [1 2] [2 3] [3 5])
will produce [true true]. reduce is helpful here:
reduce [ [a b] -> a and b ] (map [ [a b c] -> a + b = c ] [1 2] [2 3] [3 5])
will produce a single true by "anding" together all the elements of the map output, and
reduce [ [a b] -> a and b ] (map [ [a b c] -> a + b = c ] [1 2] [2 3] [3 6])
will produce a single false.
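Putting it together, the original statement can then be written as (a sketch using the example lists above):
if reduce [ [a b] -> a and b ] (map [ [a b c] -> a + b = c ] [1 2] [2 3] [3 5]) [show 100]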

Using vectorization to get a specific matrix output from two vectors

I wanted to know if there is an efficient way, using MATLAB vectorization, to generate a specific matrix from two vectors.
Suppose the vectors are
x = [u v]
y = [a1 a2 a3 b1 b2 b3]
where u, v, a1, a2, a3, b1, b2, b3 are some real numbers.
The 2-column matrix that I wish to generate using these vectors is
M = [u a1;
u a2;
u a3;
v a1;
v a2;
v a3;
u b1;
u b2;
u b3;
v b1;
v b2;
v b3]
In general, the length of x can be anything and the length of y is a multiple of 3. Here is the code that I have now, but I think there should be some better way (that possibly avoids the use of a for-loop):
M = [];
Y = reshape(y, 3, []);
for j = 1:size(Y, 2)
    [a, b] = meshgrid(x, Y(:, j));
    L = [a(:) b(:)];
    M = [M; L];
end
A solution using repmat and repelem: the first column repeats each element of x three times and then tiles that pattern once per 3-element block of y; the second column tiles each 3-element block of y once per element of x.
M = [repmat(repelem(x(:),3),numel(y)/3,1) , ...
     reshape(repmat(reshape(y,3,[]),numel(x),1),[],1)];
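For comparison only (this NumPy translation is my addition, not part of either answer, and the numeric values are stand-ins), the same construction can be sketched as:
import numpy as np

x = np.array([2.0, 3.0])                                # stand-ins for u, v
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])            # stand-ins for a1 a2 a3 b1 b2 b3
col1 = np.tile(np.repeat(x, 3), y.size // 3)            # u u u v v v u u u v v v
col2 = np.tile(y.reshape(-1, 3), (1, x.size)).ravel()   # a1 a2 a3 a1 a2 a3 b1 b2 b3 b1 b2 b3
M = np.column_stack((col1, col2))                       # same row order as the M requested above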
You have a pretty strange order in M. Is that order important? If not, or if you are happy to fix the order yourself later, I have two solutions:
1) The code
[a,b] = meshgrid(x,y);
M = [a(:) b(:)]
will give you:
M = [
u a1
u a2
u a3
u b1
u b2
u b3
v a1
v a2
v a3
v b1
v b2
v b3]
and
2) The code M = combvec(x, y)' gives you:
M = [
u a1
v a1
u a2
v a2
u a3
v a3
u b1
v b1
u b2
v b2
u b3
v b3]

Forward Propagation with Dropout

I am working through Andrew Ng's new deep learning Coursera course.
We are implementing the following code:
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
    np.random.seed(1)
    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]
    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    ### START CODE HERE ### (approx. 4 lines)  # Steps 1-4 below correspond to the Steps 1-4 described above.
    D1 = np.random.rand(*A1.shape)  # Step 1: initialize matrix D1 = np.random.rand(..., ...)
    D1 = (D1 < 0.5)                 # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
    A1 = A1 * D1                    # Step 3: shut down some neurons of A1
    A1 = A1 / keep_prob             # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    ### START CODE HERE ### (approx. 4 lines)
    D2 = np.random.rand(*A2.shape)  # Step 1: initialize matrix D2 = np.random.rand(..., ...)
    D2 = (D2 < 0.5)                 # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
    A2 = A2 * D2                    # Step 3: shut down some neurons of A2
    A2 = A2 / keep_prob             # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)
    cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
    return A3, cache
Calling:
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
My output was:
A3 = [[ 0.36974721 0.49683389 0.04565099 0.49683389 0.36974721]]
The expected output should be :
A3 = [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
Only one number is different. Any ideas why?
I think it is because of the way I shaped D1 and D2.
I think it is because you put D1 = (D1 < 0.5) and D2 = (D2 < 0.5). You need to use keep_prob instead of 0.5: with the threshold hard-coded to 0.5, the masks no longer depend on keep_prob, so the call with keep_prob = 0.7 keeps a different set of neurons than expected.
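For reference, a minimal sketch of the corrected blocks (using the same variable names as in the assignment code above):
D1 = np.random.rand(*A1.shape)   # Step 1: random matrix with the same shape as A1
D1 = (D1 < keep_prob)            # Step 2: keep each unit with probability keep_prob, not 0.5
A1 = A1 * D1                     # Step 3: zero out the dropped units
A1 = A1 / keep_prob              # Step 4: inverted-dropout scaling

D2 = np.random.rand(*A2.shape)
D2 = (D2 < keep_prob)
A2 = A2 * D2
A2 = A2 / keep_prob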

Cheap hash of three inputs independent of their order

I have a module that takes three inputs, each of which is three bits wide.
output = f(inputA, inputB, inputC)
The output depends on the values of the three inputs but does not depend on their order.
i.e. f(inputA, inputB, inputC) = f(inputB, inputC, inputA)
The solution should work well for both FPGAs and ASICs. Currently I am implementing it without taking advantage of the symmetry, but I assume that explicitly forcing the synthesizer to consider the symmetry will result in a better implementation.
I am planning on implementing this using a hash, h, of the three inputs that does not depend on their order. I can then do:
hash <= h(inputA, inputB, inputC);
output <= VALUE0 when hash = 0 else
VALUE1 when hash = 1 else
.....
My question is what should I use for the hash function?
My thoughts so far:
If each input is 3 bits wide there are 512 possibilities, but only 120 when you consider the symmetry, so theoretically I should be able to use a hash that is 7 bits wide. Practically it may need to be longer.
Each bit of the hash is a function of the input bits and must respect the symmetry of the three inputs. The bits of the hash should be independent from one another. But I'm not sure how to generate these functions.
As mentioned in your question, you could sort and concatenate your inputs.
In pseudo code:
if (A < B)
    swap(A, B);
if (B < C)
    swap(B, C);
if (A < B)
    swap(A, B);
As a block diagram (figure not reproduced here): three "conditional swap" blocks arranged as a 3-input sorting network.
The 6-in/6-out function needed for a "conditional swap" block:
A3x = A3 B3 ;
A2x = A3 B3' B2 + A3' A2 B3 + A2 B2 ;
A1x = A2 B3' B2' B1 + A3' A2' A1 B2 + A3 A2 B2' B1
+ A2' A1 B3 B2 + A3 B3' B1 + A3' A1 B3 + A1 B1;
B3x = B3 + A3 ;
B2x = A3' B2 + A2 B3' + B3 B2 + A3 A2 ;
B1x = A3' A2' B1 + A1 B3' B2' + A2' B3 B1 + A3 A1 B2'
+ A3' B2 B1 + A2 A1 B3' + A3' B3 B1 + A3 A1 B3'
+ B3 B2 B1 + A3 A2 A1 ;
I have to admit that this solution is not exactly "cheap" and results in a 9-bit hash rather than in a 7-bit hash. Therefore, a look-up table might in fact be the best solution.
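To make the behaviour concrete, here is a small software model of the sort-and-concatenate hash (a hypothetical Python sketch for illustration only; the actual design would be the logic above): sorting the three 3-bit values gives a canonical order, so packing them yields the same 9-bit code for any permutation of the inputs.
def order_independent_hash(a, b, c):
    # Sort the three 3-bit values into a canonical order, then pack them
    # into a single 9-bit code. Any permutation of (a, b, c) gives the same code.
    lo, mid, hi = sorted((a, b, c))
    return (hi << 6) | (mid << 3) | lo

# All orderings of the same inputs hash to the same value:
assert order_independent_hash(5, 1, 7) == order_independent_hash(7, 5, 1)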