I want to input a probability value in the code through a slider. The name of the slider is lawful-industry with a value range from 0 to 1, so the code I wrote is:
let probs [["lawful" lawful-industry]["unlawful" 1 - lawful-industry]]
but NetLogo outputs "Expected a literal value". What is wrong?
Can you give more context for this code? What are you expecting probs to be in this example, a list of lists? If that's the case, you need to use the list primitive, since you are making a list from interface inputs. Try this:
to test
  let probs list (list "lawful" lawful-industry) (list "unlawful" (1 - lawful-industry))
  print probs
end
Output (when lawful-industry is 0.37):
[[lawful 0.37] [unlawful 0.63]]
I have a MiniZinc model for wolf-goat-cabbage in which I store the locations of each entity in its own array, e.g., array[1..max] of Loc: wolf where Loc is defined as an enum: enum Loc = {left, rght}; and max is the maximum possible number of steps needed, e.g., 20.
To find a shortest plan I define a variable var 1..max: len; and constrain the end state to occur at step len.
constraint farmer[len] == left /\ wolf[len] == left /\ goat[len] == left /\ cabbage[len] == left;
Then I ask for
solve minimize len;
I get all the right answers.
I'd like to display the arrays from 1..len, but I can't find a way to do it. When I try, for example, to include in the output:
[ "\(wolf[n]), " | n in 1..max where n <= len ]
I get an error message saying that I can't display an array of opt string.
Is there a way to display only an initial portion of an array, where the length of the initial portion is determined by the model?
Thanks.
Did you try to fix the len variable in the output statement, like n <= fix(len)? See also What is the use of minizinc fix function?
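For example, a sketch based on the comprehension from the question (only the wolf array is shown; the other arrays would work the same way):
output [ "\(wolf[n]), " | n in 1..max where n <= fix(len) ];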
I am trying something (in NetLogo), but it is not working. I want a value of a position from a list of numbers, and I want to use the number that comes out of it to retrieve a name from a list of names.
So if I have a list like [1 2 3 4] and a list with ["chicken" "duck" "monkey" "dog"],
I want my number 2 to correspond with "duck".
So far, my zq is a list of numbers and my usedstrategies is a list of names.
let m precision (max zq) 1
let l position m zq
let p (position l zq) usedstrategies
But when I try this the result will be false, because l is not part of usedstrategies.
Ideas?
You need the item primitive to select from the list after matching on the other list. I am not sure what the precision line is for. However, here is a self-contained piece of code that I think demonstrates what you want to do. Note that NetLogo counts positions from 0, not 1. I also used arbitrary numbers in the list so you don't get confused between the number in the list and its position.
to testme
  let usedstrategies (list "chicken" "duck" "monkey" "dog")
  let zq (list 5 6 7 8)
  let strategynum position 7 zq
  let thisstrategy item strategynum usedstrategies
  type "Selected strategy number " type strategynum
  type " which is " print thisstrategy
end
Jen's solution is perfectly fine, but I think this could also be a good use case for the table extension. Here is an example:
extensions [table]
to demo
  let usedstrategies ["chicken" "duck" "monkey" "dog"]
  let zq [5 6 7 8]
  let strategies table:from-list (map list zq usedstrategies)
  ; get the item corresponding with number 7:
  print table:get strategies 7
end
A "table", here, is a data structure where a set of keys are associated with values. Here, your numbers are the keys and the strategies are the values.
If you try to get an item for which there is no key in the table (e.g., table:get strategies 9), you'll get the following error:
Extension exception: No value for 9 in table.
Here is a bit more detail about how the code works.
To construct the table, we use the table:from-list reporter, which takes a list of lists as input and gives you back a table where the first item of each sublist is used as a key and the second item is used as a value.
To construct our list of lists, we use the map primitive. This part is a bit trickier to understand. The map primitive needs two kinds of inputs: one or more lists, and a reporter to be applied to elements of these lists. The reporter comes first, and the whole expression needs to be inside parentheses:
(map list zq usedstrategies)
This expression "zips" our two lists together: it takes the first element of zq and the first element of usedstrategies, passes them to the list reporter, which constructs a list with these two elements, and adds that result to a new list. It then takes the second element of zq and the second element of usedstrategies and does the same thing with them, until we have a list that looks like:
[[5 "chicken"] [6 "duck"] [7 "monkey"] [8 "dog"]]
Note that the zipping expression could also have been written:
(map [ [a b] -> list a b ] zq usedstrategies)
...but it's a more roundabout way to do it. The list reporter by itself is already what we want; there is no need to construct a separate anonymous reporter that does the same thing.
I have a 2-layer non-convolutional network in TensorFlow, using tanh as the activation function. I understand that weights should be initialized with a truncated normal distribution divided by sqrt(nInputs), e.g.:
weightsLayer1 = tf.Variable(tf.div(tf.truncated_normal([nInputUnits, nUnitsHiddenLayer1]), math.sqrt(nInputUnits)))
Being a bit of a bumbling newbie in NN and TensorFlow, I mistakenly implemented this as 2 lines only to make it more readable:
weightsLayer1 = tf.Variable(tf.truncated_normal([nInputUnits, nUnitsHiddenLayer1]))
weightsLayer1 = tf.div(weightsLayer1, math.sqrt(nInputUnits))
I now know that this is wrong and that the 2nd line causes the weights to be recomputed at each learning step. However, to my surprise, the "incorrect" implementation consistently yields better performance, on both the train and test/evaluation datasets. I thought that the incorrect 2-line implementation should be a train wreck, since it is recomputing (suppressing) weights to values other than those chosen by the optimizer, which I would expect to wreak havoc in the optimization process, but it actually improves it. Does anyone have any explanation for this? I am using the TensorFlow Adam optimizer.
Update 2016.6.22 - updated the 2nd code block above.
You are right that weightsLayer1 = tf.div(weightsLayer1, math.sqrt(nInputUnits)) is executed at each step. But that does NOT mean that the values in the weight variable are scaled down by sqrt(nInputUnits) in each step. This line is not an in-place operation that affects the values stored in the variable. It computes a new tensor, holding the values in the variable divided by sqrt(nInputUnits), and that tensor, I assume, then goes into the rest of your computation graph. This does not interfere with the optimizer. You are still defining a valid computation graph, just with a somewhat arbitrary scaling of the weights. The optimizer can still compute the gradients with respect to this variable (it will back-propagate through your division operation) and create the corresponding update operations.
In terms of the model that you are defining, the two versions are totally equivalent. For any set of values of weightsLayer1 in the original model (where you don't do the division), you can simply scale them up by sqrt(nInputUnits) and you will get the identical results with your second model. The two represent exactly the same model class, if you will.
Why does one work better than the other? Your guess is as good as mine. If you have done the same division for all your variables, you have effectively divided your learning rate by sqrt(nInputUnits). This smaller learning rate might have been beneficial to the problem at hand.
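To make the equivalence described above concrete, here is a small standalone NumPy sketch (not the original network; the sizes and names are made up for illustration) showing that dividing a scaled-up weight matrix inside the forward pass gives exactly the same outputs as the original, undivided parameterization:
import math
import numpy as np

nInputUnits, nHidden = 4, 3
np.random.seed(0)

x = np.random.randn(1, nInputUnits)
w = np.random.randn(nInputUnits, nHidden)                   # weights of the "original" model
v = w * math.sqrt(nInputUnits)                              # scaled-up variable, as in the two-line version

out_original = np.tanh(x.dot(w))                            # forward pass without the division
out_divided  = np.tanh(x.dot(v / math.sqrt(nInputUnits)))   # division happens inside the forward pass

print(np.allclose(out_original, out_divided))               # True: same function, different parameterization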
Edit: I think the fact that you give the same name to the variable and the newly created tensor causes confusion. When you do
A = tf.Variable(1.0)
A = tf.mul(A, 2.0)
# Do something with A
then the second line creates a new tensor (as discussed above) and you re-bind the name (and it is only a name) A to that new tensor. For the graph being defined, the naming is absolutely irrelevant. The following code defines the same graph:
A = tf.Variable(1.0)
B = tf.mul(A, 2.0)
# Do something with B
Maybe this becomes clear if you execute the following code:
A = tf.Variable(1.0)
print A
B = A
A = tf.mul(A, 2.0)
print A
print B
The output is
<tensorflow.python.ops.variables.Variable object at 0x7ff025c02bd0>
Tensor("Mul:0", shape=(), dtype=float32)
<tensorflow.python.ops.variables.Variable object at 0x7ff025c02bd0>
The first time you print A it tells you that A is a variable object. After executing A = tf.mul(A, 2.0) and printing A again, you can see that the name A is now bound to a tf.Tensor object. However, the variable still exists, as can be seen by looking at the object behind the name B.
This is what the single line of code does:
t = tf.truncated_normal( [ nInputUnits, nUnitsHiddenLayer1 ] )
Creates a Tensor with shape [ nInputUnits, nUnitsHiddenLayer1 ], initialized with 1.0 as the standard deviation of the truncated normal distribution (1.0 is the default stddev value).
t1 = tf.div( t, math.sqrt( nInputUnits ) )
divides all values in t by math.sqrt( nInputUnits )
Your two lines of code do exactly the same thing: in both the one-line and the two-line versions, all values end up divided by math.sqrt( nInputUnits ).
As for your statement:
I now know that this is wrong and that the 2nd line causes the weights to be recomputed at each learning step.
EDIT: my mistake.
Indeed you are right: they are divided by math.sqrt( nInputUnits ) at every execution, but not reinitialized! The point of importance is where you put tf.Variable().
Here, the division is part of the variable's initializer, so it is applied only once:
weightsLayer1 = tf.truncated_normal( [ nInputUnits, nUnitsHiddenLayer1 ] )
weightsLayer1 = tf.Variable( tf.div( weightsLayer1, math.sqrt( nInputUnits ) ) )
and here the second line is performed at every step:
weightsLayer1 = tf.Variable( tf.truncated_normal( [ nInputUnits, nUnitsHiddenLayer1 ] ) )
weightsLayer1 = tf.div( weightsLayer1, math.sqrt( nInputUnits ) )
Why does the second yield better results? It looks like some kind of normalization to me, but somebody more knowledgeable should verify that.
P.S.
You can write it more readably like this:
weightsLayer1 = tf.Variable( tf.truncated_normal( [ nInputUnits, nUnitsHiddenLayer1 ], stddev = 1. / math.sqrt( nInputUnits ) ) )
We've got an array of values, and we would like to create another array whose values are not in the first one.
Example:
load('internet.mat')
The first column contains the values in MB, and we have thought of something like:
MB_no = setdiff(v, internet(:,1))
where v is a zero vector whose length equals the number of rows in internet.mat. But it just doesn't work.
So, how do we do this?
You need to specify the range of possible values to define which values are not in internet. Say the range is v = 1:10; then setdiff(v, internet(:,1)) will give you the values in 1:10 that are not in the first column of internet.
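For example (a sketch with made-up numbers, since the contents of internet.mat are not shown):
internet = [2 100; 5 230; 9 80];    % made-up data; first column holds the MB values
v = 1:10;                           % the range of possible values to check against
MB_no = setdiff(v, internet(:,1))   % returns [1 3 4 6 7 8 10]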
It seems as if you don't want the first column.
You can simply do:
MB_no=internet(:,2:end);
Assuming internet(:,1) has only positive integers, and you wish to find which integers in [1, ..., max( internet(:,1) )] do not appear in that column, you can simply do:
app = [];
app( internet(:,1) ) = 1;
MB_no = find( app == 0 );
This is somewhat like bucket sort.
OK, so I have a DECIMAL field called "Score" (e.g. 10.00).
Now, in my SP, I want to increment/decrement the value of this field in update transactions.
So I might want to do this:
SET @NewScore = @CurrentScore + @Points
where @Points is the value I'm going to increment/decrement by.
Now let's say @Points = 10.00.
In a certain scenario, I want 10.00 to become -10.00.
So the statement would be translated to:
SET @NewScore = @CurrentScore + -10.00
How can I do that?
I know it's a strange question, but basically I want that statement to be dynamic, in that I don't want to have a different statement for incrementing/decrementing the value.
I just want something like this:
SET @Points = 10.00
IF @ActivityBeingPerformedIsFoo = 1
BEGIN
    -- SET @Points to the equivalent negative value (e.g. -10.00)
END
SET @NewScore = @CurrentScore + @Points
Can't you just multiply it by -1?
I always do 0 - @Points. It was this way in some code I inherited. "A foolish consistency..."
Multiply @Points by -1 in that certain scenario.
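Applied to the skeleton from the question, that could look something like this (a sketch with made-up declarations; the DECIMAL precision and the BIT flag are assumptions, so adjust them to your actual schema):
DECLARE @CurrentScore DECIMAL(10, 2) = 50.00;
DECLARE @Points       DECIMAL(10, 2) = 10.00;
DECLARE @NewScore     DECIMAL(10, 2);
DECLARE @ActivityBeingPerformedIsFoo BIT = 1;

IF @ActivityBeingPerformedIsFoo = 1
BEGIN
    SET @Points = @Points * -1;   -- flip the sign: 10.00 becomes -10.00
END

SET @NewScore = @CurrentScore + @Points;   -- 50.00 + (-10.00) = 40.00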
I thought of subtracting twice the value from itself, i.e. x - 2x.