Unexpected RangeEditor for Integers

It appears RangeEditor is hard-wired to expect a float trait. The following sample script illustrates the behavior. Is there a cleaner way to specify RangeEditor for an integer trait?
import sys
print(sys.version)   # '3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)]'
from traits import __version__
print(__version__)   # '4.6.0.dev0'

from traits.api import HasTraits, Int, Range
from traitsui.api import Item, RangeEditor, View

class Exposure(HasTraits):
    duration = Range(low=0.0, high=3600.0, exclude_low=True, exclude_high=False,
                     editor=RangeEditor(mode='slider'))
    min_count = Int(1)
    max_count = Int(10000)
    # <1> count = Range(low=1, high=10000, exclude_low=False, exclude_high=False,
    #                   editor=RangeEditor(mode='slider'))
    count = Range(low=1, high=10000, exclude_low=False, exclude_high=False,
                  editor=RangeEditor(mode='slider',
                                     low_name='min_count', high_name='max_count'))
    traits_view = View(Item('duration'), Item('count'), buttons=['OK', 'Cancel'])

exposure = Exposure(duration=0.00032, count=1500)
exposure.configure_traits()

# Specifying `count` as per <1>, above, raises the following exception:
#   traits.trait_errors.TraitError: The 'count' trait of an Exposure instance must
#   be 1 <= a long integer <= 10000, but a value of 1.0 <class 'float'> was specified.
#
# A work-around is to declare min_count and max_count, and include them in the
# RangeEditor definition of `count`. Note, I am now duplicating `count` low and
# high limits in 2 places. Is there a better approach?

A RangeEditor does indeed default to returning float values, unless you override the default by setting the high or low parameters to int values. The low and high parameters of a Range trait are not passed to the trait's default editor (see the source here), so you have to set them separately. (It should be evident from the source why using low_name and high_name did work.) IIUC, this is a deliberate decoupling of the trait from its view, but you're welcome to open an issue in Traits for discussion if you think the behavior should be different.
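For example, if you want to keep the editor inside the trait definition, a minimal sketch of the <1> variant that should avoid the float coercion is to give the editor int values for low and high directly:

count = Range(low=1, high=10000, exclude_low=False, exclude_high=False,
              editor=RangeEditor(low=1, high=10000, mode='slider'))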
One thing that may help clarify your code is to separate the definitions of your model and view. Specifying editors inside your trait definitions makes it easy to mix up the parameters of the trait with those of its view. If the constraints on a value are intrinsic to the Exposure model, use a Range; otherwise just use the underlying trait type. Defining the view separately makes it clearer which constraints belong to the model and which to the view. Also, if you later want a second view with different constraints, that becomes trivial.
Here's how I would propose re-writing your code:
from traits.api import HasTraits, Int, Range
from traitsui.api import Item, RangeEditor, View

class Exposure(HasTraits):
    duration = Range(low=0.0, high=3600.0, exclude_low=True, exclude_high=False)
    count = Int(1500)

my_view = View(
    Item('duration', editor=RangeEditor(mode='slider')),
    Item('count', editor=RangeEditor(low=1, high=10000, mode='slider')),
    buttons=['OK', 'Cancel'],
)

exposure = Exposure(duration=0.00032, count=1500)
exposure.configure_traits(view=my_view)

Related

How to implement an exponentially decaying learning rate in Keras following the global step

Look at the following example:
# encoding: utf-8
import math
import random

import numpy as np
from keras import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler

X = [i * 0.05 for i in range(100)]

def step_decay(epoch):
    initial_lrate = 1.0
    drop = 0.5
    epochs_drop = 2.0
    lrate = initial_lrate * math.pow(drop,
                                     math.floor((1 + epoch) / epochs_drop))
    return lrate

def build_model():
    model = Sequential()
    model.add(Dense(32, input_shape=(1,), activation='relu'))
    model.add(Dense(1, activation='linear'))
    adam = Adam(lr=0.5)
    model.compile(loss='mse', optimizer=adam)
    return model

model = build_model()
lrate = LearningRateScheduler(step_decay)
callback_list = [lrate]

for ep in range(20):
    X_train = np.array(random.sample(X, 10))
    y_train = np.sin(X_train)
    X_train = np.reshape(X_train, (-1, 1))
    y_train = np.reshape(y_train, (-1, 1))
    model.fit(X_train, y_train, batch_size=2, callbacks=callback_list,
              epochs=1, verbose=2)
In this example, the LearningRateScheduler does not change the learning rate at all, because each call to fit() restarts the epoch index at 0, so step_decay always returns the constant 1.0. Instead of setting epochs > 1 directly, I have to use the outer loop shown in the example and run just one epoch inside each iteration. (This is the case when I implement deep reinforcement learning, instead of supervised learning.)
My question is: how do I set an exponentially decaying learning rate in my example, and how do I get the learning rate in each iteration of ep?
The schedule function you pass to LearningRateScheduler can actually take two arguments.
According to the Keras documentation, the scheduler is
a function that takes an epoch index as input (integer, indexed from
0) and current learning rate and returns a new learning rate as output
(float).
So, basically, simply replace your initial_lrate with the function parameter, like so:
def step_decay(epoch, lr):
    # initial_lrate = 1.0  # no longer needed
    drop = 0.5
    epochs_drop = 2.0
    lrate = lr * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
    return lrate
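As for reading the learning rate in each iteration of ep: a simple sketch is to query the optimizer's lr variable through the backend inside your outer loop (this uses the standard Keras backend API, nothing specific to your model):

import keras.backend as K

for ep in range(20):
    # ... build X_train/y_train and call model.fit() as before ...
    print('lr after iteration', ep, ':', K.get_value(model.optimizer.lr))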
The actual function you implement is not exponential decay (as your title says) but a staircase function.
Also, you mention that your learning rate does not change inside your loop. That's true because you set model.fit(..., epochs=1, ...) while epochs_drop = 2.0. I am not sure whether that is what you intended; you are providing a toy example, and it's not clear in that case.
I would like to add the more common case, where you don't mix a for loop with fit() and just pass a larger epochs parameter to a single fit() call. In this case you have the following options:
First of all, Keras provides decay functionality itself in its predefined optimizers. For example, in your case with Adam() the actual code is:
lr = lr * (1. / (1. + self.decay * K.cast(self.iterations, K.dtype(self.decay))))
which is not exactly exponential either, and it is somewhat different from TensorFlow's version. Also, it is applied only when decay > 0.0, obviously.
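To enable the built-in decay you just pass a nonzero decay when constructing the optimizer; a minimal sketch (the decay value is illustrative):

adam = Adam(lr=0.5, decay=1e-4)  # lr is reduced a little after every batch update
model.compile(loss='mse', optimizer=adam)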
To follow the TensorFlow convention of exponential decay, you should implement:
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
Depending on your needs, you can either implement your own Callback subclass and define the update function within it (see the steps below), or use LearningRateScheduler, which is in effect exactly that with some extra checking: a Callback subclass that updates the learning rate at the end of each epoch.
If you want finer handling of your learning rate policy (per batch, for example), you have to implement your own subclass, since as far as I know there is no stock subclass for this task. The good part is that it's super easy:
Create a subclass:
from keras.callbacks import Callback
import keras.backend as K

class LearningRateExponentialDecay(Callback):
and add the __init__() function, which initializes your instance with all the needed parameters and creates a global_step variable to keep track of the iterations (batches):
def __init__(self, init_learning_rate, decay_rate, decay_steps):
    super().__init__()
    self.init_learning_rate = init_learning_rate
    self.decay_rate = decay_rate
    self.decay_steps = decay_steps
    self.global_step = 0
Finally, add the actual update function inside the class (note that Python's power operator is **, not ^). Computing the decay from the stored initial rate, rather than from the optimizer's current value, follows the formula above and avoids compounding the decay on every batch:
def on_batch_begin(self, batch, logs=None):
    # Exponential decay following the TensorFlow convention.
    decayed_learning_rate = self.init_learning_rate * \
        self.decay_rate ** (self.global_step / self.decay_steps)
    K.set_value(self.model.optimizer.lr, decayed_learning_rate)
    self.global_step += 1
The really cool part is that if you want the above subclass to update every epoch instead, you can use on_epoch_begin(self, epoch, logs=None), which conveniently has epoch as a parameter in its signature. That case is even easier: you can skip global_step altogether (no need to keep track of it, unless you want a fancier way to apply your decay) and use epoch in its place.
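For completeness, attaching the callback looks like this sketch (the parameter values are illustrative, not tuned):

lr_decay = LearningRateExponentialDecay(init_learning_rate=0.5,
                                        decay_rate=0.5,
                                        decay_steps=100)
model.fit(X_train, y_train, batch_size=2,
          callbacks=[lr_decay], epochs=20, verbose=2)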

Overriding `Comparison method violates its general contract` exception

I have a comparator like this:
lazy val seq = mapping.toSeq.sortWith { case ((_, set1), (_, set2)) =>
  // Just propose all the most connected nodes first to the users,
  // but also allow less connected nodes to pop out sometimes.
  val popOutChance = random.nextDouble <= 0.1D && set2.size > 5
  if (popOutChance) set1.size < set2.size else set1.size > set2.size
}
My intention is to compare set sizes such that smaller sets have a 10% chance of appearing higher in the sorted list.
But this doesn't work: once I use it at runtime, it throws java.lang.IllegalArgumentException: Comparison method violates its general contract! How can I override it?
I think the problem here is that, every time two elements are compared, the outcome is random, thus violating the transitive property required of a comparator function in any sorting algorithm.
For example, let's say that some instance a compares as less than b, and then b compares as less than c. These results should imply that a compares as less than c. However, since your comparisons are stochastic, you can't guarantee that outcome. In fact, you can't even guarantee that a will be less than b next time they're compared.
So don't do that. No sort algorithm can handle it. (Such an approach also violates the referential transparency principle of functional programming and will make your program much harder to reason about.)
Instead, what you need to do is to decorate your map's members with a randomly assigned weighting - before attempting to sort them - so that they can be sorted consistently. However, since this happens at the start of a sort operation, the result of the sort will be different each time, which I think is what you're looking for.
It's not clear what type mapping has in your example, but it appears to be something like: Map[Any, Set[_]]. (You can replace the types as required - it's not that important to this approach. For example, say mapping actually has the type Map[String, Set[SomeClass]]; then you would replace the references below to Any with String and Set[_] with Set[SomeClass].)
First, we'll create a case class that we'll use to score and compare the map elements. Then we'll map the contents of mapping to a sequence of elements of this case class. Next, we sort those elements. Finally, we extract the tuple from the decorated class. The result should look something like this:
final case class Decorated(x: (Any, Set[_]), rand: Double = random.nextDouble)
  extends Ordered[Decorated] {

  // Calculate a rank for this element. You'll need to change this to suit your
  // precise requirements. Here, if rand is less than 0.1 (a 10% chance), I'm
  // adding 5 to the size; otherwise, I'll report the actual size. This allows
  // transitive comparisons, since rand doesn't change once defined. Values are
  // negated so bigger sets come to the fore when sorted.
  private def rank: Int = {
    if (rand < 0.1) -(x._2.size + 5)
    else -x._2.size
  }

  // Compare this element with another, by their ranks.
  override def compare(that: Decorated): Int = rank.compare(that.rank)
}

// Now sort your mapping elements as follows and convert back to tuples.
lazy val seq = mapping.map(x => Decorated(x)).toSeq.sorted.map(_.x)
This should put the elements with larger sets towards the front, but there's a 10% chance that a set appears 5 bigger and so moves up the list. The result will be different each time the last line is re-executed, since map will create new random values for each element. However, during sorting, the ranks are fixed and do not change.
(Note that I'm setting the rank to a negative value. The Ordered[T] trait sorts elements in ascending order, so that - if we sorted purely by set size - smaller sets would come before larger sets. By negating the rank value, sorting will put larger sets before smaller sets. If you don't want this behavior, remove the negations.)

chisel3 arithmetic operations on Doubles

I am having trouble with arithmetic operations on doubles in Chisel. The examples I have seen use only the following types: Int, UInt, SInt.
I saw here that arithmetic operations were described only for SInt and UInt. What about Double?
I tried to declare my output out as Double but didn't find a way to do it, even though the output of my computation is a Double.
Is there a way to declare an input and an output of type Double in a Bundle?
Here is my code:
class hashfunc(val k: Int, val n: Int) extends Module {
  val a = k + k
  val io = IO(new Bundle {
    val b   = Input(UInt(k.W))
    val w   = Input(UInt(k.W))
    var out = Output(UInt(a.W))
  })
  val tabHash1 = new Array[Array[Double]](n)
  val x = new ArrayBuffer[(Double, Data)]
  val tabHash = new Array[Double](tabHash1.size)
  for (ind <- tabHash1.indices) {
    var sum = 0.0
    for (ind2 <- 0 until x.size) {
      sum += (x(ind2) * tabHash1(ind)(ind2))
    }
    tabHash(ind) = ((sum + io.b) / io.w)
  }
  io.out := tabHash.reduce(_ + _)
}
When I compile the code, I get a compile error.
Thank you for your kind attention, looking forward to your responses.
Chisel does have a native FixedPoint type, which may be of use. It is in the experimental package:
import chisel3.experimental.FixedPoint
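For instance, a minimal sketch of a module with fixed-point ports (the widths and binary points are illustrative, not derived from your design):

import chisel3._
import chisel3.experimental.FixedPoint

// Multiply two fixed-point inputs, each 16 bits wide with 8 fractional bits.
class FixedMul extends Module {
  val io = IO(new Bundle {
    val a   = Input(FixedPoint(16.W, 8.BP))
    val b   = Input(FixedPoint(16.W, 8.BP))
    val out = Output(FixedPoint(32.W, 16.BP))
  })
  io.out := io.a * io.b
}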
There is also a project, DspTools, that has simulation support for Doubles. It has some nice features, e.g. it allows modules to be parameterized on the numeric type (Complex, Double, FixedPoint, SInt), so that you can run simulations on Double to validate the desired mathematical behavior and then switch to a synthesizable number format that meets your precision criteria.
DspTools is an ongoing research project, and the team would appreciate feedback from outside users.
Operations on floating point numbers (Double, in this case) are not supported directly by any HDL. The reason is that while addition/subtraction/multiplication of fixed-point numbers is well defined, there are a lot of design-space trade-offs for floating-point hardware, as it is a much more complex piece of hardware.
That is to say, a high-performance floating point unit is a significant piece of hardware in its own right and would be time-shared in any realistic design.

Is there a range data structure in Scala?

I'm looking for a way to handle ranges in Scala.
What I need to do is:
given a set of ranges and a range A, return the range B whose intersection with A is not empty
given a set of ranges and a range A, remove/add range A from/to the set of ranges
given ranges A and B, create a range C = [min(A, B), max(A, B)]
I saw something similar in java - http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/RangeSet.html
Though subRangeSet returns only the intersected values and not the range in the set (or list of ranges) that it intersects with.
RangeSet<Integer> rangeSet = TreeRangeSet.create();
rangeSet.add(Range.closed(0, 10));
rangeSet.add(Range.closed(30, 40));
Range<Integer> range = Range.closed(12, 32);
System.out.println(rangeSet.subRangeSet(range)); // [30,32] (I need [30,40])
System.out.println(range.span(Range.closed(30, 40))); // [12,40]
There is an Interval[A] type in the spire math library. This allows working with ranges of arbitrary types that define an Order. Boundaries can be inclusive, exclusive or omitted. So e.g. (-∞, 0.0] or [0.0, 1.0) would be possible intervals of doubles.
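A rough sketch of what this looks like (treat the exact factory/method names as approximate and check the spire docs):

import spire.implicits._
import spire.math.Interval

val a = Interval.closed(0.0, 1.0)    // [0.0, 1.0]
val b = Interval.openUpper(0.5, 2.0) // [0.5, 2.0)
val i = a intersect b                // [0.5, 1.0]
val u = a union b                    // [0.0, 2.0), the span of the two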
Here is a library intervalset for working with sets of non-overlapping intervals (IntervalSeq or IntervalTrie) as well as maps of intervals to arbitrary values (IntervalMap).
Here is a related question that describes how to use IntervalSeq with DateTime.
Note that if the type you want to use is 64bit or less (basically any primitive), IntervalTrie is extremely fast. See the Benchmarks.
As Tzach Zohar has mentioned in the comment, if all you need is a range of Int, go for scala.collection.immutable.Range:
val rangeSet = Set(0 to 10, 30 to 40)
val r = 12 to 32
rangeSet.filter(range => range.contains(r.start) || range.contains(r.end))
If you need it for another underlying type, implement it yourself; it's easy for your use case.
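For instance, a rough sketch of requirements 1 and 3 from the question on top of plain inclusive Ranges (the helper names are made up for illustration):

// All ranges in the set that overlap the query range r (inclusive bounds).
def intersecting(set: Set[Range], r: Range): Set[Range] =
  set.filter(q => q.start <= r.end && r.start <= q.end)

// The smallest range spanning both a and b (like Guava's span).
def span(a: Range, b: Range): Range =
  math.min(a.start, b.start) to math.max(a.end, b.end)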

MATLAB: dynamic variable definitions

For a numerical simulation in MATLAB I have parameters defined in an .m file.
% Parameters as simple definitions
amb.T = 273.15 + 25;        % ambient temperature [K]
amb.P = 101325;             % ambient pressure [Pa]
combustor.T = 273.15 + 800; % [K]
combustor.P = 100000;       % [Pa]
combustor.lambda = 1.1;
fuel.x.CH4 = 0.5;           % [0..1]
fuel.n = 1;
air.x.O2 = 0.21;

% More complex definitions consisting of other params
air.P = combustor.P;
air.T = amb.T;
air.n = fuel.x.CH4 * 2 * fuel.n * combustor.lambda / air.x.O2;
Consider this set the 'default' definitions. For running a single simulation, these definitions work fine.
It gets more complicated when I want to change some of these parameters programmatically for a parameter study (the effect of changing parameters on the results), that is, to perform multiple simulations in a for loop. In the script performing the study I want to change the definitions of several parameters beforehand, i.e. overwrite the default definitions. Is there a way to do this without touching the default definitions in code (commenting them out/overwriting them literally)? It should be possible to change any parameter in the study script while picking up the default definitions from the listing above for all the others (or the other way round).
Let me illustrate the problem with an example: if I want to vary combustor.lambda (say, from 0.9 to 1.3), the field air.n has to be evaluated again for the change to take effect in the actual simulation. I could evaluate the listing again, but that way I would lose the study-defined combustor.lambda to the default one.
I am thinking about the following solutions, but I cannot work out how to do them:
Use references/handles so that the struct fields hold only the definitions, not the actual values. This would allow changing default definitions before 'parsing' the whole struct to get the actual values.
Evaluate the default definition set with a function that takes preliminarily defined (non-default) definitions into account, i.e. skips those lines of the default definition set during evaluation.
Any OOP approach. Of course, this is not limited to struct data types; but then again, maybe there are useful functions for structs?
Edit:
The purpose of the default set is to leave the programmer as free as possible in choosing the varying parameters, with all other parameters keeping their default definitions, which can be independent (= plain values) as well as dependent (= equations like air.n).
% One default parameter set
S = struct('T', 25, 'P', 101000, 'lambda', .5, 'fuel', .5);
GetNByLambda = @(fuel, lambda) fuel * 2 * lambda;
T = struct('P', S.P, 'n', GetNByLambda(S.fuel, S.lambda));

% Add more sets
S(end+1) = struct('T', 200, 'P', 10000, 'lambda', .8, 'fuel', .7);
T(end+1) = struct('P', S(end).P, 'n', GetNByLambda(S(end).fuel, S(end).lambda));

% Iterate over the parameter sets
for ii = 1:length(S)
    disp(S(ii))
    disp(T(ii))
end