I'm looking for a way to handle ranges in Scala.
What I need to do is:
given a set of ranges and a range A, return the ranges B in the set whose intersection with A is not empty
given a set of ranges and a range A, remove/add A from/to the set of ranges
given ranges A and B, create a range C = [min(A,B), max(A,B)]
I saw something similar in Java - http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/RangeSet.html
Though subRangeSet returns only the intersected values and not the ranges in the set (or a list of ranges) that it intersects with.
RangeSet<Integer> rangeSet = TreeRangeSet.create();
rangeSet.add(Range.closed(0, 10));
rangeSet.add(Range.closed(30, 40));
Range<Integer> range = Range.closed(12, 32);
System.out.println(rangeSet.subRangeSet(range)); // [30,32] (I need [30,40])
System.out.println(range.span(Range.closed(30, 40))); // [12,40]
There is an Interval[A] type in the Spire math library. It allows working with ranges of arbitrary types that define an Order. Boundaries can be inclusive, exclusive or omitted, so e.g. (-∞, 0.0] or [0.0, 1.0) are possible intervals of doubles.
Here is a library intervalset for working with sets of non-overlapping intervals (IntervalSeq or IntervalTrie) as well as maps of intervals to arbitrary values (IntervalMap).
Here is a related question that describes how to use IntervalSeq with DateTime.
Note that if the type you want to use is 64-bit or less (basically any primitive), IntervalTrie is extremely fast. See the Benchmarks.
As Tzach Zohar mentioned in the comments, if all you need are ranges of Int, go for scala.collection.immutable.Range:
val rangeSet = Set(0 to 10, 30 to 40)
val r = 12 to 32
// keep every range that overlaps r (non-empty intersection)
rangeSet.filter(range => range.start <= r.last && r.start <= range.last)
If you need it for another underlying type, implement it yourself - it's easy for your use case.
I have the file "global power plants" with a column "capacity_in_mw" (with numbers 30, 100, 45, ...) and another column is "primary_fuel" (Coal, Hydro, Oil, Solar, Nuclear, Wind, Coal).
I can generate a map in function of "capacity_in_mw" by setting the condition
plotdata = data.query('capacity_in_mw > 50')
Now I would like to generate a map as a function of "primary_fuel". Because the data in that column is text rather than numeric, how do I set up the condition?
Furthermore, when making the map, how do I assign color='black' for Coal, color='green' for Wind, color='yellow' for Solar, etc.?
This is in Python.
I am a novice, but I think I found the solution. It is more an issue of syntax: use == in the query to match the string value.
plotdata = data.query('primary_fuel == "Hydro"')
Also, lesson learned for the future: dig more before posting a question.
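For the colour part of the question, here is a minimal sketch; the fuel-to-colour dictionary, the file name and the latitude/longitude column names are assumptions, so adjust them to the actual dataset:
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv('global_power_plants.csv')  # assumed file name

# Hypothetical colour for each primary_fuel value
fuel_colors = {'Coal': 'black', 'Wind': 'green', 'Solar': 'yellow',
               'Hydro': 'blue', 'Oil': 'brown', 'Nuclear': 'red'}

plotdata = data.query('capacity_in_mw > 50')
plt.scatter(plotdata['longitude'], plotdata['latitude'],
            c=plotdata['primary_fuel'].map(fuel_colors).fillna('gray').tolist(),
            s=10)
plt.show()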
I am trying to use k-medoids to cluster some trajectory data I am working with (multiple points along the trajectory of an aircraft). I want to cluster these into a set number of clusters (as I know how many types of paths there should be).
I have found that k-medoids is implemented inside the pyclustering package, and am trying to use that. I am technically able to get it to cluster, but I do not know how to control the number of clusters. I originally thought it was directly tied to the number of elements inside what I called initial_medoids, but experimentation shows that it is more complicated than this. My relevant code snippet is below.
Note that traj_lst holds a list of lists, where each inner list corresponds to a single trajectory; D is the pairwise distance matrix built from them.
from scipy.spatial.distance import directed_hausdorff
import numpy as np

def hausdorff(u, v):
    # Symmetric Hausdorff distance between two trajectories
    d = max(directed_hausdorff(u, v)[0], directed_hausdorff(v, u)[0])
    return d

traj_count = len(traj_lst)
D = np.zeros((traj_count, traj_count))

for i in range(traj_count):
    for j in range(i + 1, traj_count):
        distance = hausdorff(traj_lst[i], traj_lst[j])
        D[i, j] = distance
        D[j, i] = distance
from pyclustering.cluster.kmedoids import kmedoids
initial_medoids = [104, 345, 123, 1]
kmedoids_instance = kmedoids(traj_lst, initial_medoids)
kmedoids_instance.process()
cluster_lst = kmedoids_instance.get_clusters()[0]
num_clusters = len(np.unique(cluster_lst))
print('There were %i clusters found' %num_clusters)
I have a total of 1900 trajectories, and the above code finds 1424 clusters. I had expected that I could control the number of clusters through the length of initial_medoids, as I did not see any option to input the number of clusters into the program, but this seems unrelated. Could anyone guide me as to the mistake I am making? How do I choose the number of clusters?
To obtain the clusters, you need to call get_clusters():
cluster_lst = kmedoids_instance.get_clusters()
not get_clusters()[0], which is just the list of object indexes in the first cluster:
cluster_lst = kmedoids_instance.get_clusters()[0]
And yes, you can control the number of clusters via initial_medoids: one initial medoid per cluster.
It is true that you can control the number of clusters; it corresponds to the length of initial_medoids.
The documentation is not clear about this. The get_clusters function "Returns list of medoids of allocated clusters represented by indexes from the input data", so this function does not return cluster labels; it returns the indexes of rows in your original (input) data.
Please check the shape of cluster_lst in your example, using .get_clusters() and not .get_clusters()[0] as annoviko suggested. In your case, this shape should be (4,), so you have a list of four elements (clusters), each containing the indexes of rows in your original data.
To get, for example, data from the first cluster, use:
kmedoids_instance = kmedoids(traj_lst, initial_medoids)
kmedoids_instance.process()
cluster_lst = kmedoids_instance.get_clusters()

# cluster_lst[0] is a list of indexes into the input data
traj_lst_first_cluster = [traj_lst[i] for i in cluster_lst[0]]
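If per-trajectory labels are handier than lists of indexes, a small sketch (using only cluster_lst and traj_lst from above) can invert the structure:
import numpy as np

# One label per trajectory; cluster_lst[k] holds the indexes of the members of cluster k
labels = np.full(len(traj_lst), -1, dtype=int)
for cluster_id, member_indexes in enumerate(cluster_lst):
    labels[member_indexes] = cluster_id

# With four initial medoids this should report four clusters
print('There were %i clusters found' % len(cluster_lst))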
I created a routing problem and added some dimensions to it. A solution assignment is found, and I want to know the cumulative value at each index. I noticed that the CumulVar of an assignment not only has a Value method but also Min and Max methods. Apparently the cumulative variables are implemented in such a way that they can represent intervals. I can see how setting
slack_max>0
fix_start_cumul_to_zero=False
introduces an ambiguity for the cumulative variables, as there is a choice in how to start and how much slack to add at each stop. But:
Question: How are the Min and Max at each index computed?
You can get the Min and Max range of a given node index from solution.Min(dimension.CumulVar(index)) and solution.Max(dimension.CumulVar(index)).
Note you'll get Min and Max exactly the same when slack_max=0 (unless you know something I don't ;)).
Assuming you are using an output solution object solution and a time dimension time_dimension, this will store them as a dict of min-max tuples per vehicle; adapt the output format as you wish:
time_dict = {}
for vehicle_id in range(num_vehicles):
    vehicle_time_dict = {}
    index = routing.Start(vehicle_id)
    # Time window of the start node
    vehicle_time_dict[index] = (solution.Min(time_dimension.CumulVar(index)),
                                solution.Max(time_dimension.CumulVar(index)))
    # Walk the route until the end node, recording each node's window
    while not routing.IsEnd(index):
        index = solution.Value(routing.NextVar(index))
        index_min = solution.Min(time_dimension.CumulVar(index))
        index_max = solution.Max(time_dimension.CumulVar(index))
        vehicle_time_dict[index] = (index_min, index_max)
    time_dict[vehicle_id] = vehicle_time_dict
routing.IsEnd(index) returns True if it's the last index of that vehicle's route (or anywhere after the last index, so if it's 10 nodes long:
routing.IsEnd(8) will return False,
routing.IsEnd(9) will return True,
routing.IsEnd(10) will also return True, etc)
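For completeness, a small usage sketch (reusing the names above) that prints the stored windows once time_dict has been filled:
for vehicle_id, vehicle_time_dict in time_dict.items():
    print('Vehicle %d:' % vehicle_id)
    for index, (t_min, t_max) in vehicle_time_dict.items():
        print('  index %d: window [%d, %d]' % (index, t_min, t_max))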
Is it possible to perform a dynamic "where/filter" in a DataFrame?
I am running a "like" operation to remove items that match specific strings:
eventsDF.where(
    ~eventsDF.myColumn.like('FirstString%') &
    ~eventsDF.myColumn.like('anotherString%')
).count()
However I need to filter based on strings that come from another dataframe/list.
The solution that I was going for (which doesn't really work) involves a function that receives an index
# my_func(0) = "FirstString"
# my_func(1) = "anotherString"
def my_func(n):
    return str(item[n])

newDf.where(
    ~newDf.useragent.like(str(my_func(1)) + '%')
).count()
but I'm struggling to make it work by passing a range (mainly because it's a list instead of an integer)
newDf.where(
    ~newDf.useragent.like(str(my_func([i for i in range(2)])) + '%')
).count()
I don't want to go down the path of using "exec" or "eval" to perform it
You can build the individual conditions in a list comprehension and then reduce them into one expression (in Python 3, reduce comes from functools):
str_likes = [~df.column.like(s) for s in strings]
combined = reduce(lambda x, y: x & y, str_likes)
It's a little bit ugly, but it does what you want. You can also do this in a for loop like so:
bool_expr = ~df.column.like(strings[0])
for s in strings[1:]:
    bool_expr &= ~df.column.like(s)
df.where(bool_expr).count()
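Since the question says the strings come from another DataFrame, a minimal sketch along those lines - the DataFrame name stringsDF and its pattern column are assumptions - would be to collect them to the driver and then build the combined filter against eventsDF from the question:
from functools import reduce

# stringsDF and its 'pattern' column are hypothetical; adjust to your data
strings = [row['pattern'] for row in stringsDF.select('pattern').collect()]

# Exclude rows whose myColumn starts with any of the collected prefixes
conditions = [~eventsDF.myColumn.like(s + '%') for s in strings]
combined = reduce(lambda x, y: x & y, conditions)

eventsDF.where(combined).count()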
I have a comparator like this:
lazy val seq = mapping.toSeq.sortWith { case ((_, set1), (_, set2)) =>
  // Just propose all the most connected nodes first to the users
  // But also allow less connected nodes to pop out sometimes
  val popOutChance = random.nextDouble <= 0.1D && set2.size > 5
  if (popOutChance) set1.size < set2.size else set1.size > set2.size
}
My intention is to compare set sizes so that smaller sets may appear higher in the sorted list with a 10% chance.
But this does not work: at runtime it throws java.lang.IllegalArgumentException: Comparison method violates its general contract! as soon as I use the sort. How can I work around it?
I think the problem here is that, every time two elements are compared, the outcome is random, thus violating the transitive property required of a comparator function in any sorting algorithm.
For example, let's say that some instance a compares as less than b, and then b compares as less than c. These results should imply that a compares as less than c. However, since your comparisons are stochastic, you can't guarantee that outcome. In fact, you can't even guarantee that a will be less than b next time they're compared.
So don't do that. No sort algorithm can handle it. (Such an approach also violates the referential transparency principle of functional programming and will make your program much harder to reason about.)
Instead, what you need to do is to decorate your map's members with a randomly assigned weighting - before attempting to sort them - so that they can be sorted consistently. However, since this happens at the start of a sort operation, the result of the sort will be different each time, which I think is what you're looking for.
It's not clear what type mapping has in your example, but it appears to be something like Map[Any, Set[_]]. (You can replace the types as required - it's not that important to this approach. For example, if mapping actually has the type Map[String, Set[SomeClass]], then you would replace references below to Any with String and Set[_] with Set[SomeClass].)
First, we'll create a case class that we'll use to score and compare the map elements. Then we'll map the contents of mapping to a sequence of elements of this case class. Next, we sort those elements. Finally, we extract the tuple from the decorated class. The result should look something like this:
final case class Decorated(x: (Any, Set[_]), rand: Double = random.nextDouble)
  extends Ordered[Decorated] {

  // Calculate a rank for this element. You'll need to change this to suit your precise
  // requirements. Here, if rand is less than 0.1 (a 10% chance), I'm adding 5 to the size;
  // otherwise, I'll report the actual size. This allows transitive comparisons, since
  // rand doesn't change once defined. Values are negated so bigger sets come to the fore
  // when sorted.
  private def rank: Int = {
    if (rand < 0.1) -(x._2.size + 5)
    else -x._2.size
  }

  // Compare this element with another, by their ranks.
  override def compare(that: Decorated): Int = rank.compare(that.rank)
}
// Now sort your mapping elements as follows and convert back to tuples.
lazy val seq = mapping.map(x => Decorated(x)).toSeq.sorted.map(_.x)
This should put the elements with larger sets towards the front, but there's a 10% chance that a set appears 5 bigger and so moves up the list. The result will be different each time the last line is re-executed, since map will create new random values for each element. However, during sorting, the ranks are fixed and do not change.
(Note that I'm setting the rank to a negative value. The Ordered[T] trait sorts elements in ascending order, so that - if we sorted purely by set size - smaller sets would come before larger sets. By negating the rank value, sorting will put larger sets before smaller sets. If you don't want this behavior, remove the negations.)