How to restrict the domain of a VehicleVar with disjunction for reload nodes? - or-tools

I would like to restrict the domain of the VehicleVar for non-depot nodes as well as reload nodes, where reloading is optional. For example, suppose I have 8 nodes and 2 vehicles such that:
Vehicle 1: Capacity (3)
Vehicle 2: Capacity (4)
0 -> depot (All vehicles start and end here)
1 -> reload (Allowed vehicles -> {1,2})
2 -> reload (Allowed vehicles -> {1,2})
3 -> drop point (demand = 1) (Allowed vehicle -> 1)
4 -> drop point (demand = 1) (Allowed vehicle -> 1)
5 -> drop point (demand = 2) (Allowed vehicle -> 1)
6 -> drop point (demand = 1) (Allowed vehicle -> 2)
7 -> drop point (demand = 2) (Allowed vehicle -> 2)
Expected visiting sequence:
Vehicle 1 -> [0,3,4,1,5,0]
Vehicle 2 -> [0,6,7,0]
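For reference, the data for this example could be laid out like this (a sketch only: the keys data['reloadNodes'] and data['vehiclesAllowed'] are the ones used in the code below, while the other keys and the 0-based vehicle ids are illustrative assumptions):
data = {
    'depot': 0,
    'reloadNodes': [1, 2],                # optional reload stops
    'demands': [0, 0, 0, 1, 1, 2, 1, 2],  # one entry per node
    'vehicleCapacities': [3, 4],
    # vehicles allowed per node; or-tools vehicle indices are 0-based,
    # so "Vehicle 1" is 0 and "Vehicle 2" is 1
    'vehiclesAllowed': {
        1: [0, 1], 2: [0, 1],    # reload nodes: any vehicle
        3: [0], 4: [0], 5: [0],  # drop points for vehicle 1
        6: [1], 7: [1],          # drop points for vehicle 2
    },
}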
The way I tried to achieve this:
Without a disjunction for the reload nodes (1st). (Edited)
# for i in data['reloadNodes']:
#     routing.AddDisjunction([manager.NodeToIndex(i)], 0)  # penalty 0: dropping the node is free
Added restriction
for node_i in range(routing.nodes()):
    if node_i == depot_node:
        continue  # skip the depot node
    index_i = manager.NodeToIndex(node_i)  # internal index
    allowed_vehicles = list(data['vehiclesAllowed'][node_i])  # vehicles allowed to visit node_i
    if node_i in data['reloadNodes']:
        allowed_vehicles.insert(0, -1)  # -1 because reload nodes are optional (may be dropped)
    routing.VehicleVar(index_i).SetValues(allowed_vehicles)
But this way I got a result in which all reload nodes were visited (in this case vehicle 2 visited reload node 2 just before its final depot stop, like this: [0,6,7,2,0]). I have tried with more reload nodes and got the same behavior every time.
And this was expected, because without the disjunctions every node must be visited. (Edited)
Then I tried with the disjunction (uncommented the code snippet from point 1st).
After this change I got no result within a 50-second run, ending with status ROUTING_FAIL_TIMEOUT.
Note: there was no restriction on trip max_time or max_length while trying this.
Please help me with this; I have the feeling that I have done something wrong with the disjunctions :)

The index of the VehicleVar of node_i is not node_i; it is index_manager.NodeToIndex(node_i).
The rest of the code seems to be correct.
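In other words, the fix is a one-liner (a minimal sketch; index_manager, node_i and allowed_vehicles are the names used above):
# Wrong: treats the problem's node id as the routing model's variable index.
# routing.VehicleVar(node_i).SetValues(allowed_vehicles)

# Right: map the node to the model's internal index first.
index_i = index_manager.NodeToIndex(node_i)
routing.VehicleVar(index_i).SetValues(allowed_vehicles)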

Related

What kind of variable to select for incrementing node labels in a community detection algorithm

I am working on a community detection algorithm that uses the concept of propagating labels to nodes. I have a problem selecting the right type for the Label_counter variable.
There is an algorithm called LPA (label propagation algorithm) which propagates labels to nodes through iterations; think of labels as a node property. The initial label of each node is its node id, and in each iteration nodes update their label to the most frequent label among their neighbors. The algorithm I am working on is similar to LPA. At first every node has the initial label 0, and then nodes get new labels. As nodes update and get new labels, based on some conditions the Label_counter should be incremented by one so that its value can be used as the label for other nodes, for example label = 1, label = 2 and so on. As an example, take the Zachary karate club dataset: it has 34 nodes and 2 communities.
The initial state is like this:
(1,0)
(2,0)
.
.
.
(34,0)
The first number is the node id and the second one is the label.
As nodes get new labels, Label_counter increments, and in the next iterations other nodes get new labels and Label_counter increments again.
(1,1)
(2,1)
(3,1)
.
.
.
(33,3)
(34,3)
Nodes with the same label belong to the same community.
The problem I have is this:
Because the nodes in the RDD and the variables are distributed across machines (each machine has its own copy of the variables) and executors cannot communicate with each other, if one executor updates Label_counter the other executors will not be informed of the new value of Label_counter, and nodes may get wrong labels. Is it correct to use an Accumulator as the label counter in this case, since Accumulators are shared variables across machines, or is there another way to handle this problem?
In Spark it is always complicated to compute index-like values, because they depend on data that is not present in all the partitions. (An Accumulator is not a good fit here: from within a task an accumulator is effectively write-only, and updates performed inside transformations may be re-applied when tasks are retried, so executors cannot reliably read a current counter value.) I can propose the following idea:
Compute the number of times the condition is met per partition.
Compute the cumulative sum of those counts, so that we know the initial increment of each partition.
Increment the values of each partition based on that initial increment.
Here is what the code could look like. Let me start by setting up a few things.
// Let's define some condition
def condition(node: Long) = node % 10 == 1

// step 0, generate the data
val rdd = spark.range(34)
  .select('id + 1)                    // node ids 1..34
  .repartition(10).rdd
  .map(r => (r.getAs[Long](0), 0))    // (nodeId, initial label 0)
  .sortBy(_._1).cache()
rdd.collect
Array[(Long, Int)] = Array((1,0), (2,0), (3,0), (4,0), (5,0), (6,0), (7,0), (8,0),
(9,0), (10,0), (11,0), (12,0), (13,0), (14,0), (15,0), (16,0), (17,0), (18,0),
(19,0), (20,0), (21,0), (22,0), (23,0), (24,0), (25,0), (26,0), (27,0), (28,0),
(29,0), (30,0), (31,0), (32,0), (33,0), (34,0))
Then the core of the solution:
// steps 1 and 2
val partIncrInit = rdd
  // to each partition, we associate the number of times we need to increment
  .mapPartitionsWithIndex { case (i, p) =>
    Iterator(i -> p.map(_._1).count(condition))
  }
  .collect.sorted      // sort by partition index
  .map(_._2)           // we don't need the index anymore
  .scanLeft(0)(_ + _)  // cumulative sum: initial increment of each partition
// step 3, we increment each partition based on this initial increment.
val result = rdd
  .mapPartitionsWithIndex { case (i, p) =>
    var incr = 0
    p.map { case (node, value) =>
      if (condition(node)) incr += 1
      (node, partIncrInit(i) + value + incr)
    }
  }
result.collect
Array[(Long, Int)] = Array((1,1), (2,1), (3,1), (4,1), (5,1), (6,1), (7,1), (8,1),
(9,1), (10,1), (11,2), (12,2), (13,2), (14,2), (15,2), (16,2), (17,2), (18,2),
(19,2), (20,2), (21,3), (22,3), (23,3), (24,3), (25,3), (26,3), (27,3), (28,3),
(29,3), (30,3), (31,4), (32,4), (33,4), (34,4))

take(n) has no effect after groupBy in RxJava2

I am trying to group several model instances by name and then use take(n) to take only a certain number of items per group, but somehow the take has no effect on the GroupedObservable. Here is the code.
Let's assume items contains a list of 10 items: 5 have the name "apple" and the other 5 have the name "pear".
Observable<Item> items....
Observable<Item> groupedItems = items.groupBy(Item::name)
.flatMap(it -> it.take(2));
So I imagined groupedItems would emit 2 "apples" and 2 "pears", but it emits all of them instead.
Is there something I am getting wrong? Do I need to do it differently?
Cancelled groups are recreated when the same key is encountered again: take(2) cancels the "apple" group after two items, and the next "apple" then starts a fresh group, so eventually every item gets through. You need to make sure the group is not stopped, and you have to ignore the further items in some fashion:
source.groupBy(func)
      .flatMap(group ->
          // publish lets take(5) deliver the first items while ignoreElements()
          // keeps the group subscribed, so it is never cancelled and recreated
          group.publish(p -> p.take(5).mergeWith(p.ignoreElements()))
      );
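Applied to the question's example (reusing items and Item::name from above; take(2) keeps two items per group):
Observable<Item> groupedItems = items.groupBy(Item::name)
    .flatMap(g -> g.publish(p -> p.take(2).mergeWith(p.ignoreElements())));
// groupedItems now emits 2 "apples" and 2 "pears"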

How do I work with the 1st and last element in an Observable?

Let's say I've got an Observable<Player> and I'd like to map it to another Observable<Integer>, where each Integer equals player.height, but there's a condition: I'd like to map all players, except that for the very first and the last one we should check one more thing. In iterative programming it would be something like this:
heights = []
num_of_players = len(players)
for idx in range(num_of_players):
    if idx == 0 or idx == num_of_players - 1:
        if isGoodEnough(players[idx]):
            heights.append(players[idx].height)
    else:
        heights.append(players[idx].height)
return heights
How do I rewrite this in the Rx way (assume I'm given an Observable instead of a list)?
Given:
Observable<Player> players;
Single<Integer> playerHeight(int playerId);
You have to split the sequence into first, middle and last with publish(Function) and then combine them back together:
players
    .publish(sharedPlayers -> {
        return Observable.merge(
            // work only on the very first player
            sharedPlayers.take(1)
                .filter(firstPlayer -> isGoodEnough(firstPlayer))
                .flatMapSingle(firstPlayer -> playerHeight(firstPlayer.playerId)),
            // work with the players that are neither first nor last
            sharedPlayers.skip(1).skipLast(1)
                .flatMapSingle(midPlayer -> playerHeight(midPlayer.playerId)),
            // work with the last player, which shouldn't be the first again
            sharedPlayers.skip(1).takeLast(1)
                .filter(lastPlayer -> isGoodEnough(lastPlayer))
                .flatMapSingle(lastPlayer -> playerHeight(lastPlayer.playerId))
        );
    })
    .subscribe(/* ... */);
Please adapt this solution as necessary.

OptaPlanner iterating through custom moves in original order

I have a basic scheduling task where I'm trying to schedule activities in a list of time slots. I'm creating an initial solution where I assign activities pretty close to the time slot that they should end up in, e.g.:
activity a -> timeSlot 10
activity b -> timeSlot 50
activity c -> timeSlot 100
and what I want from this point is for OptaPlanner to simply move those activities backwards, one timeSlot at a time, until none of my hard or soft constraints are broken.
I've created a custom change move factory where I'm trying to do exactly that:
public List<Move> createMoveList(ActivityScheduler activityScheduler) {
    List<Move> moveList = new ArrayList<>();
    List<TimeSlot> timeSlotList = activityScheduler.getTimeSlotList();
    for (Activity activity : activityScheduler.getActivityList()) {
        // from the activity's current slot, walk backwards to slot 0
        for (int n = activity.getStartingTimeSlot().getIndex(); n >= 0; n--) {
            moveList.add(new TimeSlotChangeMove(activity, timeSlotList.get(n)));
        }
    }
    return moveList;
}
And I've set my selection order to ORIGINAL in my config:
<localSearch>
  <moveListFactory>
    <selectionOrder>ORIGINAL</selectionOrder>
    <moveListFactoryClass>...TimeSlotChangeMoveFactory</moveListFactoryClass>
  </moveListFactory>
  ...
What I was hoping was that when it's moving, say, activity c, it would move it to timeSlot 99, then 98, 97, etc. until no constraints are broken. But that's not what's happening; e.g. one of the first steps shows:
2014-08-27 13:37:12.382 DEBUG 7401 --- [nio-8080-exec-8] o.o.c.i.l.DefaultLocalSearchSolverPhase :
Step index (1), time spend (326), score (-16hard/13soft), new best score (-16hard/13soft),
accepted/selected move count (1000/2242) for picked step (Do Homework #2 -> Required Slots: 2 Starting TimeSlot: TimeSlot 2035 => TimeSlot 1240).
It moves the activity way too far back (from slot 2035 to 1240). How can I get OptaPlanner to move the activity only as far back as is needed and no further?
Sounds like you want to use a <constructionHeuristic>, not a <localSearch>. Take a look at the docs chapter on Construction Heuristics.
If you do insist on using <localSearch>, configure a <pickEarlyType>. See the docs on Local Search, section on pickEarlyType.
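For the local search route, a minimal sketch of that configuration could look like this (the <forager> element and the pickEarlyType value are taken from the OptaPlanner Local Search docs; adapt them to your version):
<localSearch>
  <moveListFactory>
    <selectionOrder>ORIGINAL</selectionOrder>
    <moveListFactoryClass>...TimeSlotChangeMoveFactory</moveListFactoryClass>
  </moveListFactory>
  <forager>
    <!-- pick the first move that improves the best score, so an activity
         stops moving backwards as soon as an acceptable slot is found -->
    <pickEarlyType>FIRST_BEST_SCORE_IMPROVING</pickEarlyType>
  </forager>
</localSearch>
Combined with ORIGINAL selection order, the solver walks the factory's move list in order and picks the first improving move, instead of evaluating thousands of moves per step.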

How to get nodes that have a given amount of outgoing relationships with a given property in Neo4j Cypher?

In my domain a node can have several relationships of the same type to other entities. Each relationship has several properties, and I'd like to retrieve the nodes that are connected by at least 2 relationships that have a given property.
E.g.: a relationship between nodes has a property year. How do I find the nodes that have at least two outgoing relationships with year set to 2012?
My Cypher query so far looks like this (it gives a syntax error):
START x = node(*)
MATCH x-[r:RELATIONSHIP_TYPE]->y
WITH COUNT(r.year == 2012) AS years
WHERE HAS(r.year) AND years > 1
RETURN x;
I also tried nesting queries, but I believe that is not allowed in Cypher. The closest thing is the following, but I do not know how to get rid of the nodes with a count of 1:
START n = node(*)
MATCH n-[r:RELATIONSHIP_TYPE]->c
WHERE HAS(r.year) AND r.year = 2012
RETURN n, COUNT(r) AS counter
ORDER BY counter DESC
Try this query
START n = node(*)
MATCH n-[r:RELATIONSHIP_TYPE]->c
WHERE HAS(r.year) AND r.year=2012
WITH n, COUNT(r) AS rc
WHERE rc > 1
RETURN n, rc
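The same query in more recent Cypher (a sketch assuming Neo4j 2.x or later, where the START clause is unnecessary and HAS() has been replaced by null checks; the equality test on r.year already filters out relationships without that property):
MATCH (n)-[r:RELATIONSHIP_TYPE]->()
WHERE r.year = 2012
WITH n, COUNT(r) AS rc
WHERE rc > 1
RETURN n, rc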