OR-Tools: TSP with Profits

Is there an option to calculate the TSP with Profits using OR-Tools?
I tried to implement a revenue for every end node:
data['revenue'] = {
    (1, 0): 0,
    (1, 2): 100,
    (1, 3): 10,
    (1, 4): 10000000000,
    (2, 0): 0,
    (2, 1): 1000,
    (2, 3): 10,
    (2, 4): 10000000000,
    (3, 0): 0,
    (3, 1): 1000,
    (3, 2): 100,
    (3, 4): 10000000000,
    (4, 0): 0,
    (4, 1): 1000,
    (4, 2): 100,
    (4, 3): 10,
}
and then I added this, taken from the question Maximize profit with disregard to number of pickup-delivery done OR tools:
for node, revenue in data["revenue"].items():
    start, end = node
    routing.AddDisjunction(
        [manager.NodeToIndex(end)], revenue
    )
    routing.AddDisjunction(
        [manager.NodeToIndex(start)], 0
    )
Unfortunately this is not working; I always get solutions that make no sense.
Can someone help, or give me advice on how to implement profits in the TSP with OR-Tools?

Adding a start or end node to a disjunction makes no sense.
AddDisjunction must be applied to every node (or pair of nodes) EXCEPT the start and end nodes.
for node, revenue in data["revenue"].items():
    pickup, delivery = node
    pickup_index = manager.NodeToIndex(pickup)
    delivery_index = manager.NodeToIndex(delivery)
    routing.AddDisjunction(
        [pickup_index, delivery_index], revenue, 2  # cardinality
    )
Are the keys of your dict really pairs of nodes? They look like coordinates to me...
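As a sanity check for what a correct model should return, the trade-off can be sketched by brute force in plain Python (toy distance matrix and revenues invented for illustration; node 0 is the depot, all other nodes are optional):

```python
from itertools import permutations

# Hypothetical toy instance: node 0 is the depot, nodes 1-3 are optional.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
revenue = {1: 5, 2: 20, 3: 3}  # profit collected when a node is visited

def best_profit_tour(dist, revenue, depot=0):
    """Brute-force TSP with profits: choose the subset and order of
    optional nodes that maximizes collected revenue minus travel cost."""
    nodes = list(revenue)
    best = (0, (depot, depot))  # staying at the depot yields profit 0
    for r in range(1, len(nodes) + 1):
        for tour in permutations(nodes, r):
            path = (depot, *tour, depot)
            cost = sum(dist[a][b] for a, b in zip(path, path[1:]))
            profit = sum(revenue[n] for n in tour) - cost
            if profit > best[0]:
                best = (profit, path)
    return best

print(best_profit_tour(dist, revenue))  # → (8, (0, 1, 2, 0))
```

The disjunction penalties in the answer express the same trade-off inside OR-Tools: dropping a node costs its revenue, so the solver keeps a node only when the revenue exceeds the extra travel cost of visiting it.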

Related

How do I remove the proper subsets from a list of sets in Scala?

I have a list of sets of integers as follows: {(1, 0), (0, 1, 2), (1, 2), (1, 2, 3, 4, 5), (3, 4)}.
I want to write a program in Scala to remove the sets that are proper subsets of some set in the list, i.e. the final result would be: {(0, 1, 2), (1, 2, 3, 4, 5)}.
An O(n²) solution can be done by checking each set against the entire list, but that would be very expensive and does not scale well for ~100,000 sets. I also thought of generating edges from the sets, removing duplicate edges and running a DFS, but I have no idea how to do it in Scala (in a Scala-ish way, not translated one-to-one from Java code).
Individual elements (sets) need only be compared to other elements of the same size or larger.
val ss = List(Set(1, 0), Set(0, 1, 2), Set(1, 2), Set(1, 2, 3, 4, 5), Set(3, 4))
ss.sortBy(- _.size) match {
  case Nil => Nil
  case hd :: tl =>
    tl.foldLeft(List(hd)) { case (acc, s) =>
      if (acc.exists(s.forall(_))) acc // s.forall(k), with k a Set, tests s subsetOf k
      else s :: acc
    }
}
//res0: List[Set[Int]] = List(Set(0, 1, 2), Set(5, 1, 2, 3, 4))
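For comparison, the same largest-first idea can be sketched in Python (an illustrative translation, not the Scala answer itself): sort by size descending, then keep a set only if it is not contained in one already kept.

```python
def remove_proper_subsets(sets):
    """Keep only the sets that are not contained in another set of the list.
    Sorting largest-first means each set only needs to be checked against
    the already-kept, equal-or-larger sets."""
    kept = []
    for s in sorted(sets, key=len, reverse=True):
        if not any(s <= k for k in kept):  # s <= k: s is a subset of k
            kept.append(s)
    return kept

ss = [{1, 0}, {0, 1, 2}, {1, 2}, {1, 2, 3, 4, 5}, {3, 4}]
print(remove_proper_subsets(ss))  # → [{1, 2, 3, 4, 5}, {0, 1, 2}]
```

Like the Scala version, this is still quadratic in the worst case, but in practice each set is compared only against the (usually few) maximal sets found so far.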

How to store each element to dictionary and count dictionary value with pyspark?

I want to count element occurrences using a dictionary. I tried this code:
def f_items(data, steps=0):
    items = defaultdict(int)
    for element in data:
        if element in data:
            items[element] += 1
        else:
            items[element] = 1
    return items.items()
data = [[1, 2, 3, 'E'], [1, 2, 3, 'E'], [5, 2, 7, 112, 'A'] ]
rdd = sc.parallelize(data)
items = rdd.flatMap(lambda data: [y for y in f_items(data)], True)
print (items.collect())
The output of this code is shown below:
[(1, 1), (2, 1), (3, 1), ('E', 1), (1, 1), (2, 1), (3, 1), ('E', 1), (5, 1), (2, 1), (7, 1), (112, 1), ('A', 1)]
But it should show the following result:
[(1, 2), (2, 3), (3, 3), ('E', 2), (5, 1), (7, 1), (112, 1), ('A', 1)]
How to achieve this?
Your last step should be a reduceByKey function call on the items rdd.
final_items = items.reduceByKey(lambda x,y: x+y)
print (final_items.collect())
You can look into this link to see some examples of reduceByKey in scala, java and python.
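The flatMap-then-reduceByKey pipeline is the classic word-count pattern. The same aggregation can be sketched locally, without a Spark cluster, using collections.Counter (data list copied from the question):

```python
from collections import Counter

data = [[1, 2, 3, 'E'], [1, 2, 3, 'E'], [5, 2, 7, 112, 'A']]

# flatMap step: emit a (element, 1) pair for every element of every row
pairs = [(element, 1) for row in data for element in row]

# reduceByKey(lambda x, y: x + y) step: sum the counts per key
counts = Counter()
for key, value in pairs:
    counts[key] += value

print(dict(counts))
```

This mirrors what reduceByKey does across partitions: Spark first combines counts per partition, then merges the partial results by key.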

Merging time distributed tensor gives 'inbound node error'

In my network, I have some time-distributed convolutions. Batch size = 1 image, which breaks down into 32 sub-images; for each sub-image there are 3 features of dimension 6x6x256. I need to merge the 3 features corresponding to a particular image.
The tensor definitions look like:
out1 = TimeDistributed(Convolution2D(256, (3, 3), strides=(2, 2), padding='same', activation='relu'))(out1)
out2 = TimeDistributed(Convolution2D(256, (3, 3), strides = (2,2), padding='same', activation='relu'))(out2)
out3 = TimeDistributed(Convolution2D(256, (1, 1), padding='same', activation='relu'))(out3)
out1: <tf.Tensor 'time_distributed_3/Reshape_1:0' shape=(1, 32, 6, 6, 256) dtype=float32>
out2: <tf.Tensor 'time_distributed_5/Reshape_1:0' shape=(1, 32, 6, 6, 256) dtype=float32>
out3: <tf.Tensor 'time_distributed_6/Reshape_1:0' shape=(1, 32, 6, 6, 256) dtype=float32>
I have tried different merge techniques, like TimeDistributed(merge(...)), etc., but nothing works.
out = Lambda(lambda x: merge([x[0], x[1], x[2]], mode='concat'))([out1, out2, out3])
It gives a tensor of the correct dimensions, (1, 32, 6, 6, 768), which then passes through some flatten and dense layers. But when I build the model with
model = Model( .... , .... ), it gives an error:
File "/home/adityav/.virtualenvs/cv/local/lib/python2.7/site-packages/keras/engine/topology.py", line 1664, in build_map_of_graph
next_node = layer.inbound_nodes[node_index]
AttributeError: 'NoneType' object has no attribute 'inbound_nodes'
Any idea how to do this time-distributed concatenation when the tensors are 5-dimensional?
Thanks

Merge outputs into one list in Prolog

I have a Prolog assignment that requires us to make a list of all the pairs of numbers in the range between two given numbers. I can get it to output (using one predicate) the following, but I don't know how to merge all the outputs. Here is the output of calling the predicate:
?- i(L,5,7).
L = [(5, 5), (5, 6), (5, 7)] ;
L = [(6, 5), (6, 6), (6, 7)] ;
L = [(7, 5), (7, 6), (7, 7)] ;
And here is the code (the interval predicates are professor-defined and not allowed to be modified):
interval(X,L,H) :-
    number(X),
    number(L),
    number(H),
    !,
    X >= L,
    X =< H.
interval(X,X,H) :-
    number(X),
    number(H),
    X =< H.
interval(X,L,H) :-
    number(L),
    number(H),
    L < H,
    L1 is L+1,
    interval(X,L1,H).

i(L,X,Y) :-
    interval(N2,X,Y),
    setof((N2,N),interval(N,X,Y),L).
I am looking for the output to be this instead:
L = [ (5, 5), (5, 6), (5, 7), (6, 5), (6, 6), (6, 7), (7, 5), (7, 6), (7, 7)]
The problem is that:
i(L,X,Y) :-
    interval(N2,X,Y),
    setof((N2,N),interval(N,X,Y),L).
will first bind N2 to a single number in the interval, and then you ask setof/3 to generate a set for that one fixed N2 and a variable N.
You can however simply use a composite goal in setof/3:
i(L,X,Y) :-
    setof((N2,N),(interval(N2,X,Y),interval(N,X,Y)),L).
Nevertheless, perhaps a more elegant (and probably more idiomatic Prolog) way is to define an interval_tuple/3 predicate:
interval_tuple(X,Y,(N,N2)) :-
    interval(N,X,Y),
    interval(N2,X,Y).
and then call setof/3 or findall/3 on that predicate:
i(L,X,Y) :-
    findall(Tup,interval_tuple(X,Y,Tup),L).
You can do this concisely with CLP(FD):
:- use_module(library(clpfd)).

interval(Left, Right, (X,Y)) :-  % definition of one pair in the interval
    [X, Y] ins Left .. Right,
    label([X, Y]).

intervals(Left, Right, IntervalList) :-
    Left #=< Right,
    label([Left, Right]),
    findall(Interval, interval(Left, Right, Interval), IntervalList).  % find all pairs
I'm using the more descriptive name, intervals/3 rather than simply i/3. I've also reordered the arguments a bit.
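For comparison, the cross-product of an interval with itself is a one-liner in Python via itertools.product (a sketch outside Prolog, shown only to illustrate the expected result):

```python
from itertools import product

def interval_pairs(low, high):
    """All ordered pairs (x, y) with low <= x <= high and low <= y <= high."""
    return list(product(range(low, high + 1), repeat=2))

print(interval_pairs(5, 7))
# → [(5, 5), (5, 6), (5, 7), (6, 5), (6, 6), (6, 7), (7, 5), (7, 6), (7, 7)]
```

product(..., repeat=2) plays the role of the two nested interval/3 goals in interval_tuple/3: the first coordinate varies slowest, which matches the desired ordering.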

How to check if a list of matrix contains a given matrix in Maple

I have some problems in Maple.
If I have a matrix:
Matrix1 := Matrix(2, 2, {(1, 1) = 31, (1, 2) = -80, (2, 1) = -50, (2, 2) = 43});
I want to decide if it is in the below list:
MatrixList := [Matrix(2, 2, {(1, 1) = 31, (1, 2) = -80, (2, 1) = -50, (2, 2) = 43}), Matrix(2, 2, {(1, 1) = -61, (1, 2) = 77, (2, 1) = -48, (2, 2) = 9})];
I did the following:
evalb(Matrix1 in MatrixList);
but got "false".
Why? And how do I then write a program that decides whether a matrix is contained in a list of matrices?
Maple compares Matrices (rtables) by address, not by entries, so the in test only succeeds if the very same object is in the list; two distinct Matrices with equal entries are not considered equal. Compare the entries instead. Here's a much cheaper way than DrC's:
ormap(LinearAlgebra:-Equal, MatrixList, Matrix1)
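The same identity-versus-entries pitfall can be sketched in Python with a hypothetical Matrix class (all names invented for illustration): by default, Python objects also compare by identity, so the membership test fails until the entries are compared explicitly, which is what ormap(LinearAlgebra:-Equal, ...) does in Maple.

```python
class Matrix:
    """Minimal stand-in for a mutable matrix; like Maple rtables, instances
    compare by identity unless an entrywise comparison is written out."""
    def __init__(self, rows):
        self.rows = rows

m1 = Matrix([[31, -80], [-50, 43]])
matrix_list = [Matrix([[31, -80], [-50, 43]]),
               Matrix([[-61, 77], [-48, 9]])]

print(m1 in matrix_list)                            # identity test: False
print(any(m1.rows == m.rows for m in matrix_list))  # entrywise test: True
```

The any(...) line is the direct analogue of the ormap call: it short-circuits as soon as one entrywise match is found.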