Merge outputs into one list in Prolog

I have a Prolog assignment that requires us to make a list of all the pairs of numbers in the range between two given bounds. Using one predicate I can get it to produce the output below, but I don't know how to merge all the answers into a single list. Here is the output of calling the predicate:
?- i(L,5,7).
L = [(5, 5), (5, 6), (5, 7)] ;
L = [(6, 5), (6, 6), (6, 7)] ;
L = [(7, 5), (7, 6), (7, 7)] ;
And here is the code (the interval predicates are defined by the professor and are not allowed to be modified):
interval(X,L,H) :-
    number(X),
    number(L),
    number(H),
    !,
    X>=L,
    X=<H.
interval(X,X,H) :-
    number(X),
    number(H),
    X=<H.
interval(X,L,H) :-
    number(L),
    number(H),
    L<H,
    L1 is L+1,
    interval(X,L1,H).

i(L,X,Y):-
    interval(N2,X,Y),
    setof((N2,N),interval(N,X,Y),L).
I am looking for the output to be this instead:
L = [ (5, 5), (5, 6), (5, 7), (6, 5), (6, 6), (6, 7), (7, 5), (7, 6), (7, 7)]

The problem is that:
i(L,X,Y):-
    interval(N2,X,Y),
    setof((N2,N),interval(N,X,Y),L).
will first bind N2 to a single number in the interval, and then ask setof/3 to generate a set for that fixed number N2 and a variable number N.
You can, however, simply use a composite goal in setof/3:
i(L,X,Y) :-
    setof((N2,N),(interval(N2,X,Y),interval(N,X,Y)),L).
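With that definition, the query from the question yields the merged list directly:

?- i(L,5,7).
L = [(5,5),(5,6),(5,7),(6,5),(6,6),(6,7),(7,5),(7,6),(7,7)].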
Nevertheless, perhaps a more elegant (and probably more idiomatic Prolog) way is to define an interval_tuple/3 predicate:
interval_tuple(X,Y,(N,N2)) :-
    interval(N,X,Y),
    interval(N2,X,Y).
and then call setof/3 or findall/3 on that predicate:
i(L,X,Y) :-
    findall(Tup,interval_tuple(X,Y,Tup),L).
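Note the difference: setof/3 sorts and removes duplicate solutions, while findall/3 simply collects them in generation order, which for this generator happens to coincide with the sorted order:

?- i(L,5,7).
L = [(5,5),(5,6),(5,7),(6,5),(6,6),(6,7),(7,5),(7,6),(7,7)].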

You can do this concisely with CLP(FD):
:- use_module(library(clpfd)).

interval(Left, Right, (X,Y)) :-      % definition of one interval
    [X, Y] ins Left .. Right,
    label([X, Y]).

intervals(Left, Right, IntervalList) :-
    Left #=< Right,
    label([Left, Right]),
    findall(Interval, interval(Left, Right, Interval), IntervalList).  % find all intervals
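A sample query:

?- intervals(5, 7, L).
L = [(5,5),(5,6),(5,7),(6,5),(6,6),(6,7),(7,5),(7,6),(7,7)].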
I'm using the more descriptive name intervals/3 rather than simply i/3. I've also reordered the arguments a bit.

How to sort an array with a custom order in Scala

I have a collection of data like the following:
val data = Seq(
  ("M", 1),
  ("F", 2),
  ("F", 3),
  ("F/M", 4),
  ("M", 5),
  ("M", 6),
  ("F/M", 7),
  ("F", 8)
)
I would like to sort this collection according to the first value of each tuple. But I don't want to sort them in alphabetical order; I want them sorted like this: all Fs first, then all Ms, and finally all F/Ms (I don't care about the inner ordering of values with the same key).
I thought about extending the Ordering class, but that feels like overkill for such a simple problem. Any idea?
EDIT: See @Eastsun's comment below for an even simpler solution.
I finally came up with a simple solution based on a map:
val sortingOrder = Map("F" -> 0, "M" -> 1, "F/M" -> 2)
data.sortWith((p1, p2) => sortingOrder(p1._1) < sortingOrder(p2._1))
This will of course fail if there is an unknown key in data, but it will be fine in my case.
In order to avoid an error when a new key is met, we can do the following:
val sortingOrder = Map("F" -> 0, "M" -> 1, "F/M" -> 2)
val nKeys = sortingOrder.size
data.sortWith((p1, p2) => sortingOrder.getOrElse(p1._1, nKeys) < sortingOrder.getOrElse(p2._1, nKeys))
This will push tuples with unknown keys to the end of the list.
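For reference, the same ordering reads a little more directly with sortBy (a sketch reusing the sortingOrder and nKeys defined above; this may well be what the simpler solution in the comments amounts to):

// Sort by an Int key; tuples with unknown keys sort after all known ones.
val sorted = data.sortBy(p => sortingOrder.getOrElse(p._1, nKeys))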

OR-Tools: TSP with Profits

Is there an option to calculate the TSP with Profits using OR-Tools?
I tried to implement a revenue for every end node:
data['revenue'] = {(1, 0): 0,
                   (1, 2): 100,
                   (1, 3): 10,
                   (1, 4): 10000000000,
                   (2, 0): 0,
                   (2, 1): 1000,
                   (2, 3): 10,
                   (2, 4): 10000000000,
                   (3, 0): 0,
                   (3, 1): 1000,
                   (3, 2): 100,
                   (3, 4): 10000000000,
                   (4, 0): 0,
                   (4, 1): 1000,
                   (4, 2): 100,
                   (4, 3): 10
                   }
and then I added the following, from this question: Maximize profit with disregard to number of pickup-delivery done OR tools
for node, revenue in data["revenue"].items():
    start, end = node
    routing.AddDisjunction(
        [manager.NodeToIndex(end)], revenue
    )
    routing.AddDisjunction(
        [manager.NodeToIndex(start)], 0
    )
Unfortunately this is not working; I always get solutions that make no sense.
Can someone help or give me advice on how to implement profits in the TSP with OR-Tools?
Adding a start or end node to a disjunction makes no sense...
AddDisjunction must be applied to every node (or pair) EXCEPT the start and end nodes.
for node, revenue in data["revenue"].items():
    pickup, delivery = node
    pickup_index = manager.NodeToIndex(pickup)
    delivery_index = manager.NodeToIndex(delivery)
    routing.AddDisjunction(
        [pickup_index, delivery_index], revenue, 2  # cardinality
    )
Are the key values in your dict really pairs of nodes? They look like coordinates to me...
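If the profit really belongs to a single node rather than a pair, a minimal sketch would be one disjunction per visitable node, with the profit as the penalty for dropping that node (data['node_profit'] and data['depot'] are assumed here for illustration; they do not appear in the snippets above):

# Sketch: one disjunction per node, with the node's profit as the drop penalty,
# so the solver only skips a node when doing so saves more than the profit.
for node, profit in data['node_profit'].items():  # hypothetical per-node dict
    if node == data['depot']:
        continue  # start/end nodes must never be placed in a disjunction
    routing.AddDisjunction([manager.NodeToIndex(node)], profit)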

How do I remove the proper subsets from a list of sets in Scala?

I have a list of sets of integers as follows: {(1, 0), (0, 1, 2), (1, 2), (1, 2, 3, 4, 5), (3, 4)}.
I want to write a program in Scala that removes the sets that are proper subsets of some other set in the list, i.e. the final result would be: {(0,1,2), (1,2,3,4,5)}.
An O(n²) solution can be done by checking each set against the entire list, but that would be very expensive and does not scale well for ~100,000 sets. I also thought of generating edges from the sets, removing duplicate edges, and running a DFS, but I have no idea how to do that in Scala (in a Scala-ish way, not a one-to-one translation from Java code).
Individual elements (sets) need only be compared to other elements of the same size or larger.
val ss = List(Set(1, 0), Set(0, 1, 2), Set(1, 2), Set(1, 2, 3, 4, 5), Set(3, 4))

ss.sortBy(- _.size) match {          // largest sets first
  case Nil => Nil
  case hd::tl =>
    tl.foldLeft(List(hd)){ case (acc, s) =>
      // keep s only if it is not contained in an already-kept set
      if (acc.exists(s.forall(_))) acc
      else s::acc
    }
}
//res0: List[Set[Int]] = List(Set(0, 1, 2), Set(5, 1, 2, 3, 4))
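The slightly cryptic acc.exists(s.forall(_)) relies on a Set[Int] itself being a function Int => Boolean; an equivalent, more explicit spelling:

// Is every element of s contained in some already-kept set?
acc.exists(kept => s.forall(kept.contains))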

Pyspark - how to count distinct values per key after groupByKey?

I would like to find how many distinct values there are for each key. For example, suppose I have
x = sc.parallelize([("a", 1), ("b", 1), ("a", 1), ("b", 2), ("a", 2)])
And I have done the following using groupByKey:
sorted(x.groupByKey().map(lambda x : (x[0], list(x[1]))).collect())
x.groupByKey().mapValues(len).collect()
the output will be like:
[('a', [1, 1, 2]), ('b', [1, 2])]
[('a', 3), ('b', 2)]
However, I want only the distinct values in the list; the output should be like:
[('a', [1, 2]), ('b', [1, 2])]
[('a', 2), ('b', 2)]
I am very new to Spark and tried to apply the distinct() function somewhere, but all attempts failed :-(
Thanks a lot in advance!
You can use set instead of list -
sorted(x.groupByKey().map(lambda x : (x[0], set(x[1]))).collect())
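And if you also need the counts, the same idea extends to (a sketch):

x.groupByKey().mapValues(lambda vals: len(set(vals))).collect()
# [('a', 2), ('b', 2)]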
You can try a number of approaches for this. I solved it using the approach below:
from operator import add

x = sc.parallelize([("a", 1), ("b", 1), ("a", 1), ("b", 2), ("a", 2)])
x = x.map(lambda n: ((n[0], n[1]), 1))  # key by the full (key, value) pair
x.groupByKey().map(lambda n: (n[0][0], 1)).reduceByKey(add).collect()  # one 1 per distinct pair, summed per original key
Output:
[('b', 2), ('a', 2)]
Hope this helps.
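For completeness, the distinct() the question mentions also works here, because the RDD holds (key, value) pairs; a sketch:

x.distinct().countByKey()
# defaultdict(<class 'int'>, {'a': 2, 'b': 2})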

Reverse a word-frequency map in Scala

I have a word-frequency array like this:
[("hello", 1), ("world", 5), ("globle", 1)]
I have to reverse it such that I get a frequency-to-word-count map like this:
[(1, 2), (5, 1)]
Notice that since two words ("hello" and "globle") have frequency 1, the value in the reversed mapping is 2. However, since there is only one word with frequency 5, the value of that entry is 1. How can I do this in Scala?
Update:
I happened to figure this out as well:
arr.groupBy(_._2).map(x => (x._1,x._2.toList.length))
You can first group by the count, and then just take the size of each group:
val frequencies = List(("hello", 1), ("world", 5), ("globle", 1))
val reversed = frequencies.groupBy(_._2).mapValues(_.size).toList
reversed: List[(Int, Int)] = List((5,1), (1,2))
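On Scala 2.13+ the grouping and counting can be fused into one pass with groupMapReduce (a sketch; the order of the resulting list is not guaranteed):

// Key by the frequency, map every word to 1, and sum the 1s within each group.
val reversed = frequencies.groupMapReduce(_._2)(_ => 1)(_ + _).toList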