I have an expensive function which I want to run as few times as possible with the following requirement:
I have several input values to try
If the function returns a value below a given threshold, I don't want to try other inputs
if no result is below the threshold, I want to take the result with the minimal output
I could not find a nice solution using Iterator's takeWhile/dropWhile, because I want the first matching element included. I just ended up with the following solution:
import scala.collection.mutable
import scala.util.control.Breaks._
val pseudoResult = Map("a" -> 0.6,"b" -> 0.2, "c" -> 1.0)
def expensiveFunc(s:String) : Double = {
pseudoResult(s)
}
val inputsToTry = Seq("a","b","c")
val inputIt = inputsToTry.iterator
val results = mutable.ArrayBuffer.empty[(String, Double)]
val earlyAbort = 0.5 // threshold
breakable {
while (inputIt.hasNext) {
val name = inputIt.next()
val res = expensiveFunc(name)
results += Tuple2(name,res)
if (res<earlyAbort) break()
}
}
println(results) // ArrayBuffer((a,0.6), (b,0.2))
val (name, bestResult) = results.minBy(_._2) // (b, 0.2)
If I set val earlyAbort = 0.1, the result should still be (b, 0.2), without evaluating all the cases again.
You can make use of Stream to achieve what you are looking for; remember that Stream is a kind of lazy collection that evaluates operations on demand.
Here is the Scala Stream documentation.
You only need to do this:
val pseudoResult = Map("a" -> 0.6,"b" -> 0.2, "c" -> 1.0)
val earlyAbort = 0.5
def expensiveFunc(s: String): Double = {
println(s"Evaluating for $s")
pseudoResult(s)
}
val inputsToTry = Seq("a","b","c")
val results = inputsToTry.toStream.map(input => input -> expensiveFunc(input))
val finalResult = results.find { case (k, res) => res < earlyAbort }.getOrElse(results.minBy(_._2))
If find does not return any value, you can use the same stream to find the min, and the function is not evaluated again; this is because of memoization:
The Stream class also employs memoization such that previously computed values are converted from Stream elements to concrete values of type A
Note that this code will fail if the original collection is empty; if you want to support empty collections you should replace minBy with sortBy(_._2).headOption and getOrElse with orElse:
val finalResultOpt = results.find { case (k, res) => res < earlyAbort }.orElse(results.sortBy(_._2).headOption)
And the output for this is:
Evaluating for a
Evaluating for b
finalResult: (String, Double) = (b,0.2)
finalResultOpt: Option[(String, Double)] = Some((b,0.2))
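Note that Stream is deprecated since Scala 2.13 in favour of LazyList, which also memoizes previously computed elements. A minimal sketch of the same approach on 2.13+ (reusing inputsToTry, expensiveFunc and earlyAbort from above):
val results = inputsToTry.to(LazyList).map(input => input -> expensiveFunc(input))
val finalResult = results.find { case (_, res) => res < earlyAbort }.getOrElse(results.minBy(_._2))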
The clearest, simplest thing to do is fold over the input, passing forward only the current best result.
val inputIt :Iterator[String] = inputsToTry.iterator
val earlyAbort = 0.5 // threshold
inputIt.foldLeft(("",Double.MaxValue)){ case (low,name) =>
if (low._2 < earlyAbort) low
else Seq(low, (name, expensiveFunc(name))).minBy(_._2)
}
//res0: (String, Double) = (b,0.2)
It calls expensiveFunc() only as many times as needed, but it does walk through the entire input iterator. If that's still too onerous (lots of input), then I'd go with a tail-recursive method.
val inputIt :Iterator[String] = inputsToTry.iterator
val earlyAbort = 0.5 // threshold
def bestMin(low :(String,Double) = ("",Double.MaxValue)) :(String,Double) = {
if (inputIt.hasNext) {
val name = inputIt.next()
val res = expensiveFunc(name)
if (res < earlyAbort) (name, res)
else if (res < low._2) bestMin((name,res))
else bestMin(low)
} else low
}
bestMin() //res0: (String, Double) = (b,0.2)
Use a view on your input list, or try the following:
val pseudoResult = Map("a" -> 0.6, "b" -> 0.2, "c" -> 1.0)
def expensiveFunc(s: String): Double = {
println(s"executed for ${s}")
pseudoResult(s)
}
val inputsToTry = Seq("a", "b", "c")
val earlyAbort = 0.5 // threshold
def doIt(): List[(String, Double)] = {
inputsToTry.foldLeft(List[(String, Double)]()) {
case (n, name) =>
val res = expensiveFunc(name)
if(res < earlyAbort) {
return n++List((name, res))
}
n++List((name, res))
}
}
val (name, bestResult) = doIt().minBy(_._2)
println(name)
println(bestResult)
The output:
executed for a
executed for b
b
0.2
As you can see, only a and b are evaluated, and not c. (The early exit works because return inside the function passed to foldLeft is a non-local return from the enclosing doIt method, which is why the fold is wrapped in a method.)
This is one of the use-cases for tail-recursion:
import scala.annotation.tailrec
val pseudoResult = Map("a" -> 0.6,"b" -> 0.2, "c" -> 1.0)
def expensiveFunc(s:String) : Double = {
pseudoResult(s)
}
val inputsToTry = Seq("a","b","c")
val earlyAbort = 0.5 // threshold
@tailrec
def f(s: Seq[String], result: Map[String, Double] = Map()): Map[String, Double] = s match {
case Nil => result
case h::t =>
val expensiveCalculation = expensiveFunc(h)
val intermediateResult = result + (h -> expensiveCalculation)
if(expensiveCalculation < earlyAbort) {
intermediateResult
} else {
f(t, intermediateResult)
}
}
val result = f(inputsToTry)
println(result) // Map(a -> 0.6, b -> 0.2)
val (name, bestResult) = result.minBy(_._2) // ("b", 0.2)
If you implement takeUntil and use it, you'd still have to go through the list once more to get the lowest one if you don't find what you are looking for. Probably a better approach would be to have a function that combines find with reduceOption, returning early if something is found or else returning the result of reducing the collection to a single item (in your case, finding the smallest one).
The result is comparable with what you could achieve using a Stream, as highlighted in a previous reply, but avoids leveraging memoization, which can be cumbersome for very large collections.
A possible implementation could be the following:
import scala.annotation.tailrec
def findOrElse[A](it: Iterator[A])(predicate: A => Boolean,
orElse: (A, A) => A): Option[A] = {
@tailrec
def loop(elseValue: Option[A]): Option[A] = {
if (!it.hasNext) elseValue
else {
val next = it.next()
if (predicate(next)) Some(next)
else loop(Option(elseValue.fold(next)(orElse(_, next))))
}
}
loop(None)
}
Let's add our inputs to test this:
def f1(in: String): Double = {
println("calling f1")
Map("a" -> 0.6, "b" -> 0.2, "c" -> 1.0, "d" -> 0.8)(in)
}
def f2(in: String): Double = {
println("calling f2")
Map("a" -> 0.7, "b" -> 0.6, "c" -> 1.0, "d" -> 0.8)(in)
}
val inputs = Seq("a", "b", "c", "d")
As well as our helpers:
def apply[IN, OUT](in: IN, f: IN => OUT): (IN, OUT) =
in -> f(in)
def threshold[A](a: (A, Double)): Boolean =
a._2 < 0.5
def compare[A](a: (A, Double), b: (A, Double)): (A, Double) =
if (a._2 < b._2) a else b
We can now run this and see how it goes:
val r1 = findOrElse(inputs.iterator.map(apply(_, f1)))(threshold, compare)
val r2 = findOrElse(inputs.iterator.map(apply(_, f2)))(threshold, compare)
val r3 = findOrElse(Map.empty[String, Double].iterator)(threshold, compare)
r1 is Some(b, 0.2), r2 is Some(b, 0.6) and r3 is (reasonably) None. In the first case, since we use a lazy iterator and terminate early, we only invoke f1 twice.
You can have a look at the results and can play with this code here on Scastie.
I have a Scala list below :
val numList = List(1,2,3,4,5,1,2)
I want to get the indexes of matching element pairs in the list. The output should look like (0,5),(1,6).
How can I achieve this using map?
def catchDuplicates(num : List[Int]) : (Int , Int) = {
val count = 0;
val emptyMap: HashMap[Int, Int] = HashMap.empty[Int, Int]
for (i <- num)
if (emptyMap.contains(i)) {
emptyMap.put(i, (emptyMap.get(i)) + 1) }
else {
emptyMap.put(i, 1)
}
}
Let's make the challenge a little more interesting.
val numList = List(1,2,3,4,5,1,2,1)
Now the result should be something like (0, 5, 7),(1, 6), which makes it pretty clear that returning one or more tuples is not going to be feasible. Returning a List of List[Int] would make much more sense.
def catchDuplicates(nums: List[Int]): List[List[Int]] =
nums.zipWithIndex //List[(Int,Int)]
.groupMap(_._1)(_._2) //Map[Int,List[Int]]
.values //Iterable[List[Int]]
.filter(_.lengthIs > 1)
.toList //List[List[Int]]
You might also add a .view in order to minimize the number of traversals and intermediate collections created.
def catchDuplicates(nums: List[Int]): List[List[Int]] =
nums.view
.zipWithIndex
.groupMap(_._1)(_._2)
.collect{case (_,vs) if vs.sizeIs > 1 => vs.toList}
.toList
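A quick sanity check with the extended list (group order comes from an unordered Map, so it may vary):
catchDuplicates(List(1,2,3,4,5,1,2,1)) // e.g. List(List(0, 5, 7), List(1, 6))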
How can I achieve this using map?
You can't.
Because you only want to return the indexes of the elements that appear twice, which is a very different kind of transformation than the one map expects.
You can use foldLeft though.
object catchDuplicates {
final case class Result[A](elem: A, firstIdx: Int, secondIdx: Int)
private final case class State[A](seenElements: Map[A, Int], duplicates: List[Result[A]]) {
def next(elem: A, idx: Int): State[A] =
seenElements.get(key = elem).fold(
ifEmpty = this.copy(seenElements = this.seenElements + (elem -> idx))
) { firstIdx =>
State(
seenElements = this.seenElements.removed(key = elem),
duplicates = Result(elem, firstIdx, secondIdx = idx) :: this.duplicates
)
}
}
private object State {
def initial[A]: State[A] =
State(
seenElements = Map.empty,
duplicates = List.empty
)
}
def apply[A](data: List[A]): List[Result[A]] =
data.iterator.zipWithIndex.foldLeft(State.initial[A]) {
case (acc, (elem, idx)) =>
acc.next(elem, idx)
}.duplicates // You may add a reverse here if order is important.
}
Which can be used like this:
val numList = List(1,2,3,4,5,1,2)
val result = catchDuplicates(numList)
// result: List[Result] = List(Result(2,1,6), Result(1,0,5))
You can see the code running here.
I think returning a tuple is not a good option; instead you should try a Map, like this:
import scala.annotation.tailrec
object FindIndexOfDupElement extends App {
val numList = List(1, 2, 3, 4, 5, 1, 2)
@tailrec
def findIndex(elems: List[Int], res: Map[Int, List[Int]] = Map.empty, index: Int = 0): Map[Int, List[Int]] = {
elems match {
case head :: rest =>
if (res.get(head).isEmpty) {
findIndex(rest, res ++ Map(head -> (index :: Nil)), index + 1)
} else {
val updatedMap: Map[Int, List[Int]] = res.map {
case (key, indexes) if key == head => (key, (indexes :+ index))
case (key, indexes) => (key, indexes)
}
findIndex(rest, updatedMap, index + 1)
}
case _ => res
}
}
println(findIndex(numList).filter(x => x._2.size > 1))
}
You can clearly see the number (key) and the respective indexes in the map:
HashMap(1 -> List(0, 5), 2 -> List(1, 6))
Following code does not compile with Scala 2.13.6:
val a = Map(0 -> "0")
val b = Map(1 -> "1")
val c = a.view ++ b.view
c.contains(0)
The error is:
value contains is not a member of scala.collection.View[(Int, String)]
A similar error is shown in Scala 3 or Scala 2.12.15.
I find this unexpected, as the implementation of concat seems to suggest the result should be a map (mapFactory is used to produce the result).
How can I concatenate two MapViews to get a MapView again?
Based on "Remove concat, ++ and + overloads from MapView", it was removed by design, but perhaps you could still provide your own extension method that mimics the previous implementation, something like so:
import scala.collection.{AbstractMapView, MapView}
implicit class ConcatMapView[K, +V](left: MapView[K, V]) {
def +++[V1 >: V](right: MapView[K, V1]): MapView[K, V1] =
new AbstractMapView[K, V1] {
def get(key: K): Option[V1] = right.get(key) match {
case s # Some(_) => s
case _ => left.get(key)
}
def iterator: Iterator[(K, V1)] = left.iterator
.filter { case (k, _) => !right.contains(k) }
.concat(right.iterator)
}
}
val c = a.view +++ b.view // : MapView[Int, String] = MapView((0, "0"), (1, "1"))
c.contains(0) // : Boolean = true
I don't know the intent behind concat not returning a MapView, but you can achieve your goal like this:
val a = Map(0 -> "0")
val b = Map(1 -> "1")
val c = a.view ++ b.view
val contains = c.exists { case (k, _) => k == 0 }
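If you only need a MapView at the end and don't mind losing the laziness, a simple (eager) alternative is to concatenate the underlying maps and take a view of the result:
val cView = (a ++ b).view // MapView[Int, String]
cView.contains(0) // true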
For example, I have a Map[Int, String] like
val map = Map(1 -> "a", 2 -> "b", 3 -> "c", 5 -> "d", 9 -> "e", 100 -> "z")
If given key is 2, then "b" is expected to return.
If given key is 50, then "e" and "z" are expected to return.
If given key is 0, then "a" is expected to return.
In other words, if the key exists in the Map, the corresponding value should be returned. Otherwise the values of the closest smaller and larger keys should be returned (in case no smaller key exists, only the value of the closest larger key should be returned, and vice versa).
How can this be accomplished?
Map doesn't preserve order, hence I would suggest creating a method that:
converts the Map into a TreeMap
generates the lower/upper Map entries as Options in a list using to(key).lastOption and from(key).headOption, respectively
flattens the list and extracts the Map values:
Sample code as follows:
val map = Map(1->"a", 2->"b", 100->"z", 9->"e", 3->"c", 5->"d")
def closestValues(m: Map[Int, String], key: Int): Seq[String] = {
import scala.collection.immutable.TreeMap
val tm = TreeMap(m.toSeq: _*)
Seq( tm.to(key).lastOption, tm.from(key).headOption ).
flatten.distinct.map{ case (k, v) => v }
}
closestValues(map, 0)
// res1: Seq[String] = List(a)
closestValues(map, 2)
// res2: Seq[String] = List(b)
closestValues(map, 50)
// res3: Seq[String] = List(e, z)
closestValues(map, 101)
// res4: Seq[String] = List(z)
UPDATE:
Starting with Scala 2.13, the methods to and from for TreeMap are replaced with rangeTo and rangeFrom, respectively.
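For reference, a sketch of the same method for Scala 2.13+ using rangeTo/rangeFrom:
def closestValues(m: Map[Int, String], key: Int): Seq[String] = {
import scala.collection.immutable.TreeMap
val tm = TreeMap.from(m)
Seq(tm.rangeTo(key).lastOption, tm.rangeFrom(key).headOption).flatten.distinct.map(_._2)
}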
My 2-cents worth.
def getClose[K](m: Map[Int,K], k: Int): Seq[K] =
if (m.get(k).nonEmpty) Seq(m(k))
else {
val (below,above) = m.keys.partition(_ < k)
Seq( if (below.isEmpty) None else Some(below.max)
, if (above.isEmpty) None else Some(above.min)
).flatten.map(m)
}
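A quick check against the examples from the question, using the map defined there:
getClose(map, 2) // List(b)
getClose(map, 50) // List(e, z)
getClose(map, 0) // List(a)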
I would recommend first converting the Map to a SortedMap since the order of the keys needs to be taken into account.
import scala.collection.immutable.SortedMap
val map = Map(1->"a",2->"b",3->"c",5->"d",9->"e",100->"z")
val sortedMap = SortedMap[Int, String]() ++ map
After that, use the following method to get the closest values. The result is returned as a List.
def getClosestValue(num: Int) = {
if (sortedMap.contains(num)) {
List(sortedMap(num))
} else {
lazy val larger = sortedMap.filterKeys(_ > num)
lazy val lower = sortedMap.filterKeys(_ < num)
if (larger.isEmpty) {
List(sortedMap.last._2)
} else if (lower.isEmpty) {
List(sortedMap.head._2)
} else {
List(lower.last._2, larger.head._2)
}
}
}
Testing it with the following values:
println(getClosestValue(2))
println(getClosestValue(50))
println(getClosestValue(0))
println(getClosestValue(101))
will give
List(b)
List(e, z)
List(a)
List(z)
This is not an efficient solution, but you can do something like the following:
val map =Map(1->"a",2->"b",3->"c",5->"d",9->"e",100->"z")
val keyset = map.keySet
def getNearestValues(key: Int) : Array[String] = {
if(keyset.contains(key)) Array(map(key))
else{
var array = Array.empty[String]
val less = keyset.filter(_ < key)
if(!less.isEmpty) array = array ++ Array(map(less.toList.sortWith(_ < _).last))
val greater = keyset.filter(_ > key)
if(!greater.isEmpty) array = array ++ Array(map(greater.toList.sortWith(_ < _).head))
array
}
}
A slightly more functional way:
val map =Map(1->"a",2->"b",3->"c",5->"d",9->"e",100->"z")
val keyset = map.keySet
def getNearestValues(key: Int) : Array[String] = keyset.contains(key) match {
case true => Array(map(key))
case false => {
val (lower, upper) = keyset.toList.sortWith(_ < _).span(x => x < key)
val lowArray = if(lower.isEmpty) Array.empty[String] else Array(map(lower.last))
val upperArray = if(upper.isEmpty) Array.empty[String] else Array(map(upper.head))
lowArray ++ upperArray
}
}
getNearestValues(0) should return Array(a), getNearestValues(50) should return Array(e, z), and getNearestValues(9) should return Array(e).
You can solve this problem with lower complexity than in any of the solutions proposed above, so if performance is critical, check this answer.
Another Scala solution
val m = Map(1 -> "a", 2 -> "b", 3 -> "c", 5 -> "d", 9 -> "e", 100 -> "z")
List(0, 2, 50, 101).foreach { i => {
val inp = i
val (mn, mx) = if (m.get(inp).nonEmpty) (Map(inp -> m(inp)), Map(inp -> m(inp))) else m.partition(x => x._1 > inp)
(mn, mx) match {
case (x, y) if y.isEmpty => println(m(mn.keys.min))
case (x, y) if x.isEmpty => println(m(mx.keys.max))
case (x, y) if y == x => println(m(inp))
case (x, y) => println(m(mn.keys.min), m(mx.keys.max))
}
}
}
Results:
a
b
(z,e)
z
I am looking for a collections method which splits at a given pairwise condition, e.g.
val x = List("a" -> 1, "a" -> 2, "b" -> 3, "c" -> 4, "c" -> 5)
implicit class RichIterableLike[A, CC[~] <: Iterable[~]](it: CC[A]) {
def groupWith(fun: (A, A) => Boolean): Iterator[CC[A]] = new Iterator[CC[A]] {
def hasNext: Boolean = ???
def next(): CC[A] = ???
}
}
assert(x.groupWith(_._1 != _._1).toList ==
List(List("a" -> 1, "a" -> 2), List("b" -> 3), List("c" -> 4, "c" -> 5))
)
So this is sort of a recursive span.
While I'm capable of implementing the ???, I wonder
if something already exists in collections that I'm overlooking
what that method should be called; groupWith doesn't sound right. It should be concise, but somehow reflect that the function argument operates on pairs. groupWhere would be a bit closer I guess, but still not clear.
Actually, I guess that when using groupWith, the predicate logic should be inverted, so I would use x.groupWith(_._1 == _._1).
Thoughts about the types: returning an Iterator[CC[A]] looks reasonable to me. Perhaps it should take a CanBuildFrom and return an Iterator[To]?
You can also write a version that uses tailrec/pattern matching:
import scala.annotation.tailrec
def groupWith[A](s: Seq[A])(p: (A, A) => Boolean): Seq[Seq[A]] = {
@tailrec
def rec(xs: Seq[A], acc: Seq[Seq[A]] = Vector.empty): Seq[Seq[A]] = {
(xs.headOption, acc.lastOption) match {
case (None, _) => acc
case (Some(a), None) => rec(xs.tail, acc :+ Vector(a))
case (Some(a), Some(group)) if p(a, group.last) => rec(xs.tail, acc.init :+ (acc.last :+ a))
case (Some(a), Some(_)) => rec(xs.tail, acc :+ Vector(a))
}
}
rec(s)
}
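Using the x from the question:
groupWith(x)(_._1 == _._1)
// Vector(Vector((a,1), (a,2)), Vector((b,3)), Vector((c,4), (c,5)))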
So here is my suggestion. I stuck with groupWith, because spans is not very descriptive in my opinion. It is true that groupBy has very different semantics; however, there is grouped(size: Int), which is similar.
I tried to create my iterator purely based on combining existing iterators, but this got messy, so here is the more low-level version:
import scala.collection.generic.CanBuildFrom
import scala.annotation.tailrec
import language.higherKinds
object Extensions {
private final class GroupWithIterator[A, CC[~] <: Iterable[~], To](
it: CC[A], p: (A, A) => Boolean)(implicit cbf: CanBuildFrom[CC[A], A, To])
extends Iterator[To] {
private val peer = it.iterator
private var consumed = true
private var elem = null.asInstanceOf[A]
def hasNext: Boolean = !consumed || peer.hasNext
private def pop(): A = {
if (!consumed) return elem
if (!peer.hasNext)
throw new NoSuchElementException("next on empty iterator")
val res = peer.next()
elem = res
consumed = false
res
}
def next(): To = {
val b = cbf()
@tailrec def loop(pred: A): Unit = {
b += pred
consumed = true
if (!peer.isEmpty) {
val succ = pop()
if (p(pred, succ)) loop(succ)
}
}
loop(pop())
b.result()
}
}
implicit final class RichIterableLike[A, CC[~] <: Iterable[~]](val it: CC[A])
extends AnyVal {
/** Clumps the collection into groups based on a predicate which determines
* if successive elements belong to the same group.
*
* For example:
* {{{
* val x = List("a", "a", "b", "a", "b", "b")
* x.groupWith(_ == _).to[Vector]
* }}}
*
* produces `Vector(List("a", "a"), List("b"), List("a"), List("b", "b"))`.
*
* @param p a function which is evaluated with successive pairs of
* the input collection. As long as the predicate holds
* (the function returns `true`), elements are lumped together.
* When the predicate becomes `false`, a new group is started.
*
* @param cbf a builder factory for the group type
* @tparam To the group type
* @return an iterator over the groups.
*/
def groupWith[To](p: (A, A) => Boolean)
(implicit cbf: CanBuildFrom[CC[A], A, To]): Iterator[To] =
new GroupWithIterator(it, p)
}
}
That is, the predicate is inverted as opposed to the question.
import Extensions._
val x = List("a" -> 1, "a" -> 2, "b" -> 3, "c" -> 4, "c" -> 5)
x.groupWith(_._1 == _._1).to[Vector]
// -> Vector(List((a,1), (a,2)), List((b,3)), List((c,4), (c,5)))
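For newer Scala versions (2.13+), where CanBuildFrom no longer exists, a minimal sketch of the same idea might look like this (a hypothetical GroupWithOps extension, not part of the standard library):
implicit class GroupWithOps[A](xs: Iterable[A]) {
def groupWith(p: (A, A) => Boolean): Iterator[List[A]] = new Iterator[List[A]] {
private val it = xs.iterator.buffered
def hasNext: Boolean = it.hasNext
def next(): List[A] = {
val b = List.newBuilder[A]
var prev = it.next()
b += prev
// keep appending while the predicate holds for the current adjacent pair
while (it.hasNext && p(prev, it.head)) {
prev = it.next()
b += prev
}
b.result()
}
}
}
List("a" -> 1, "a" -> 2, "b" -> 3).groupWith(_._1 == _._1).toList
// List(List((a,1), (a,2)), List((b,3)))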
You could achieve it with a fold too, right? Here is an unoptimized version:
def groupWith[A](ls: List[A])(p: (A, A) => Boolean): List[List[A]] =
ls.foldLeft(List[List[A]]()) { (acc, x) =>
if(acc.isEmpty)
List(List(x))
else
if(p(acc.last.head, x))
acc.init ++ List(acc.last ++ List(x))
else
acc ++ List(List(x))
}
val x = List("a" -> 1, "a" -> 2, "b" -> 3, "c" -> 4, "c" -> 5, "a" -> 4)
println(groupWith(x)(_._1 == _._1))
//List(List((a,1), (a,2)), List((b,3)), List((c,4), (c,5)), List((a,4)))
I think this might be a common operation, so maybe it's in the API, but I can't find it. If not, I'm also interested in an efficient, functional and simple solution.
Given a sequence of tuples ("a" -> 1, "b" -> 2, "c" -> 3) I want to turn it into a map. That's easy using TraversableOnce.toMap. But I want to fail this construction if the resulting map "would contain a contradiction", i.e. different values assigned to the same key, as in the sequence ("a" -> 1, "a" -> 2). Duplicates, however, shall be allowed.
Currently I have this (very imperative) code:
def buildMap[A,B](in: TraversableOnce[(A,B)]): Option[Map[A,B]] = {
val map = new HashMap[A,B]
val it = in.toIterator
var fail = false
while(it.hasNext){
val next = it.next()
val old = map.put(next._1, next._2)
fail = old.isDefined && old.get != next._2
}
if(fail) None else Some(map.toMap)
}
Side Question
Is the final toMap really necessary? I get a type error when omitting it, but I think it should work. The implementation of toMap constructs a new map which I want to avoid.
As always when working with Seq[A] the optimal solution performance-wise depends on the concrete collection type.
A general but not very efficient solution would be to fold over an Option[Map[A,B]]:
def optMap[A,B](in: Iterable[(A,B)]): Option[Map[A,B]] =
in.iterator.foldLeft(Option(Map[A,B]())) {
case (Some(m), e @ (k,v)) if m.getOrElse(k, v) == v => Some(m + e)
case _ => None
}
If you restrict yourself to using List[(A,B)]s, an optimized version would be:
import scala.annotation.tailrec
@tailrec
def rmap[A,B](in: List[(A,B)], out: Map[A,B] = Map[A,B]()): Option[Map[A,B]] = in match {
case (e @ (k,v)) :: tail if out.getOrElse(k,v) == v =>
rmap(tail, out + e)
case Nil =>
Some(out)
case _ => None
}
Additionally a less idiomatic version using mutable maps could be implemented like this:
def mmap[A,B](in: Iterable[(A,B)]): Option[Map[A,B]] = {
val dest = collection.mutable.Map[A,B]()
for (e @ (k,v) <- in) {
if (dest.getOrElse(k, v) != v) return None
dest += e
}
Some(dest.toMap)
}
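A quick check (any of the three variants behaves the same on these inputs): exact duplicates are accepted, contradictions are not:
optMap(List("a" -> 1, "a" -> 1, "b" -> 2)) // Some(Map(a -> 1, b -> 2))
optMap(List("a" -> 1, "a" -> 2)) // None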
Here is a fail-slowly solution (if creating the entire map and then discarding it is okay):
def uniqueMap[A,B](s: Seq[(A,B)]) = {
val m = s.toMap
if (m.size == s.length) Some(m) else None
}
Here is a mutable fail-fast solution (bail out as soon as the error is detected):
def uniqueMap[A,B](s: Seq[(A,B)]) = {
val h = new collection.mutable.HashMap[A,B]
val i = s.iterator.takeWhile(x => !(h contains x._1)).foreach(h += _)
if (h.size == s.length) Some(h) else None
}
And here's an immutable fail-fast solution:
def uniqueMap[A,B](s: Seq[(A,B)]) = {
def mapUniquely(i: Iterator[(A,B)], m: Map[A,B]): Option[Map[A,B]] = {
if (i.hasNext) {
val j = i.next
if (m contains j._1) None
else mapUniquely(i, m + j)
}
else Some(m)
}
mapUniquely(s.iterator, Map[A,B]())
}
Edit: and here's a solution using put for speed (hopefully):
def uniqueMap[A,B](s: Seq[(A,B)]) = {
val h = new collection.mutable.HashMap[A,B]
val okay = s.iterator.forall(x => {
val y = (h put (x._1,x._2))
y.isEmpty || y.get == x._2
})
if (okay) Some(h) else None
}
Edit: now tested, and it's ~2x as fast on input that works (returns a value) as Moritz's or my straightforward solution.
Scala 2.9 is near, so why not take advantage of the combinations method (inspired by Moritz's answer):
def optMap[A,B](in: List[(A,B)]) = {
if (in.combinations(2).exists {
case List((a,b),(c,d)) => a == c && b != d
case _ => false
}) None else Some(in.toMap)
}
scala> val in = List(1->1,2->3,3->4,4->5,2->3)
in: List[(Int, Int)] = List((1,1), (2,3), (3,4), (4,5), (2,3))
scala> optMap(in)
res29: Option[scala.collection.immutable.Map[Int,Int]] = Some(Map(1 -> 1, 2 -> 3, 3 -> 4, 4 -> 5))
scala> val in = List(1->1,2->3,3->4,4->5,2->3,1->2)
in: List[(Int, Int)] = List((1,1), (2,3), (3,4), (4,5), (2,3), (1,2))
scala> optMap(in)
res30: Option[scala.collection.immutable.Map[Int,Int]] = None
You can also use groupBy as follows:
val pList = List(1 -> "a", 1 -> "b", 2 -> "c", 3 -> "d")
def optMap[A,B](in: Iterable[(A,B)]): Option[Map[A,B]] = {
Option(in.groupBy(_._1).map{case(_, list) => if(list.size > 1) return None else list.head})
}
println(optMap(pList))
Its efficiency is competitive with the above solutions. In fact, if you examine the groupBy implementation, you will see that it is very similar to some of the solutions suggested.