Scala: implementing Map with concrete types

I'm running into some kind of quirk in the Scala type system that has me a bit stumped. I am trying to make a class that extends Map[String,String] and I can't quite figure out how to implement the + method in such a way that the compiler accepts it.
Here's the code I have now:
class ParamMap(val pairs: List[(String, String)] = Nil) extends Map[String, String] {
  lazy val keyLookup = Map() ++ pairs

  override def get(key: String): Option[String] = keyLookup.get(key)

  override def iterator: Iterator[(String, String)] = pairs.reverseIterator

  /**
   * Add a key/value pair to the map
   */
  override def + [B1 >: String](kv: (String, B1)) = new ParamMap(kv :: pairs)

  /**
   * Remove all values for the given key from the map
   */
  override def -(key: String): ParamMap = new ParamMap(pairs.filterNot(_._1 == key))

  /**
   * Remove a specific pair from the map
   */
  def -(kv: (String, String)): ParamMap = new ParamMap(pairs - kv)
}
Scala tells me this:
type mismatch; found: (String, B1) required: (String, String)
I believe this is because B1 is allowed to be a subtype of String but my constructor expects just a String (?). My original attempt was:
override def +(kv: (String, String)) = new ParamMap(kv :: pairs)
But this complained because the type signature didn't match the trait:
class ParamMap needs to be abstract, since method + in trait Map of type [B1 >: String](kv: (String, B1))scala.collection.immutable.Map[String,B1] is not defined
method + overrides nothing
I'm new to Scala and I think I'm getting over my head here in terms of how the type system works. Perhaps I'll try messing with casting but I have a feeling there might be a "better way" that, if I know it, will save me a lot of trouble in the future.
Any ideas?

Some background about Scala's type system.
The syntax B1 >: String means that B1 is a supertype of String. So B1 is less specific than String, and a B1 value can't safely be treated as a String. Conversely, B1 <: String would be a subtype relationship.
The definition of the Map trait is Map[A, +B], where A represents the type of the key and B the type of the value. The +B notation says that Map is covariant in the value type, which means that T <: S implies Map[A, T] <: Map[A, S].
The full type of the Map.+ method is + [B1 >: B] (kv: (A, B1)): Map[A, B1]. The covariance of B kind of forces the use of B1 >: B. Here's an example of how it works: given a map m: Map[String, String] adding a key-value pair with a less specific type kv : (String, Any) will result in a less specific map, (m + kv): Map[String, Any].
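A quick REPL-style sketch of that widening (not from the original answer, just an illustration):
val m: Map[String, String] = Map("a" -> "b")

// The pair ("c", 42) has type (String, Int); the smallest B1 that is both
// a supertype of String and accommodates Int is Any, so the result widens.
val widened = m + ("c" -> 42)
// widened: Map[String, Any]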
The last point illustrates the problem with your ParamMap definition. According to the Map interface, one should be able to add a key/value pair whose value has type Any to a map of type ParamMap <: Map[String, String] and get back a Map[String, Any]. But you're trying to define ParamMap.+ to always return a ParamMap, whose value type is fixed to String, which is incompatible with Map.+.
One way to fix the problem is to give ParamMap an explicit type parameter for the value, something like (warning: untested),
class ParamMap[B](val pairs: List[(String, B)] = Nil) extends Map[String, B] {
  ...
  override def + [B1 >: B](kv: (String, B1)) = new ParamMap[B1](kv :: pairs)
}
but this may not be what you want. I don't think there's a way to fix the value type as String and implement the Map[String, String] interface.
Given all the above, why does the code in your answer compile? You've actually uncovered a limitation (unsoundness) of Scala's pattern matching, and it can lead to run-time crashes. Here's a simplified example:
def foo[B1 >: String](x: B1): Int = {
  val (s1: Int, s2: Int) = (x, x)
  s1
}
Although this compiles, it doesn't do anything useful. In fact, calling it with a String will crash with a MatchError:
scala> foo("hello")
scala.MatchError: (hello,hello) (of class scala.Tuple2)
at .foo(<console>:9)
at .<init>(<console>:10)
at .<clinit>(<console>)
...
In your answer, you've basically told the compiler to convert a B1 instance to a String, and if the conversion doesn't work, you'll get a runtime crash. It's equivalent to an unsafe cast,
(value: B1).asInstanceOf[String]
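For comparison, here is roughly what your + looks like with that cast written out explicitly (a sketch; it is still a valid override because ParamMap is a Map[String, B1] by covariance, but it carries exactly the same runtime risk):
override def + [B1 >: String](kv: (String, B1)) =
  new ParamMap((kv._1, kv._2.asInstanceOf[String]) :: pairs)  // throws at runtime for non-String values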

You're correct that your constructor expects a value of type List[(String, String)], but the issue isn't that B1 could be a subclass of String, it's that it could be a superclass -- this is what the B1 >: String notation indicates.
At first glance, you might wonder why the parent Map trait would have the method typed this way; note that the return type of the + method you're attempting to override is Map[String, B1]. In the context of a general map, though, this makes sense. Suppose you had the following code:
class Parent
class Child extends Parent
val childMap = Map[String, Child]("Key" -> new Child)
val result = childMap + ("ParentKey" -> new Parent)
The type of result would then have to be Map[String, Parent]. In light of this, the type restrictions on the + method in Map make sense, but your fixed-type map isn't capable of fulfilling what the method is designed to be able to do. Its signature allows you to pass in a value of, e.g., type (String, AnyRef), but using the method definition you gave in your follow-up answer, you'll get a MatchError when it tries to perform the assignment to key and value.
Does that make sense?

I ran into the same problem with a colleague when trying to build a Bag[T], which is a Map[T, Int]. We found two different solutions:
Implement Traversable rather than Map with appropriate Builder and CanBuildFrom, and add the useful map methods (get,+,-). If you need to pass the collection to a function taking maps as arguments, you can use implicit conversions. Here is our full Bag implementation: https://gist.github.com/1136259
Stay simple:
object collection {
  type ParamMap = Map[String, String]

  object ParamMap {
    def apply(pairs: List[(String, String)] = Nil) = Map(pairs: _*)
  }
}
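Usage of the second approach would then look something like this (a sketch, assuming the wrapper object above is visible at the call site and not shadowed by scala.collection):
val params = collection.ParamMap(List("a" -> "1", "b" -> "2"))
val more: collection.ParamMap = params + ("c" -> "3")  // still just a Map[String, String]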

The compiler does seem to accept this one:
override def + [B1 >: String](kv: (String, B1)) = {
  val (key: String, value: String) = kv
  new ParamMap((key, value) :: pairs)
}
But I don't know why that is better than the original. I suppose this is an acceptable solution if nobody has a better one.

Related

Scala polymorphism - covariant and type bound

Using Scala... I can't figure out how to use polymorphism in a way that mixes type bound and covariance.
In a nutshell, I think I need something like this type signature... but if you follow along with my dummy example, you'll see why I get here... and maybe I'm wrong.
def func[+T <: U](func: Seq[T] => T)(iter: Iterator[String]): Map[String, String] = ???
but this approach yields...
>> error: ']' expected but identifier found
Here's a dummy example that demonstrates what I'm trying to do... I could sidestep this by just making it work only with the base trait Record... but I'd like to get it working with polymorphism baked in for other reasons in the real code.
setup
// underlying trait to hold key and value
trait Record {
  def k: String
  def v: String
  def isDefined: Boolean
}

// companion object with apply method
object Record {
  def apply(s: String): Record = s.split(",") match {
    case Array(k, v) => new ValidRecord(k, v).asInstanceOf[Record]
    case _           => EmptyRecord.asInstanceOf[Record]
  }
}

// singleton for empty records
object EmptyRecord extends Record {
  val k = ""
  val v = ""
  val isDefined = false
}

// class for actual data
class ValidRecord(val k: String, val v: String) extends Record {
  val isDefined = true
}
polymorphic function
note - going from Iterator to Seq here looks questionable... I'm reading in a file from src/main/resources... it comes in as an Iterator... and I ultimately need to get it into a Map, so .toSeq and .groupBy seem like logical steps... it's only maybe 100MB and a million or so records, so this works fine... but if there's a smarter way to get from start to end, I'm open to that critique as well.
def iter_2_map[T <: Record](func: Seq[T] => T)(iter: Iterator[String]): Map[String, String] = {
  iter                       // iterator of raw data
    .map(Record.apply)       // Iterator[Record]
    .toSeq                   // gives .groupBy() method
    .groupBy(_.k)            // Map[k -> Seq[Record]]; one Seq of records per k
    .mapValues(func)         // <<< ERROR HERE // function to reduce Seq[Record] to 1 Record
    .filter(_._2.isDefined)  // get rid of empty results
    .mapValues(_.v)          // target of Map is just v
}
error
found : Seq[T] => T
required: Seq[Record] => ?
.mapValues(func)
^
If I break down all those steps and declare types at every relevant step... the error changes to this...
found : Seq[T] => T
required: Seq[Record] => Record
.mapValues(func)
^
So here's where I get stumped. I think making T covariant solves this... T is a declared subtype of Record, but maybe it's not recognizing Seq[T] as <: Seq[Record]?
But making this change yields the error at the top...
def iter_2_map[+T <% Record](func: Seq[T] => T)(iter: Iterator[String]): Map[String, String] = {
  ???
}
back to this...
>> error: ']' expected but identifier found
Am I even on the right track?
You are using + incorrectly. It's only used with type parameters of classes to signal that the class should be covariant in its parameter.
It does not make much sense to use it with methods. (Seq[T] actually is a subclass of Seq[Record], because Seq is covariant, but that does not help you: functions are contravariant in their argument type, so Function[Seq[T], T] is a supertype of Function[Seq[Record], T], not a subtype.) Here is why:
After .groupBy(_.k) you have Map[String, Seq[Record]].
Now, you are doing .mapValues(func) on it, and trying to pass it a function that takes a Seq[T]. This cannot work.
Imagine that Record is Animal, T is Dog, and func is makeBark. You are now trying to pass it a bunch of animals, some of which are Cats, some Birds, and some, maybe, Fish. You can't make them all bark, can you?
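To see the contravariance point in code, here is a small sketch reusing the Record types from the question (the function bodies are made up for illustration):
// A handler of arbitrary Records can stand in where a handler of ValidRecords
// is expected, but not the other way around.
val handleAny: Seq[Record] => Record = rs => rs.headOption.getOrElse(EmptyRecord)
val handleValid: Seq[ValidRecord] => Record = handleAny              // compiles: the argument type only widens
// val bad: Seq[Record] => Record = (vs: Seq[ValidRecord]) => vs.head  // would not compile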
You could just declare your reducer function to accept the Record sequence rather than T:
def iter_2_map[T <: Record](func: Seq[Record] => T)(iter: Iterator[String])
This will compile, but doesn't seem like it would be very useful for you anyway, because you appear to expect your func to be able to return both EmptyRecord and ValidRecord, and not just T (since you are filtering the empties out afterwards). So it actually seems that you don't need the type parameter at all:
def iter_2_map(func: Seq[Record] => Record)(iter: Iterator[String])
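Filled in, the simplified version might look roughly like this (a sketch; it assumes the same Scala version as the question, where mapValues returns a Map rather than a view):
def iter_2_map(func: Seq[Record] => Record)(iter: Iterator[String]): Map[String, String] =
  iter
    .map(Record.apply)        // Iterator[Record]
    .toSeq
    .groupBy(_.k)             // Map[String, Seq[Record]]
    .mapValues(func)          // reduce each Seq[Record] to one Record
    .filter(_._2.isDefined)   // drop empty results
    .mapValues(_.v)           // keep only the value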

unclear why my in-scope implicit conversions are not accepted as 'implicit evidence'

I've been experimenting with implicit conversions, and I have a decent understanding of the 'enrich-my-library' pattern that uses these. I tried to combine my understanding of basic implicits with the use of implicit evidence... But I'm misunderstanding something crucial, as shown by the code below:
import scala.language.implicitConversions

object Moo extends App {
  case class FooInt(i: Int)

  implicit def cvtInt(i: Int): FooInt = FooInt(i)
  implicit def cvtFoo(f: FooInt): Int = f.i

  class Pair[T, S](var first: T, var second: S) {
    def swap(implicit ev: T =:= S, ev2: S =:= T) {
      val temp = first
      first = second
      second = temp
    }

    def dump() = {
      println("first is " + first)
      println("second is " + second)
    }
  }

  val x = new Pair(FooInt(200), 100)
  x.dump
  x.swap
  x.dump
}
When I run the above method I get this error:
Error:(31, 5) Cannot prove that nodescala.Moo.FooInt =:= Int.
x.swap
^
I am puzzled because I would have thought that my in-scope implicit conversions would be sufficient 'evidence' that Ints can be converted to FooInts and vice versa. Thanks in advance for setting me straight on this!
UPDATE:
After being unconfused by Peter's excellent answer, below, the light bulb went on for me about one good reason you would want to use implicit evidence in your API. I detail that in my own answer to this question (also below).
=:= checks whether the two types are equal, and FooInt and Int are definitely not equal, although implicit conversions exist between values of these two types.
I would create a CanConvert type class which can convert an A into a B:
trait CanConvert[A, B] {
  def convert(a: A): B
}
We can create type class instances to transform Int into FooInt and vice versa:
implicit val Int2FooInt = new CanConvert[Int, FooInt] {
  def convert(i: Int) = FooInt(i)
}

implicit val FooInt2Int = new CanConvert[FooInt, Int] {
  def convert(f: FooInt) = f.i
}
Now we can use CanConvert in our Pair.swap function:
class Pair[A, B](var a: A, var b: B) {
  def swap(implicit a2b: CanConvert[A, B], b2a: CanConvert[B, A]) {
    val temp = a
    a = b2a.convert(b)
    b = a2b.convert(temp)
  }

  override def toString = s"($a, $b)"

  def dump(): Unit = println(this)
}
Which we can use as:
scala> val x = new Pair(FooInt(200), 100)
x: Pair[FooInt,Int] = (FooInt(200), 100)
scala> x.swap
scala> x.dump
(FooInt(100), 200)
A =:= B is not evidence that A can be converted to B. It is evidence that A and B are the same type, so an A can safely be treated as a B. And you have no implicit evidence anywhere that Int is the same type as FooInt or vice versa (for good reason ;).
What you are looking for is:
def swap(implicit ev: T => S, ev2: S => T) {
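For completeness, a sketch of what the full method might look like with those conversion-function parameters (the body is my assumption; with the question's cvtInt and cvtFoo in scope, ev and ev2 are satisfied by those implicit defs):
class Pair[T, S](var first: T, var second: S) {
  def swap(implicit ev: T => S, ev2: S => T): Unit = {
    val temp = first
    first = ev2(second)   // convert the S back to a T
    second = ev(temp)     // convert the old T to an S
  }
}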
After working through this exercise, I think I have a better understanding of WHY you'd want to use implicit evidence in your API.
Implicit evidence can be very useful when you have a type-parameterized class that provides various methods that act on the types given by the parameters, and when one or more of those methods only make sense when additional constraints are placed on the parameterized types.
So, in the case of the simple API given in my original question:
class Pair[T, S](var first: T, var second: S) {
  def swap(implicit ev: T =:= S, ev2: S =:= T) = ???
  def dump() = ???
}
We have a type Pair, which keeps two things together, and we can always call dump() to examine the two things. We can also, under certain conditions, swap the positions of the first and second items in the pair. And those conditions are given by the implicit evidence constraints.
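A quick sketch of those constraints in action, using the Pair and FooInt definitions from the question:
val same = new Pair(1, 2)
same.swap                  // compiles: Int =:= Int evidence is always available

val mixed = new Pair(FooInt(1), 2)
// mixed.swap              // does not compile: Cannot prove that FooInt =:= Int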
The Programming in Scala book gives a nice example of how this technique is used in Scala collections, specifically on the toMap method of Traversables. The book points out that Map's constructor wants key-value pairs, i.e., two-tuples, as arguments. If we have a sequence [Traversable] of pairs, wouldn't it be nice to create a Map out of them in one step? That's what toMap does, but we have a dilemma. We can't allow the user to call toMap if the sequence is not a sequence of pairs.
So there's an example of a type [Traversable] that has a method [toMap] that can't be used in all situations... It can only be used when the compiler can 'prove' (via implicit evidence) that the items in the Traversable are pairs.
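A minimal sketch of that toMap behaviour:
val pairs = List("a" -> 1, "b" -> 2)
val m = pairs.toMap        // OK: the compiler finds evidence that the elements are pairs

val notPairs = List(1, 2, 3)
// notPairs.toMap          // does not compile: Cannot prove that Int <:< (T, U)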

scala implicit conversion and arrow

I'm struggling with my DSL-building attempt.
My method should take multiple pairs of (A, B); ideally I'm looking for this kind of code:
myMethod(
  a1 -> b1,
  a2 -> b2,
  a3 -> b3)
my A and B classes often happen to be constructed from a single String parameter, so I added implicit converters looking like this:
implicit def stringToA(s: String): A = new A(s)
implicit def stringToB(s: String): B = new B(s)
but the arrow operator (for pair construction) doesn't pick up the implicit converters:
object Test {
  case class A(s: String)
  case class B(s: String)

  def myMethod(pairs: (A, B)*): Unit = Unit

  implicit def stringToA(s: String): A = A(s)
  implicit def stringToB(s: String): B = B(s)

  myMethod(new Tuple2("a", "b")) // works just fine
  myMethod("a" -> "b")           // Error: Type mismatch, expected: (Test.A, Test.B), actual: (String, String)
}
any ideas?
The -> method is made available on all Scala types by an implicit conversion (any2ArrowAssoc in scala.Predef). For reasons of sanity, Scala doesn't allow multiple implicit conversions to "chain" on the same expression. You could:
Define an explicit -> method on your A class. Unfortunately, since the default implicit conversion would conflict with this, you might then have to build your code with -Yno-predef to avoid the conflict.
Define a method on your A class with a visually similar name that doesn't conflict, e.g. ~> or »
Call myMethod(("a", "b")) if you like that syntax (this syntax is supported directly, not via an implicit conversion)
Make myMethod accept (String, String)* and do the conversion to A and B inside.
Edit: You could also define an implicit conversion from (String, String) to (A, B), as @BenReich suggests.
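For illustration, here is a sketch of the second option above; the ~> method and its exact shape are my assumption, not part of the original answer:
import scala.language.implicitConversions

object Test {
  case class A(s: String) {
    def ~>(b: B): (A, B) = (this, b)   // a non-conflicting pair builder
  }
  case class B(s: String)

  def myMethod(pairs: (A, B)*): Unit = ()

  implicit def stringToA(s: String): A = A(s)
  implicit def stringToB(s: String): B = B(s)

  myMethod("a" ~> "b")   // "a" converts to A (to supply ~>), "b" converts to B (as the argument)
}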
You can define an implicit conversion from (String, String) to (A, B):
implicit def toABTuple(in: (String, String)) = (A(in._1), B(in._2))
It's failing because it's converting to a tuple before applying your implicits, so it's not a plain String -> A/B conversion anymore.
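Putting that together, a sketch of the question's example with the tuple conversion added:
import scala.language.implicitConversions

object Test {
  case class A(s: String)
  case class B(s: String)

  def myMethod(pairs: (A, B)*): Unit = ()

  implicit def toABTuple(in: (String, String)): (A, B) = (A(in._1), B(in._2))

  myMethod("a" -> "b")   // the (String, String) pair is converted to (A, B) in one step
}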

How to subclass Scala immutable.Map with fixed type parameters?

I can't figure out how to deal with overriding "+" in an immutable map if the map can only store an invariant type for its values.
Something like:
class FixedMap(val impl: Map[String, Int])
  extends immutable.Map[String, Int] with immutable.MapLike[String, Int, FixedMap] {

  // This should return FixedMap if B1 is Int, and Map[String,B1]
  // if B1 is a superclass of Int; but there's no way to do that.
  // It is possible to return FixedMap here but then you have to
  // throw at runtime if B1 is not Int
  override def +[B1 >: Int](kv: (String, B1)): Map[String, B1] = {
    kv match {
      case (k, v: Int) =>
        new FixedMap(impl + Pair(k, v))
      case _ =>
        impl + kv
    }
  }

  // ...
}
I'd like this to work like the methods that use CanBuildFrom and always keep the original type if possible. Is there a way? Or do Map subclasses always have to leave the value type as a type parameter?
Here's a complete compilable example:
import scala.collection.immutable

// pointless class that wraps another map and adds one method
class FixedMap(val impl: Map[String, Int])
  extends immutable.Map[String, Int] with immutable.MapLike[String, Int, FixedMap] {

  override val empty: FixedMap = FixedMap.empty

  // This should return FixedMap if B1 is Int, and Map[String,B1]
  // if B1 is a superclass of Int; but there's no way to do that.
  // It is possible to return FixedMap here but then you have to
  // throw at runtime if B1 is not Int
  override def +[B1 >: Int](kv: (String, B1)): Map[String, B1] = {
    kv match {
      case (k, v: Int) =>
        new FixedMap(impl + Pair(k, v))
      case _ =>
        impl + kv
    }
  }

  override def -(key: String): FixedMap = {
    new FixedMap(impl - key)
  }

  override def get(key: String): Option[Int] = {
    impl.get(key)
  }

  override def iterator: Iterator[(String, Int)] = {
    impl.iterator
  }

  def somethingOnlyPossibleOnFixedMap() = {
    println("FixedMap says hi")
  }
}

object FixedMap {
  val empty: FixedMap = new FixedMap(Map.empty)
}

object TestIt {
  val empty = FixedMap.empty
  empty.somethingOnlyPossibleOnFixedMap()

  val one = empty + Pair("a", 1)
  // Can't do the below because one is a Map[String,Int] not a FixedMap
  // one.somethingOnlyPossibleOnFixedMap()
}
Here's what I'd try:
class FixedMap(val impl: immutable.Map[String, Int])
  extends immutable.Map[String, Int] with immutable.MapLike[String, Int, FixedMap] {

  override def +[B1 >: Int](kv: (String, B1)): immutable.Map[String, B1] = impl + kv

  def +(kv: (String, Int))(implicit d: DummyImplicit): FixedMap = new FixedMap(impl + kv)

  // ...
}
You're right: you cannot override the + that already exists, so you have to leave it there; otherwise your subclass wouldn't be able to do things that the superclasses can do, which violates the Liskov substitution principle. But you can add an additional method with the exact arguments that you want (and you don't need a CanBuildFrom in this particular case).
The only problem is that the new method has the exact same type erasure as the one you were trying to override (which has an incompatible signature). To solve this, you can add the DummyImplicit, which is defined in Predef as a class for which an implicit value is always available; its main use is to work around type erasure in cases of overloading like this.
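Outside the Map context, the DummyImplicit trick looks like this in a stripped-down sketch (the names here are made up for illustration):
object Overloads {
  // Both methods erase to describe(List), but the extra implicit parameter
  // list gives the second one a distinct erased signature.
  def describe(xs: List[Int]): String = "ints"
  def describe(xs: List[String])(implicit d: DummyImplicit): String = "strings"
}

Overloads.describe(List(1, 2))       // "ints"
Overloads.describe(List("a", "b"))   // "strings"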
Note that the static type of the map on which you want to call your overloaded method has to be FixedMap for this to work. If an object has a run-time type of FixedMap but is statically typed as a regular Map[String, Int], the compiler won't call your new overloaded method.
It's possible to implement what you want using CanBuildFrom. The question is: do you really want/need to do it? There are several similar questions on SO; here is one of them (hope you will find an answer there):
Extending Scala collections
Generally, Scala gives us enough tools to avoid this (extending collections), and you really need a good reason to start down this path.
It looks like it does work (best I can tell so far) if you add another + overload:
def +(kv: (String, Int))(implicit bf: CanBuildFrom[FixedMap, (String, Int), FixedMap]): FixedMap = {
  val b = bf(empty)
  b ++= this
  b += kv
  b.result
}
The problem I was seeing before was caused by returning FixedMap from the overridden +, thus preventing any upgrade to a generic map. I guess this allowed implicit conversion of the pair being +'d to work. But if you fix that overridden + method to return Map[String, B1] again, you can't use implicit conversion on the pair's value anymore: there's no way for the compiler to know whether to go to a superclass map, i.e. Map[String, Any], or implicitly convert to Int in order to stick with FixedMap. It's interesting that the return type of the method affects whether implicit conversions are used; I guess given a FixedMap return type the compiler can deduce that B1 is always just B (or something like that!).
Anyway, this seems to be my bug; you just can't use implicit conversion on the pair._2 passed to + while remaining compatible with the Map interface, even conceptually. Now that I've given up on that, I think the overloaded + will work OK.

Mixing in generic traits in parameterized classes without duplicating type parameters

Let's assume I want to create a trait that I can mix into any Traversable[T]. In the end, I want to be able to say things like:
val m = Map("name" -> "foo") with MoreFilterOperations
and have methods on MoreFilterOperations that are expressed in terms of anything Traversable has to offer, such as:
def filterFirstTwo(f: (T) => Boolean) = filter(f) take 2
However, the problem is clearly that T is not defined as a type parameter on MoreFilterOperations. Once I do that, it's doable of course, but then my code would read:
val m = Map("name" -> "foo") with MoreFilterOperations[(String,String)]
or if I define a variable of this type:
var m2: Map[String,String] with MoreFilterOperations[(String,String)] = ...
which is way too verbose for my taste. I would like to have the trait defined in such a way that I could write the latter as:
var m2: Map[String,String] with MoreFilterOperations
I tried self types and abstract type members, but it hasn't resulted in anything useful. Any clues?
Map("name" -> "foo") is a function invocation and not a constructor; this means that you can't write:
Map("name" -> "foo") with MoreFilterOperations
any more than you can write
val m = Map("name" -> "foo")
val m2 = m with MoreFilterOperations
To get a mixin, you have to use a concrete type; a naive first attempt would be something like this:
def EnhMap[K, V](entries: (K, V)*) =
  (new collection.immutable.HashMap[K, V] with MoreFilterOperations[(K, V)]) ++ entries
Using a factory method here to avoid having to duplicate the type params. However, this won't work, because the ++ method is just going to return a plain old HashMap, without the mixin!
The solution (as Sam suggested) is to use an implicit conversion to add the pimped method. This will allow you to transform the Map with all the usual techniques and still be able to use your extra methods on the resulting map. I'd normally do this with a class instead of a trait, as having constructor params available leads to a cleaner syntax:
class MoreFilterOperations[T](t: Traversable[T]) {
  def filterFirstTwo(f: (T) => Boolean) = t filter f take 2
}

object MoreFilterOperations {
  implicit def traversableToFilterOps[T](t: Traversable[T]) =
    new MoreFilterOperations(t)
}
This allows you to then write
val m = Map("name"->"foo", "name2"->"foo2", "name3"->"foo3")
val m2 = m filterFirstTwo (_._1.startsWith("n"))
But it still doesn't play nicely with the collections framework. You started with a Map and ended up with a Traversable. That isn't how things are supposed to work. The trick here is to also abstract over the collection type, using higher-kinded types:
import collection.TraversableLike
class MoreFilterOperations[Repr <% TraversableLike[T, Repr], T](xs: Repr) {
  def filterFirstTwo(f: (T) => Boolean) = xs filter f take 2
}
Simple enough. You have to supply Repr, the type representing the collection, and T, the type of elements. I use TraversableLike instead of Traversable as it embeds its representation; without this, filterFirstTwo would return a Traversable regardless of the starting type.
Now the implicit conversions. This is where things get a bit trickier in the type notation. First, I'm using a higher-kinded type to capture the representation of the collection: CC[X] <: Traversable[X]. This parameterises the CC type, which must be a subclass of Traversable (note the use of X as a placeholder here; CC[_] <: Traversable[_] does not mean the same thing).
There's also an implicit CC[T] <:< TraversableLike[T,CC[T]], which the compiler uses to statically guarantee that our collection CC[T] is genuinely a subclass of TraversableLike and so a valid argument for the MoreFilterOperations constructor:
object MoreFilterOperations {
  implicit def traversableToFilterOps[CC[X] <: Traversable[X], T]
      (xs: CC[T])(implicit witness: CC[T] <:< TraversableLike[T, CC[T]]) =
    new MoreFilterOperations[CC[T], T](xs)
}
So far, so good. But there's still one problem... It won't work with maps, because they take two type parameters. The solution is to add another implicit to the MoreFilterOperations object, using the same principles as before:
implicit def mapToFilterOps[CC[KX, VX] <: Map[KX, VX], K, V]
    (xs: CC[K, V])(implicit witness: CC[K, V] <:< TraversableLike[(K, V), CC[K, V]]) =
  new MoreFilterOperations[CC[K, V], (K, V)](xs)
The real beauty comes in when you also want to work with types that aren't actually collections, but can be viewed as though they were. Remember the Repr <% TraversableLike in the MoreFilterOperations constructor? That's a view bound, and permits types that can be implicitly converted to TraversableLike as well as direct subclasses. Strings are a classic example of this:
implicit def stringToFilterOps
    (xs: String)(implicit witness: String <%< TraversableLike[Char, String])
    : MoreFilterOperations[String, Char] =
  new MoreFilterOperations[String, Char](xs)
If you now run it on the REPL:
val m = Map("name"->"foo", "name2"->"foo2", "name3"->"foo3")
// m: scala.collection.immutable.Map[java.lang.String,java.lang.String] =
// Map((name,foo), (name2,foo2), (name3,foo3))
val m2 = m filterFirstTwo (_._1.startsWith("n"))
// m2: scala.collection.immutable.Map[java.lang.String,java.lang.String] =
// Map((name,foo), (name2,foo2))
"qaxfwcyebovjnbointofm" filterFirstTwo (_ < 'g')
//res5: String = af
Map goes in, Map comes out. String goes in, String comes out. etc...
I haven't tried it with a Stream yet, or a Set, or a Vector, but you can be confident that if you did, it would return the same type of collection that you started with.
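For instance, a Vector would be expected to go through the same traversableToFilterOps conversion (an untested sketch):
Vector(1, 2, 3, 4) filterFirstTwo (_ % 2 == 1)
// expected: Vector(1, 3); a Vector goes in, a Vector comes out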
It's not quite what you asked for, but you can solve this problem with implicits:
trait MoreFilterOperations[T] {
  def filterFirstTwo(f: (T) => Boolean) = traversable.filter(f) take 2
  def traversable: Traversable[T]
}

object FilterImplicits {
  implicit def traversableToFilterOps[T](t: Traversable[T]) =
    new MoreFilterOperations[T] { val traversable = t }
}
object test {
  import FilterImplicits._

  val m = Map("name" -> "foo", "name2" -> "foo2", "name3" -> "foo3")
  val r = m.filterFirstTwo(_._1.startsWith("n"))
}
scala> test.r
res2: Traversable[(java.lang.String, java.lang.String)] = Map((name,foo), (name2,foo2))
The Scala standard library uses implicits for this purpose, e.g. "123".toInt. I think it's the best way in this case.
Otherwise you'll have to go through a full implementation of your "map with additional operations", since immutable collections require creating new instances of your new mixed-in class.
With mutable collections you could do something like this:
object FooBar {
  trait MoreFilterOperations[T] {
    this: Traversable[T] =>
    def filterFirstTwo(f: (T) => Boolean) = filter(f) take 2
  }

  object moreFilterOperations {
    def ~:[K, V](m: Map[K, V]) = new collection.mutable.HashMap[K, V] with MoreFilterOperations[(K, V)] {
      this ++= m
    }
  }

  def main(args: Array[String]) {
    val m = Map("a" -> 1, "b" -> 2, "c" -> 3) ~: moreFilterOperations
    println(m.filterFirstTwo(_ => true))
  }
}
I'd rather use implicits.
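For what it's worth, on Scala 2.10+ the same enrichment can be written as an implicit class (a sketch, not from the original answers; like the simpler implicit-def approach above, the result type is only Traversable, not the original collection type):
object MoreFilterOps {
  implicit class TraversableFilterOps[T](val xs: Traversable[T]) extends AnyVal {
    def filterFirstTwo(f: T => Boolean): Traversable[T] = xs.filter(f).take(2)
  }
}

// import MoreFilterOps._
// Map("a" -> 1, "b" -> 2, "c" -> 3).filterFirstTwo(_._2 > 1)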