Two methods in one (Scala)

Starting my first project with Scala: a poker framework.
So I have the following class
class Card(rank1: CardRank, suit1: Suit) {
  val rank = rank1
  val suit = suit1
}
And a Utils object which contains two methods that do almost the same thing: they count the number of cards for each rank or suit.
def getSuits(cards: List[Card]) = {
  def getSuits(cards: List[Card], suits: Map[Suit, Int]): Map[Suit, Int] = {
    if (cards.isEmpty)
      return suits
    val suit = cards.head.suit
    val value = if (suits.contains(suit)) suits(suit) + 1 else 1
    getSuits(cards.tail, suits + (suit -> value))
  }
  getSuits(cards, Map[Suit, Int]())
}
def getRanks(cards: List[Card]): Map[CardRank, Int] = {
  def getRanks(cards: List[Card], ranks: Map[CardRank, Int]): Map[CardRank, Int] = {
    if (cards.isEmpty)
      return ranks
    val rank = cards.head.rank
    val value = if (ranks.contains(rank)) ranks(rank) + 1 else 1
    getRanks(cards.tail, ranks + (rank -> value))
  }
  getRanks(cards, Map[CardRank, Int]())
}
Is there any way I can "unify" these two methods into a single one with "field/method-as-parameter"?
Thanks

Yes, that would require a higher-order function (that is, a function that takes a function as a parameter) and type parameters/genericity:
def groupAndCount[A,B](elements: List[A], toCount: A => B): Map[B, Int] = {
  // could be your implementation, just with key instead of suit/rank:
  // change val suit = ... or val rank = ...
  // to val key = toCount(elements.head)
}
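For completeness, here is one way the body could look, closely following the recursive shape of the original methods (a sketch only; the helper name loop is arbitrary):
def groupAndCount[A, B](elements: List[A], toCount: A => B): Map[B, Int] = {
  // tail-recursive helper, same shape as the original getSuits/getRanks
  def loop(rest: List[A], counts: Map[B, Int]): Map[B, Int] = rest match {
    case Nil => counts
    case head :: tail =>
      val key = toCount(head)
      val value = if (counts.contains(key)) counts(key) + 1 else 1
      loop(tail, counts + (key -> value))
  }
  loop(elements, Map[B, Int]())
}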
then
def getSuits(cards: List[Card]) = groupAndCount(cards, {c : Card => c.suit})
def getRanks(cards: List[Card]) = groupAndCount(cards, {c: Card => c.rank})
You do not need the type parameter A; you could force the method to work only on Card, but that would be a pity.
For extra credit, you can use two parameter lists, and have
def groupAndCount[A,B](elements: List[A])(toCount: A => B): Map[B, Int] = ...
That is a little peculiarity of Scala's type inference: if you use two parameter lists, you will not need to annotate the card argument's type when defining the function:
def getSuits(cards: List[Card]) = groupAndCount(cards)(c => c.suit)
or just
def getSuits(cards: List[Card]) = groupAndCount(cards)(_.suit)
Of course, the library can help you with the implementation
def groupAndCount[A,B](l: List[A])(toCount: A => B): Map[B, Int] =
  l.groupBy(toCount).map{ case (k, elems) => (k, elems.length) }
although a hand-made implementation might be marginally faster.
A minor note: Card should be declared as a case class:
case class Card(rank: CardRank, suit: Suit)
// declaration done, nothing else needed
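With a case class you get the rank and suit accessors, structural equality and pattern matching for free, for example (King and Hearts are hypothetical values from your CardRank/Suit hierarchy, just for illustration):
val c = Card(King, Hearts)                    // no 'new' needed
c.rank                                        // accessor generated for you
c == Card(King, Hearts)                       // structural equality, true
c match { case Card(r, s) => s"$r of $s" }    // pattern matching on the fields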

Related

Subtract Seq[A] from Seq[B]

I have two classes, A and B. Both of them have the same property, id, plus many other different properties.
How can I subtract Seq[A] from Seq[B] by matching the ids?
This should work as long as the id fields of both classes have the same type.
val as: Seq[A] = ???
val bs: Seq[B] = ???
val asSet = as.iterator.map(a => a.id).toSet
val subtracted: Seq[B] = bs.filterNot(b => asSet(b.id))
Another feasible solution:
val seqSub = seqB.filterNot(x => seqA.exists(_.id == x.id))
Couldn't find an answer matching my definition of subtract, where duplicate elements aren't filtered out (e.g. Seq(1,2,2) subtract Seq(2) = Seq(1,2), whereas det0's definition gives Seq(1)), so posting it here.
trait IntId {
  def id: Int
}
case class A(id: Int) extends IntId
case class B(id: Int) extends IntId

val seqB = Seq(B(1), B(4), B(7), B(7), B(7))
val seqA = Seq(A(7))

// BSubtractA = Seq(B(1),B(4),B(7),B(7)), only remove one instance of id 7
val BSubtractA = seqA.foldLeft(seqB) {
  case (seqBAccumulated, a) =>
    val indexOfA = seqBAccumulated.map(_.id).indexOf(a.id)
    if (indexOfA >= 0) {
      seqBAccumulated.take(indexOfA) ++ seqBAccumulated.drop(indexOfA + 1)
    } else {
      seqBAccumulated
    }
}
Yes, there are shortcomings to this solution. For example, if seqA is larger than seqB, then it runs into null pointers (and I haven't refactored it into a def). Also, the performance could be improved so that it iterates fewer times over the input; however, this satisfied my use case.
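If the repeated scans ever matter, one possible variant (not from the answer above, just a sketch) is to pre-count the ids to remove and then walk seqB a single time:
// sketch: removes at most one B per matching A, walking seqB once
def subtractById(seqB: Seq[B], seqA: Seq[A]): Seq[B] = {
  val toDrop = scala.collection.mutable.Map.empty[Int, Int].withDefaultValue(0)
  seqA.foreach(a => toDrop(a.id) += 1)
  seqB.flatMap { b =>
    if (toDrop(b.id) > 0) { toDrop(b.id) -= 1; None }   // drop this occurrence
    else Some(b)                                        // keep it
  }
}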
That will be far cleaner:
val seqSub = seqB.filterNot(x => seqA.contains(x))

Scala sort by unknown number of fields

I have a simple class with N fields.
case class Book(a: UUID... z: String)
and a function:
def sort(books: Seq[Book], fields: Seq[SortingField]) = {...}
where
case class SortingField(field: String, asc: Boolean)
where field is a field of the Book class and asc is the sorting direction.
So, in advance I don't know which fields (from 0 to N) and sorting orders come into my function to sort a book collection. It may be just a single ID field or all existing fields of the class in a particular order.
How could it be implemented?
I would use the existing Ordering trait for this and use a function that maps from Book to a field, i.e. Ordering.by[Book, String](_.author). Then you can simply sort with books.sorted(myOrdering). If I define a helper method on Book's companion object, getting these orderings is very simple:
object Book {
def by[A: Ordering](fun: Book => A): Ordering[Book] = Ordering.by(fun)
}
case class Book(author: String, title: String, year: Int)
val xs = Seq(Book("Deleuze" /* and Guattari */, "A Thousand Plateaus", 1980),
Book("Deleuze", "Difference and Repetition", 1968),
Book("Derrida", "Of Grammatology", 1967))
xs.sorted(Book.by(_.title)) // A Thousand, Difference, Of Grammatology
xs.sorted(Book.by(_.year )) // Of Grammatology, Difference, A Thousand
Then, to chain the ordering by multiple fields, you can create a custom ordering that proceeds through the fields until one comparison is non-zero. For example, I can add an extension method andThen to Ordering like this:
implicit class OrderingAndThen[A](private val self: Ordering[A]) extends AnyVal {
  def andThen(that: Ordering[A]): Ordering[A] = new Ordering[A] {
    def compare(x: A, y: A): Int = {
      val a = self.compare(x, y)
      if (a != 0) a else that.compare(x, y)
    }
  }
}
So I can write:
val ayt = Book.by(_.author) andThen Book.by(_.year) andThen Book.by(_.title)
xs.sorted(ayt) // Difference, A Thousand, Of Grammatology
With the nice answer provided by @0__ I've come up with the following:
def by[A: Ordering](e: Book => A): Ordering[Book] = Ordering.by(e)
with
implicit class OrderingAndThen[A](private val self: Ordering[A]) extends AnyVal {
  def andThen(that: Ordering[A]): Ordering[A] = new Ordering[A] {
    def compare(x: A, y: A): Int = {
      val a = self.compare(x, y)
      if (a != 0) a else that.compare(x, y)
    }
  }
}
Next, I map the name of a class field and a direction to an actual ordering:
def toOrdering(name: String, r: Boolean): Ordering[Book] = {
(name match {
case "id" => Book.by(_.id)
case "name" => Book.by(_.name)
}) |> (o => if (r) o.reverse else o)
}
using a forward pipe operator:
implicit class PipedObject[A](value: A) {
def |>[B](f: A => B): B = f(value)
}
and finally I combine all the orderings with the reduce function:
val fields = Seq(SortedField("name", true), SortedField("id", false))
val order = fields.map(f => toOrdering(f.field, f.reverse)).reduce(combine(_, _))
coll.sorted(order)
where
val combine = (x: Ordering[Book], y: Ordering[Book]) => x andThen y
An alternate way is to use @tailrec:
import scala.annotation.tailrec

def orderingSeq[T](os: Seq[Ordering[T]]): Ordering[T] = new Ordering[T] {
  def compare(x: T, y: T): Int = {
    @tailrec def compare0(rest: Seq[Ordering[T]], result: Int): Int = result match {
      case 0 if rest.isEmpty => 0
      case 0 => compare0(rest.tail, rest.head.compare(x, y))
      case a => a
    }
    compare0(os, 0)
  }
}
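Putting it together with the toOrdering helper above (a usage sketch, not from the original post):
val order = orderingSeq(fields.map(f => toOrdering(f.field, f.reverse)))
coll.sorted(order)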
It is possible. But as far as I can see you will have to use reflection.
Additionally, you would have to change your SortingField class a bit as there is no way the scala compiler can figure out the right Ordering type class for each field.
Here is a simplified example.
import scala.reflect.ClassTag
/** You should be able to figure out the correct field ordering here. Use `reverse` to decide whether you want to sort ascending or descending. */
case class SortingField[T](field: String, ord: Ordering[T]) { type FieldType = T }
case class Book(a: Int, b: Long, c: String, z: String)
def sort[T](unsorted: Seq[T], fields: Seq[SortingField[_]])(implicit tag: ClassTag[T]): Seq[T] = {
  val bookClazz = tag.runtimeClass
  fields.foldLeft(unsorted) { case (sorted, currentField) =>
    // keep in mind that scala generates a getter method for field 'a'
    val field = bookClazz.getMethod(currentField.field)
    sorted.sortBy[currentField.FieldType](
      field.invoke(_).asInstanceOf[currentField.FieldType]
    )(currentField.ord)
  }
}
However, for sorting by multiple fields you would have to either sort the sequence multiple times or better yet compose the various orderings correctly.
So this is getting a bit more 'sophisticated' without any guarantees about correctness and completeness, but with a little test that it does not fail spectacularly:
import java.lang.reflect.Method

def sort[T](unsorted: Seq[T], fields: Seq[SortingField[_]])(implicit tag: ClassTag[T]): Seq[T] = {
  @inline def invokeGetter[A](field: Method, obj: T): A = field.invoke(obj).asInstanceOf[A]
  @inline def orderingByField[A](field: Method)(implicit ord: Ordering[A]): Ordering[T] = {
    Ordering.by[T, A](invokeGetter[A](field, _))
  }
  val bookClazz = tag.runtimeClass
  if (fields.nonEmpty) {
    val field = bookClazz.getMethod(fields.head.field)
    implicit val composedOrdering: Ordering[T] = fields.tail.foldLeft {
      orderingByField(field)(fields.head.ord)
    } { case (ordering, currentField) =>
      val field = bookClazz.getMethod(currentField.field)
      val subOrdering: Ordering[T] = orderingByField(field)(currentField.ord)
      new Ordering[T] {
        def compare(x: T, y: T): Int = {
          val upperLevelOrderingResult = ordering.compare(x, y)
          if (upperLevelOrderingResult == 0) {
            subOrdering.compare(x, y)
          } else {
            upperLevelOrderingResult
          }
        }
      }
    }
    unsorted.sorted(composedOrdering)
  } else {
    unsorted
  }
}
sort(
Seq[Book](
Book(1, 5L, "foo1", "bar1"),
Book(10, 50L, "foo10", "bar15"),
Book(2, 3L, "foo3", "bar3"),
Book(100, 52L, "foo4", "bar6"),
Book(100, 51L, "foo4", "bar6"),
Book(100, 51L, "foo3", "bar6"),
Book(11, 15L, "foo5", "bar7"),
Book(22, 45L, "foo6", "bar8")
),
Seq(
SortingField("a", implicitly[Ordering[Int]].reverse),
SortingField("b", implicitly[Ordering[Long]]),
SortingField("c", implicitly[Ordering[String]])
)
)
>> res0: Seq[Book] = List(Book(100,51,foo3,bar6), Book(100,51,foo4,bar6), Book(100,52,foo4,bar6), Book(22,45,foo6,bar8), Book(11,15,foo5,bar7), Book(10,50,foo10,bar15), Book(2,3,foo3,bar3), Book(1,5,foo1,bar1))
Case classes are Products, so you can iterate over all field values using instance.productIterator. This gives you the fields in order of declaration. You can also access them directly via their index. As far as I can see, there is, however, no way to get the field names; that would have to be done using reflection or macros. (Maybe some library such as Shapeless can already do that.)
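A quick illustration of the Product side of that, reusing the Book shape from the accepted answer:
case class Book(author: String, title: String, year: Int)
val b = Book("Deleuze", "A Thousand Plateaus", 1980)
b.productArity            // 3
b.productElement(2)       // 1980, access by declaration index (returns Any)
b.productIterator.toList  // List(Deleuze, A Thousand Plateaus, 1980)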
Another way would be to define the fields to sort by not with names but with functions:
case class SortingField[T](field: Book => T, asc: Boolean)(implicit val ordering: Ordering[T]) // note the val, so that ordering is a member
new SortingField(_.fieldName, true)
And then declare sort as:
def sort(books: Seq[Book], fields: Seq[SortingField[_]]) = {...}
And use the following compare method to implement the combined ordering:
def compare[T](b1: Book, b2: Book, field: SortingField[T]) =
field.ordering.compare(field.field(b1), field.field(b2))
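A possible way to wire that compare helper into sort, with asc handled by flipping the sign (a sketch under those assumptions, not part of the original answer):
def sort(books: Seq[Book], fields: Seq[SortingField[_]]): Seq[Book] =
  books.sortWith { (b1, b2) =>
    fields.iterator
      .map { f =>
        val c = compare(b1, b2, f)
        if (f.asc) c else -c        // flip direction for descending fields
      }
      .find(_ != 0)                 // first field where the books differ
      .exists(_ < 0)                // b1 comes first iff that comparison is negative
  }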

How to write this recursive groupBy function in Scala

Recently I have come across a very useful groupBy function that Groovy has made available on Iterable:
public static Map groupBy(Iterable self, List<Closure> closures)
Which you can use to perform a recursive groupBy on Lists and even Maps (see the example by mrhaki here).
I would like to write a function that does the same in Scala. But having just started my Scala journey, I am kind of lost on how I should go about defining and implementing this method. Especially the generics side of the function and the return type in this method's signature are way beyond my level.
I would need more experienced Scala developers to help me out here.
Is this following signature totally wrong or am I in the ball park?
def groupBy[A, K[_]](src: List[A], fs: Seq[(A) ⇒ K[_]]): Map[K[_], List[A]]
Also, how would I implement the recursion with the correct types?
This is a simple multigroup implementation:
implicit class GroupOps[A](coll: Seq[A]) {
def groupByKeys[B](fs: (A => B)*): Map[Seq[B], Seq[A]] =
coll.groupBy(elem => fs map (_(elem)))
}
val a = 1 to 20
a.groupByKeys(_ % 3, _ % 2) foreach println
If you really need some recursive type you'll need a wrapper:
sealed trait RecMap[K, V]
case class MapUnit[K, V](elem: V) extends RecMap[K, V] {
override def toString = elem.toString()
}
case class MapLayer[K, V](map: Map[K, RecMap[K, V]]) extends RecMap[K, V] {
override def toString = map.toString()
}
our definition changes to:
implicit class GroupOps[A](coll: Seq[A]) {
def groupByKeys[B](fs: (A => B)*): Map[Seq[B], Seq[A]] =
coll.groupBy(elem => fs map (_(elem)))
def groupRecursive[B](fs: (A => B)*): RecMap[B, Seq[A]] = fs match {
case Seq() => MapUnit(coll)
case f +: fs => MapLayer(coll groupBy f mapValues {_.groupRecursive(fs: _*)})
}
}
and a.groupRecursive(_ % 3, _ % 2) yields something more relevant to the question.
And finally, I rebuild the domain definition from the referenced article:
import java.text.SimpleDateFormat
import java.util.Date

case class User(name: String, city: String, birthDate: Date) {
  override def toString = name
}
implicit val date = new SimpleDateFormat("yyyy-MM-dd").parse(_: String)
val month = new SimpleDateFormat("MMM").format(_: Date)
val users = List(
User(name = "mrhaki", city = "Tilburg" , birthDate = "1973-9-7"),
User(name = "bob" , city = "New York" , birthDate = "1963-3-30"),
User(name = "britt" , city = "Amsterdam", birthDate = "1980-5-12"),
User(name = "kim" , city = "Amsterdam", birthDate = "1983-3-30"),
User(name = "liam" , city = "Tilburg" , birthDate = "2009-3-6")
)
now we can write
users.groupRecursive(_.city, u => month(u.birthDate))
and get
Map(Tilburg -> Map(Mar -> List(liam), Sep -> List(mrhaki)), New York -> Map(Mar -> List(bob)), Amsterdam -> Map(Mar -> List(kim), May -> List(britt)))
I decided to add another answer, due to a fully different approach.
You could actually get non-wrapped, properly typed maps, with some workarounds. I'm not very good at this, so it could perhaps be simplified.
The trick is to create a sequence of typed functions, which later produces a multi-level map using type classes and a type-path approach.
So here is the solution
sealed trait KeySeq[-V] {
  type values
}
case class KeyNil[V]() extends KeySeq[V] {
  type values = Seq[V]
}
case class KeyCons[K, V, Next <: KeySeq[V]](f: V => K, next: Next)
                                           (implicit ev: RecGroup[V, Next]) extends KeySeq[V] {
  type values = Map[K, Next#values]
  def #:[K1](f: V => K1) = new KeyCons[K1, V, KeyCons[K, V, Next]](f, this)
}
trait RecGroup[V, KS <: KeySeq[V]] {
  def group(seq: Seq[V], ks: KS): KS#values
}
implicit def groupNil[V]: RecGroup[V, KeyNil[V]] = new RecGroup[V, KeyNil[V]] {
  def group(seq: Seq[V], ks: KeyNil[V]) = seq
}
implicit def groupCons[K, V, Next <: KeySeq[V]](implicit ev: RecGroup[V, Next]): RecGroup[V, KeyCons[K, V, Next]] =
  new RecGroup[V, KeyCons[K, V, Next]] {
    def group(seq: Seq[V], ks: KeyCons[K, V, Next]) = seq.groupBy(ks.f) mapValues (_ groupRecursive ks.next)
  }
implicit def funcAsKey[K, V](f: V => K): KeyCons[K, V, KeyNil[V]] =
  new KeyCons[K, V, KeyNil[V]](f, KeyNil[V]())
implicit class GroupOps[V](coll: Seq[V]) {
  def groupRecursive[KS <: KeySeq[V]](ks: KS)(implicit g: RecGroup[V, KS]) =
    g.group(coll, ks)
}
Key functions are composed via the right-associative #: operator,
so if we define
def mod(m:Int) = (x:Int) => x % m
def even(x:Int) = x % 2 == 0
then
1 to 30 groupRecursive (even _ #: mod(3) #: mod(5) )
would yield a proper Map[Boolean, Map[Int, Map[Int, Seq[Int]]]] !!!
and if, as in the previous question, we would like to write
users.groupRecursive(((u:User)=> u.city(0)) #: ((u:User) => month(u.birthDate)))
we are building a Map[Char, Map[String, Seq[User]]] !

scala custom map

I'm trying to implement a new type, Chunk, that is similar to a Map. Basically, a "Chunk" is either a mapping from String -> Chunk, or a string itself.
Eg it should be able to work like this:
val m = new Chunk("some sort of value") // value chunk
assert(m.getValue == "some sort of value")
val n = new Chunk("key" -> new Chunk("value"), // nested chunks
"key2" -> new Chunk("value2"))
assert(n("key").getValue == "value")
assert(n("key2").getValue == "value2")
I have this mostly working, except that I am a little confused by how the + operator works for immutable maps.
Here is what I have now:
import scala.collection.immutable.HashMap

class Chunk(_map: Map[String, Chunk], _value: Option[String]) extends Map[String, Chunk] {
  def this(items: (String, Chunk)*) = this(items.toMap, None)
  def this(k: String) = this(new HashMap[String, Chunk], Option(k))
  def this(m: Map[String, Chunk]) = this(m, None)
  def +[B1 >: Chunk](kv: (String, B1)) = throw new Exception(":( do not know how to make this work")
  def -(k: String) = new Chunk(_map - k, _value)
  def get(k: String) = _map.get(k)
  def iterator = _map.iterator
  def getValue = _value.get
  def hasValue = _value.isDefined
  override def toString() = {
    if (hasValue) getValue
    else "Chunk(" + (for ((k, v) <- this) yield k + " -> " + v.toString).mkString(", ") + ")"
  }
  def serialize: String = {
    if (hasValue) getValue
    else "{" + (for ((k, v) <- this) yield k + "=" + v.serialize).mkString("|") + "}"
  }
}
object main extends App {
  val m = new Chunk("message_info" -> new Chunk("message_type" -> new Chunk("boom")))
  val n = m + ("c" -> new Chunk("boom2"))
}
Also, comments on whether in general this implementation is appropriate would be appreciated.
Thanks!
Edit: The algebraic data types solution is excellent, but there remains one issue.
def +[B1 >: Chunk](kv: (String, B1)) = Chunk(m + kv) // compiler hates this
def -(k: String) = Chunk(m - k) // compiler is pretty satisfied with this
The - operator here seems to work, but the + operator really wants me to return something of type B1 (I think)? It fails with the following issue:
overloaded method value apply with alternatives: (map: Map[String,Chunk])MapChunk <and> (elems: (String, Chunk)*)MapChunk cannot be applied to (scala.collection.immutable.Map[String,B1])
Edit2:
Xiefei answered this question: extending Map requires that I handle + with a supertype (B1) of Chunk, so I have to have some implementation for that; this will suffice:
def +[B1 >: Chunk](kv: (String, B1)) = m + kv
However, I don't ever really intend to use that one; instead, I will also include my own implementation that returns a Chunk, as follows:
def +(kv: (String, Chunk)):Chunk = Chunk(m + kv)
How about an Algebraic data type approach?
abstract sealed class Chunk
case class MChunk(elems: (String, Chunk)*) extends Chunk with Map[String, Chunk] {
  val m = Map[String, Chunk](elems: _*)
  def +[B1 >: Chunk](kv: (String, B1)) = m + kv
  def -(k: String) = m - k
  def iterator = m.iterator
  def get(s: String) = m.get(s)
}
case class SChunk(s: String) extends Chunk
// A 'Companion' object that provides 'constructors' and extractors..
object Chunk {
  def apply(s: String) = SChunk(s)
  def apply(elems: (String, Chunk)*) = MChunk(elems: _*)
  // just a couple of ideas...
  def unapply(sc: SChunk) = Option(sc).map(_.s)
  def unapply(smc: (String, MChunk)) = smc match {
    case (s, mc) => mc.get(s)
  }
}
Which you can use like:
val simpleChunk = Chunk("a")
val nestedChunk = Chunk("b" -> Chunk("B"))
// Use extractors to get the values.
val Chunk(s) = simpleChunk // s will be the String "a"
val Chunk(c) = ("b" -> nestedChunk) // c will be a Chunk: Chunk("B")
val Chunk(c) = ("x" -> nestedChunk) // will throw a match error, because there's no "x"
// pattern matching:
("x" -> mc) match {
case Chunk(w) => Some(w)
case _ => None
}
The unapply extractors are just a suggestion; hopefully you can mess with this idea till you get what you want.
The way it's written, there's no way to enforce that it can't be both a Map and a String at the same time. I would be looking at capturing the value using Either and adding whatever convenience methods you require:
case class Chunk(value:Either[Map[String,Chunk],String]) {
...
}
That will also force you to think about what you really need to do in situations such as adding a key/value pair to a Chunk that represents a String.
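A rough sketch of that shape; the method names here are illustrative assumptions, not prescribed by the answer:
case class Chunk(value: Either[Map[String, Chunk], String]) {
  def hasValue: Boolean = value.isRight
  def getValue: String = value match {
    case Right(s) => s
    case Left(_)  => sys.error("not a value chunk")
  }
  def apply(key: String): Chunk = value match {
    case Left(m)  => m(key)
    case Right(_) => sys.error("a value chunk has no keys")
  }
  def +(kv: (String, Chunk)): Chunk = value match {
    case Left(m)  => Chunk(Left(m + kv))
    case Right(_) => sys.error("cannot add a key to a value chunk")
  }
}
object Chunk {
  def apply(s: String): Chunk = Chunk(Right(s))                          // value chunk
  def apply(elems: (String, Chunk)*): Chunk = Chunk(Left(elems.toMap))   // map chunk
}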
Have you considered using composition instead of inheritance? So, instead of Chunk extending Map[String, Chunk] directly, just have Chunk internally keep an instance of Map[String, Chunk] and provide the extra methods that you need, and otherwise delegating to the internal map's methods.
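For example (a minimal sketch of the composition idea, not the poster's code):
// wrap a Map instead of extending it, and expose only what is needed
class Chunk(map: Map[String, Chunk], value: Option[String]) {
  def this(items: (String, Chunk)*) = this(items.toMap, None)
  def this(v: String) = this(Map.empty, Some(v))
  def getValue: String = value.get
  def hasValue: Boolean = value.isDefined
  // delegate the map operations you actually need to the wrapped map
  def apply(key: String): Chunk = map(key)
  def get(key: String): Option[Chunk] = map.get(key)
  def +(kv: (String, Chunk)): Chunk = new Chunk(map + kv, value)   // returns a Chunk, not a Map
  def -(key: String): Chunk = new Chunk(map - key, value)
  def iterator: Iterator[(String, Chunk)] = map.iterator
}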
def +(kv: (String, Chunk)):Chunk = new Chunk(_map + kv, _value)
override def +[B1 >: Chunk](kv: (String, B1)) = _map + kv
What you need is a new + method, and you also need to implement the one declared in the Map trait.

scala: adding a method to List?

I was wondering how to go about adding a 'partitionCount' method to Lists, e.g.:
(not tested, shamelessly based on List.scala):
Do I have to create my own sub-class and an implicit type converter?
(My original attempt had a lot of problems, so here is one based on @Easy's answer):
class MyRichList[A](targetList: List[A]) {
  def partitionCount(p: A => Boolean): (Int, Int) = {
    var btrue = 0
    var bfalse = 0
    var these = targetList
    while (!these.isEmpty) {
      if (p(these.head)) { btrue += 1 } else { bfalse += 1 }
      these = these.tail
    }
    (btrue, bfalse)
  }
}
and here is a little more general version that's good for Seq[...]:
implicit def seqToRichSeq[T](s: Seq[T]) = new MyRichSeq(s)
class MyRichSeq[A](targetSeq: Seq[A]) {
  def partitionCount(p: A => Boolean): (Int, Int) = {
    var btrue = 0
    var bfalse = 0
    var these = targetSeq
    while (!these.isEmpty) {
      if (p(these.head)) { btrue += 1 } else { bfalse += 1 }
      these = these.tail
    }
    (btrue, bfalse)
  }
}
You can use implicit conversion like this:
implicit def listToMyRichList[T](l: List[T]) = new MyRichList(l)
class MyRichList[T](targetList: List[T]) {
def partitionCount(p: T => Boolean): (Int, Int) = ...
}
and instead of this you need to use targetList. You don't need to extend List. In this example I create a simple wrapper MyRichList that will be used implicitly.
You can generalize the wrapper further by defining it for Traversable, so that it will work for many other collection types and not only for Lists:
implicit def listToMyRichTraversable[T](l: Traversable[T]) = new MyRichTraversable(l)
class MyRichTraversable[T](target: Traversable[T]) {
def partitionCount(p: T => Boolean): (Int, Int) = ...
}
Also note that the implicit conversion will be used only if it's in scope. This means that you need to import it (unless you are using it in the same scope where you have defined it).
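For example, if the conversion lives in an object (the object name here is just an illustration), you bring it into scope with an import:
object ListConversions {
  implicit def listToMyRichList[T](l: List[T]) = new MyRichList(l)
}
// at the use site:
import ListConversions.listToMyRichList
List(1, 2, 3, 4).partitionCount(_ % 2 == 0)   // the implicit wrapper now applies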
As already pointed out by Easy Angel, use implicit conversion:
implicit def list2richList[A](input: List[A]) = new RichList(input)
class RichList[A](val source: List[A]) {
def partitionCount(p: A => Boolean): (Int, Int) = {
val partitions = source partition(p)
(partitions._1.size, partitions._2.size)
}
}
Also note that you can easily define partitionCount in terms of partition. Then you can simply use:
val list = List(1, 2, 3, 5, 7, 11)
val (odd, even) = list partitionCount {_ % 2 != 0}
If you are curious how it works, just remove implicit keyword and call the list2richList conversion explicitly (this is what the compiler does transparently for you when implicit is used).
val (odd, even) = list2richList(list) partitionCount {_ % 2 != 0}
Easy Angel is right, but the method seems pretty useless. You already have count to get the number of "positives", and of course the number of "negatives" is size minus count.
However, to contribute something positive, here is a more functional version of your original method:
def partitionCount[A](iter: Traversable[A], p: A => Boolean): (Int, Int) =
  iter.foldLeft((0, 0)) { case ((x, y), a) => if (p(a)) (x + 1, y) else (x, y + 1) }
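For example, with the list from above:
val list = List(1, 2, 3, 5, 7, 11)
partitionCount(list, (i: Int) => i % 2 != 0)   // (5, 1): five odd numbers, one even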