Combining zip and map operations in Scala

I understand that map and flatten operations can be combined into flatMap, and filter and map into collect, in Scala.
Is there any way I can combine zip/zipWithIndex with a map operation?

There is no single operation in the standard library, as far as I know, but there is an extension method on various tuples, called zipped. This method returns an object that provides methods like map and flatMap, which perform the zipping in step with the mapping:
(xs, ys).zipped.map((x, y) => x * y)
This object is also implicitly convertible to Traversable, so you can call more complex methods like mkString or foldLeft.
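For example, a quick sketch with made-up values (the conversion to Traversable kicks in as soon as you call one of its methods):
val xs = Seq(1, 2, 3)
val ys = Seq(10, 20, 30)
(xs, ys).zipped.mkString("[", ", ", "]")                          // "[(1,10), (2,20), (3,30)]"
(xs, ys).zipped.foldLeft(0) { case (acc, (x, y)) => acc + x * y } // 140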

If, for some reason, you really wanted a combined version you could write one yourself.
import scala.collection.generic.CanBuildFrom

implicit class SeqOps[A](s: Seq[A]) {
  def zipWithIndex2[A1 >: A, B >: Int, That](f: (A, Int) => (A1, B))(implicit bf: CanBuildFrom[Seq[A], (A1, B), That]): That = {
    val b = bf(s)  // builder for the resulting collection
    var i = 0
    for (x <- s) {
      b += f(x, i)
      i += 1
    }
    b.result()
  }
}
Call it like:
s.zipWithIndex2 {
  case (a, b) => (a + "2", b + 2)
}
I'd really think about this twice though and most likely go with any of the other approaches that have been suggested.
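For comparison, the plain two-step alternative referred to above is already quite short:
val s = Seq("a", "b", "c")
s.zipWithIndex.map { case (a, i) => (a + "2", i + 2) } // Seq((a2,2), (b2,3), (c2,4))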

Related

Scala: different foldRight implementations in list

I've just figured out that Scala (I'm on 2.12) provides completely different implementations of foldRight for immutable and mutable lists.
Immutable list (List.scala):
override def foldRight[B](z: B)(op: (A, B) => B): B =
  reverse.foldLeft(z)((right, left) => op(left, right))
Mutable list (LinearSeqOptimized.scala):
def foldRight[B](z: B)(@deprecatedName('f) op: (A, B) => B): B =
  if (this.isEmpty) z
  else op(head, tail.foldRight(z)(op))
Now I'm just curious.
Could you please explain to me why it was implemented so differently?
The definition in List does indeed override the foldRight in LinearSeqOptimized. The implementation in LinearSeqOptimized
def foldRight[B](z: B)(@deprecatedName('f) op: (A, B) => B): B =
  if (this.isEmpty) z
  else op(head, tail.foldRight(z)(op))
looks exactly like the canonical definition of foldRight as a catamorphism from your average theory book. However, as was noticed in SI-2818, this implementation is not stack-safe (it throws an unexpected StackOverflowError for long lists). Therefore, it was replaced by a stack-safe reverse.foldLeft in this commit. foldLeft is stack-safe because it is implemented with a while loop:
def foldLeft[B](z: B)(@deprecatedName('f) op: (B, A) => B): B = {
  var acc = z
  var these = this
  while (!these.isEmpty) {
    acc = op(acc, these.head)
    these = these.tail
  }
  acc
}
That hopefully explains why it was overridden in List. It doesn't explain why it was not overridden in other classes. I guess it's simply because the mutable data structures are used less often and quite differently anyway (often as buffers and accumulators during the construction of immutable ones).
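A quick way to see the practical effect on 2.12 (the version mentioned in the question): folding right over a long immutable List no longer blows the stack, because it delegates to reverse.foldLeft:
val xs = List.range(0, 1000000)
xs.foldRight(0L)(_ + _) // 499999500000, no StackOverflowError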
Hint: there is a Blame button in the top right corner of every file on GitHub, so you can always track down what was changed when, by whom, and why.

Scala: Difference between Map.map and Map.transform? Why does Map.map require pattern matching in its parameter?

For an immutable Map,
val original = Map("A"->1, "B"->2)
I can either use
original.map { case (k, v) => (k, v + 1) }
Or
original.transform((_, v) => v + 1)
to transform the values.
But why does the map() method require case pattern matching while transform() doesn't? Is it because these methods are defined in different implicit types?
Someone has marked my question as a duplicate of another question, Difference between mapValues and transform in Map. It is not the same: I am asking about Map.map, not Map.mapValues. I am also asking about the different ways of using the two methods.
With the map method you can change (I don't want to use the word transform here) the whole Map, converting it to another Map, a List, etc.:
val m = Map(1->"a")
m.map { case (k,v) => (k+1) -> (v + 1) } // Map(2 -> a1)
m.map { case (k,v) => k+v } // List(1a)
With the transform method you can change only the values, taking their keys into account:
m.transform { case (k, v) => v + 1 } // Map(1 -> a1)
Transform takes a function with two parameters: the first is the key and the second is the value. Pattern matching is not needed, since the two values are passed in individually.
On the other hand, the function passed to map takes a single tuple containing the key and value of the element. Pattern matching is used to break this tuple into its components. You don't have to use pattern matching, but that would mean working with the tuple object instead of its contents.
The difference is in the functions they receive. As you can see in the API:
def transform[W, That](f: (K, V) ⇒ W)(implicit bf: CanBuildFrom[Map[K, V], (K, W), That]): That
def map[B, That](f: ((K, V)) ⇒ B)(implicit bf: CanBuildFrom[Map[K, V], B, That]): That
transform's function takes the key and the value as two separate arguments, f: (K, V) ⇒ W, while map's function takes a single value, f: ((K, V)) ⇒ B, and for a Map that single value is the key-value tuple.
So if you want to treat the key and value separately in an easy-to-read way, you should use the case keyword.
You can also do something like this, but it is far less readable:
original.map(r => (r._1, r._2+1))

Iterable with two elements?

We have Option which is an Iterable over 0 or 1 elements.
I would like to have such a thing with two elements. The best I have is
Array(foo, bar).map{...}, while what I would like is:
(foo, bar).map{...}
(such that Scala recognizes there are two elements in the Iterable).
Does such a construction exist in the standard library?
EDIT: another solution is to create a map method:
def map(a:Foo) = {...}
val (mappedFoo, mappedBar) = (map(foo), map(bar))
If all you want to do is map on tuples of the same type, a simple version is:
implicit class DupleOps[T](t: (T, T)) {
  def map[B](f: T => B): (B, B) = (f(t._1), f(t._2))
}
Then you can do the following:
val t = (0,1)
val (x, y) = t.map(_ + 1) // x = 1, y = 2
There's no specific type in the scala standard library for mapping over exactly 2 elements.
I can suggest the following (I suppose foo and bar have the same type T):
(foo, bar)                // -> Tuple2[T, T]
  .productIterator        // -> Iterator[Any]
  .map(_.asInstanceOf[T]) // -> Iterator[T]
  .map(x => ???)          // -> do some work with each x here
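To make that concrete, here is a hypothetical usage with T = Int (the values are only for illustration):
val (foo, bar) = (1, 2)
val doubled = (foo, bar).productIterator
  .map(_.asInstanceOf[Int])
  .map(_ * 2)
  .toList // List(2, 4)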
No, it doesn't.
You could
Make one yourself.
Write an implicit conversion from 2-tuples to a Seq of the common supertype. But this won't yield 2-tuples from operations.
object TupleOps {
  implicit def tupleToSeq[C, A <: C, B <: C](tuple: (A, B)): Seq[C] = Seq(tuple._1, tuple._2)
}
import TupleOps._
(0, 1).map(_ + 1) // Seq(1, 2): you get a Seq back, not a tuple
Use HLists from shapeless. These provide operations on heterogeneous lists, whereas you (probably?) have a homogeneous list, but it should work.
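For completeness, a minimal sketch of the shapeless route (assuming shapeless 2.x is on the classpath; the Poly1 object inc is made up for this example):
import shapeless._

object inc extends Poly1 {
  implicit def caseInt = at[Int](_ + 1) // how to handle Int elements
}

val mapped = (0 :: 1 :: HNil) map inc // 1 :: 2 :: HNil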

Scala: Implicit evidence of Map with Traversable values?

I'm trying to write a generic invert method that takes a Map from keys of type A to values that are collections of type B and converts it to a Map with keys of type B and collections of A using the same original collection type.
My goal is to make this method a member of a MyMap[A,B] class that offers extensions of the basic library methods, where Maps are implicitly converted to MyMaps. I am able to do this implicit conversion for a generic map, but I want to further specify that the invert method should only work in the case where B is a collection.
I lack the understanding of Scala's collections framework needed to accomplish this - I've scoured the net for thorough introductory explanations of the signatures, which look like a hodgepodge of Repr, CC, That, and CanBuildFrom, but I don't really understand how all these pieces fit together well enough to construct the method signature on my own. Please don't just give me the working signature for this case - I want to understand how the signatures of methods that use generic collections work in a broader sense, so I'm able to do this independently going forward. Alternatively, feel free to reference an online resource that elaborates on this - I was unable to find one that was comprehensive and clear.
EDIT
I seem to have gotten it to work with the following code. If I did something wrong or you see something that can be improved, please comment & answer with a more optimal alternative.
import scala.collection.generic.CanBuildFrom
import scala.collection.mutable.Builder

class MyMap[A, B](val _map: Map[A, B]) {
  def invert[E, CC[E]](
    implicit ev1: B =:= CC[E],
    ev2: CC[E] <:< TraversableOnce[E],
    cbf: CanBuildFrom[CC[A], A, CC[A]]
  ): Map[E, CC[A]] = {
    val inverted = scala.collection.mutable.Map.empty[E, Builder[A, CC[A]]]
    for {
      (key, values) <- _map
      value <- values.asInstanceOf[CC[E]]
    } {
      if (!inverted.contains(value)) {
        inverted += (value -> cbf())
      }
      inverted.get(value).foreach(_ += key)
    }
    inverted.map({ case (k, v) => (k -> v.result) }).toMap
  }
}
I started from your code and ended up with this:
import scala.collection.generic.CanBuildFrom
import scala.collection.mutable.Builder

implicit class MyMap[A, B, C[B] <: Traversable[B]](val _map: Map[A, C[B]]) {
  def invert(implicit cbf: CanBuildFrom[C[A], A, C[A]]): Map[B, C[A]] = {
    val inverted = scala.collection.mutable.Map.empty[B, Builder[A, C[A]]]
    for ((k, vs) <- _map; v <- vs) {
      inverted.getOrElseUpdate(v, cbf()) += k
    }
    inverted.map({ case (k, v) => (k -> v.result) }).toMap
  }
}
val map = Map("a"-> List(1,2,3), "b" -> List(1,2))
println(map.invert) //Map(2 -> List(a, b), 1 -> List(a, b), 3 -> List(a))
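The original collection type of the values is preserved as well; for instance, with Set values (a small check, assuming the implicit class above is in scope; the printed order may differ):
val setMap = Map("a" -> Set(1, 2), "b" -> Set(2))
println(setMap.invert) // Map(1 -> Set(a), 2 -> Set(a, b))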

Cartesian Product and Map Combined in Scala

This is a followup to: Expand a Set of Sets of Strings into Cartesian Product in Scala
The idea is you want to take:
val sets = Set(Set("a","b","c"), Set("1","2"), Set("S","T"))
and get back:
Set("a&1&S", "a&1&T", "a&2&S", ..., "c&2&T")
A general solution is:
def combine[A](f: (A, A) => A)(xs: Iterable[Iterable[A]]) =
  xs.reduceLeft { (x, y) => x.view.flatMap { a => y.map(f(a, _)) } }
used as follows:
val expanded = combine{(x:String, y:String) => x + "&" + y}(sets).toSet
Theoretically, there should be a way to take input of type Set[Set[A]] and get back a Set[B]. That is, to convert the type while combining the elements.
An example usage would be to take in sets of strings (as above) and output the lengths of their concatenation. The f function in combine would be something of the form:
(a:Int, b:String) => a + b.length
I was not able to come up with an implementation. Does anyone have an answer?
If you really want your combiner function to do the mapping, you can use a fold but as Craig pointed out you'll have to provide a seed value:
def combine[A, B](f: B => A => B, zero: B)(xs: Iterable[Iterable[A]]) =
  xs.foldLeft(Iterable(zero)) { (x, y) =>
    x.view.flatMap(b => y.map(f(b)))
  }
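For instance, the length example from the question would look like this with the seed passed explicitly (using the sets value defined above):
combine((b: Int) => (s: String) => b + s.length, 0)(sets).toSet // Set(3): every combination is three one-character strings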
The fact that you need such a seed value follows from the combiner/mapper function type (B, A) => B (or, as a curried function, B => A => B). Clearly, to map the first A you encounter, you're going to need to supply a B.
You can make it somewhat simpler for callers by using a Zero type class:
trait Zero[T] {
  def zero: T
}
object Zero {
  implicit object IntHasZero extends Zero[Int] {
    val zero = 0
  }
  // ... etc ...
}
Then the combine method can be defined as:
def combine[A, B: Zero](f: B => A => B)(xs: Iterable[Iterable[A]]) =
  xs.foldLeft(Iterable(implicitly[Zero[B]].zero)) { (x, y) =>
    x.view.flatMap(b => y.map(f(b)))
  }
Usage:
combine((b: Int) => (a: String) => b + a.length)(sets)
Scalaz provides a Zero type class, along with a lot of other goodies for functional programming.
The problem that you're running into is that reduce(Left|Right) takes a function (A, A) => A which doesn't allow you to change the type. You want something more like foldLeft which takes (B, A) ⇒ B, allowing you to accumulate an output of a different type. folds need a seed value though, which can't be an empty collection here. You'd need to take xs apart into a head and tail, map the head iterable to be Iterable[B], and then call foldLeft with the mapped head, the tail, and some function (B, A) => B. That seems like more trouble than it's worth though, so I'd just do all the mapping up front.
def combine[A, B](f: (B, B) => B)(g: A => B)(xs: Iterable[Iterable[A]]) =
  xs.map(_.map(g)).reduceLeft { (x, y) => x.view.flatMap { a => y.map(f(a, _)) } }
val sets = Set(Set(1, 2, 3), Set(3, 4), Set(5, 6, 7))
val expanded = combine{(x: String, y: String) => x + "&" + y}{(i: Int) => i.toString}(sets).toSet