There's a great tutorial here that seems to suggest to me that the Writer Monad is basically a special-case tuple object that does operations on behalf of (A, B). The writer accumulates values on the left (which is A), and that A has a corresponding Monoid with it (hence it can accumulate or mutate state). If A is a collection, then it accumulates.
The State Monad is also an object that deals with an internal tuple. They both can be flatMap'd, map'd, etc., and the operations seem the same to me. How are they different? (Please respond with a Scala example; I'm not familiar with Haskell.) Thanks!
Your intuition that these two monads are closely related is exactly right. The difference is that Writer is much more limited, in that it doesn't allow you to read the accumulated state (until you cash out at the end). The only thing you can do with the state in a Writer is tack more stuff onto the end.
More concisely, State[S, A] is a kind of wrapper for S => (S, A), while Writer[W, A] is a wrapper for (W, A).
Consider the following usage of Writer:
import scalaz._, Scalaz._
def addW(x: Int, y: Int): Writer[List[String], Int] =
  Writer(List(s"$x + $y"), x + y)
val w = for {
  a <- addW(1, 2)
  b <- addW(3, 4)
  c <- addW(a, b)
} yield c
Now we can run the computation:
scala> val (log, res) = w.run
log: List[String] = List(1 + 2, 3 + 4, 3 + 7)
res: Int = 10
We could do exactly the same thing with State:
def addS(x: Int, y: Int) =
  State((log: List[String]) => (log |+| List(s"$x + $y"), x + y))

val s = for {
  a <- addS(1, 2)
  b <- addS(3, 4)
  c <- addS(a, b)
} yield c
And then:
scala> val (log, res) = s.run(Nil)
log: List[String] = List(1 + 2, 3 + 4, 3 + 7)
res: Int = 10
But this is a little more verbose, and we could also do lots of other things with State that we couldn't do with Writer.
So the moral of the story is that you should use Writer whenever you can—your solution will be cleaner, more concise, and you'll get the satisfaction of having used the appropriate abstraction.
Very often Writer won't give you all the power you need, though, and in those cases State will be waiting for you.
tl;dr State is Read and Write while Writer is, well, only write.
With State you have access to the previous data stored and you can use this data in your current computation:
// (sketch: `A` and `calculateNewValueBasedOnState` stand in for your own types/logic)
def myComputation(x: A) =
  State((myState: List[A]) => {
    val newValue = calculateNewValueBasedOnState(x, myState)
    (myState |+| List(newValue), newValue)
  })
With Writer you can store data in some object you don't have access to; you can only write to that object.
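To make the read/write distinction concrete, here is a minimal, self-contained sketch with hypothetical MyWriter and MyState types (my illustration, not the scalaz API): Writer's flatMap can only append to the log, while State's flatMap hands the accumulated state to the next step.

// Writer-like: flatMap only ever appends to the log; `f` never sees it.
case class MyWriter[A](log: List[String], value: A) {
  def map[B](f: A => B): MyWriter[B] = MyWriter(log, f(value))
  def flatMap[B](f: A => MyWriter[B]): MyWriter[B] = {
    val next = f(value) // `f` receives only `value`, never `log`
    MyWriter(log ++ next.log, next.value)
  }
}

// State-like: flatMap threads the state through, so each step can read it.
case class MyState[S, A](run: S => (S, A)) {
  def map[B](f: A => B): MyState[S, B] =
    MyState { s =>
      val (s1, a) = run(s)
      (s1, f(a))
    }
  def flatMap[B](f: A => MyState[S, B]): MyState[S, B] =
    MyState { s0 =>
      val (s1, a) = run(s0) // the accumulated state is visible here...
      f(a).run(s1)          // ...and can influence the next step
    }
}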
I can do an elementwise operation like sum using the zipped method. Say I have two lists L1 and L2 as shown below:
val L1 = List(1,2,3,4)
val L2 = List(5,6,7,8)
I can take the elementwise sum in the following way:
(L1,L2).zipped.map(_+_)
and the result is
List(6, 8, 10, 12)
as expected.
I am using the zipped method in my actual code, but it takes too much time. In reality my list size is more than 1000, I have more than 1000 lists, and my algorithm is iterative, where the number of iterations could be up to one billion.
In the code I have to do the following:
list = ((L1, L2).zipped.map(_ + _).map(_ * math.random), L3).zipped.map(_ + _)
The sizes of L1, L2, and L3 are the same. Moreover, I have to execute my actual code on a cluster.
What is the fastest way to take elementwise sum of Lists in Scala?
One option would be to use a streaming implementation; taking advantage of laziness may increase performance.
An example using LazyList (introduced in Scala 2.13):
def usingLazyList(l1: LazyList[Double], l2: LazyList[Double], l3: LazyList[Double]): LazyList[Double] =
  ((l1 zip l2) zip l3).map {
    case ((a, b), c) =>
      ((a + b) * math.random()) + c
  }
And an example using fs2.Stream (provided by the fs2 library):
import fs2.Stream
import cats.effect.IO
def usingFs2Stream(s1: Stream[IO, Double], s2: Stream[IO, Double], s3: Stream[IO, Double]): Stream[IO, Double] =
  s1.zipWith(s2) {
    case (a, b) =>
      (a + b) * math.random()
  }.zipWith(s3) {
    case (acc, c) =>
      acc + c
  }
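For completeness, a usage sketch (an assumption on my part: cats-effect 3 on the classpath): an fs2 Stream is only a description, so it must be compiled and run to produce values.

import cats.effect.unsafe.implicits.global

val s1 = Stream.emits(List(1.0, 2.0, 3.0)).covary[IO]
val s2 = Stream.emits(List(4.0, 5.0, 6.0)).covary[IO]
val s3 = Stream.emits(List(7.0, 8.0, 9.0)).covary[IO]

usingFs2Stream(s1, s2, s3).compile.toList.unsafeRunSync() // List of combined values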
However, if those are still too slow, the best alternative would be to use plain arrays.
Here is an example using ArraySeq (also introduced in Scala 2.13), which at least preserves immutability. You may use raw arrays if you prefer, but take care.
(If you want, you may also use the collections-parallel module to be even more performant.)
import scala.collection.immutable.ArraySeq
import scala.collection.parallel.CollectionConverters._
def usingArraySeq(a1: ArraySeq[Double], a2: ArraySeq[Double], a3: ArraySeq[Double]): ArraySeq[Double] = {
  val length = a1.length
  val arr = Array.ofDim[Double](length)

  (0 until length).par.foreach { i =>
    arr(i) = ((a1(i) + a2(i)) * math.random()) + a3(i)
  }

  ArraySeq.unsafeWrapArray(arr)
}
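A quick usage sketch with illustrative data (note that the .par call above assumes the scala-parallel-collections module is on the classpath):

val xs = ArraySeq.fill(1000)(math.random())
val ys = ArraySeq.fill(1000)(math.random())
val zs = ArraySeq.fill(1000)(math.random())

val out = usingArraySeq(xs, ys, zs) // same length, combined elementwise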
The elements of the resulting list should alternate between the elements of the arguments. Assume that the two arguments have the same length.
USE RECURSION
My code is as follows:
import scala.collection.mutable.ListBuffer

val finalString = new ListBuffer[Int]
val buff2 = new ListBuffer[Int]

def alternate(xs: List[Int], ys: List[Int]): List[Int] = {
  while (xs.nonEmpty) {
    finalString += xs.head + ys.head
    alternate(xs.tail, ys.tail)
  }
  return finalString.toList
}
EXPECTED RESULT:
alternate(List(1, 3, 5), List(2, 4, 6)) = List(1, 2, 3, 4, 5, 6)
As for the output: I don't get any output. The program just keeps running and never terminates.
Are there any Scala experts?
There are a few problems with the recursive solutions suggested so far (including yours, which would actually work if you replaced while with if): appending to the end of a list is a linear operation (as are taking the .length of a list and accessing its elements by index), which makes the whole thing quadratic, so don't do that. Also, if the lists are long, recursion may require a lot of space on the stack; you should use tail recursion whenever possible.
Here is a solution that is free of both those problems: it builds the output backwards, by prepending elements to the list (constant time operation) rather than appending them, and reverses the result at the end. It is also tail-recursive: the recursive call is the last operation in the function, which allows the compiler to convert it into a loop, so that it will only use a single stack frame for execution regardless of the size of the lists.
import scala.annotation.tailrec

@tailrec
def alternate(
  a: List[Int],
  b: List[Int],
  result: List[Int] = Nil
): List[Int] = (a, b) match {
  case (Nil, _) | (_, Nil)  => result.reverse
  case (ah :: at, bh :: bt) => alternate(at, bt, bh :: ah :: result)
}
(If the lists are of different lengths, the whole thing stops when the shorter one ends, and whatever is left of the longer one is thrown out. You may want to modify the first case, perhaps splitting it in two as sketched below, if you want different behavior.)
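For instance, here is a sketch of that split (my variation, not part of the answer above), which keeps the leftover tail of the longer list instead of dropping it:

import scala.annotation.tailrec

@tailrec
def alternateKeepRest(
  a: List[Int],
  b: List[Int],
  result: List[Int] = Nil
): List[Int] = (a, b) match {
  case (Nil, rest) => result.reverse ++ rest // leftover of `b`, if any
  case (rest, Nil) => result.reverse ++ rest // leftover of `a`, if any
  case (ah :: at, bh :: bt) => alternateKeepRest(at, bt, bh :: ah :: result)
}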
BTW, your own solution is actually better than most suggested here: it is nearly tail-recursive (or rather can be made tail-recursive if you replace your while with an if/else), and appending to a ListBuffer isn't nearly as bad as appending to a List. But using mutable state is generally considered a "code smell" in Scala and should be avoided (that's one of the main ideas behind using recursion instead of loops in the first place). A sketch of that repair follows.
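Here is that repair as runnable code (a sketch, assuming, as the question states, that both lists have the same length; it also appends the two heads separately, where your version appended xs.head + ys.head, i.e. their sum):

import scala.collection.mutable.ListBuffer

val buffer = new ListBuffer[Int]

def alternate(xs: List[Int], ys: List[Int]): List[Int] =
  if (xs.nonEmpty) {
    buffer += xs.head
    buffer += ys.head
    alternate(xs.tail, ys.tail) // tail position: the recursive call is the result
  } else buffer.toList

alternate(List(1, 3, 5), List(2, 4, 6)) // List(1, 2, 3, 4, 5, 6)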
The condition xs.nonEmpty is always true, so you have an infinite while loop.
Maybe you meant if instead of while.
A more Scala-ish approach would be something like:
def alternate(xs: List[Int], ys: List[Int]): List[Int] = {
  xs.zip(ys).flatMap { case (x, y) => List(x, y) }
}
alternate(List(1,3,5), List(2,4,6))
// List(1, 2, 3, 4, 5, 6)
A recursive solution using match
def alternate[T](a: List[T], b: List[T]): List[T] =
  (a, b) match {
    case (h1 :: t1, h2 :: t2) =>
      h1 +: h2 +: alternate(t1, t2)
    case _ =>
      a ++ b
  }
This could be more efficient at the cost of clarity.
Update
This is the more efficient solution:
def alternate[T](a: List[T], b: List[T]): List[T] = {
  @annotation.tailrec
  def loop(a: List[T], b: List[T], res: List[T]): List[T] =
    (a, b) match {
      case (h1 :: t1, h2 :: t2) =>
        loop(t1, t2, h2 +: h1 +: res)
      case _ =>
        // reverse the accumulator first, so any leftover tail lands at the end
        res.reverse ++ a ++ b
    }
  loop(a, b, Nil)
}
This retains the original function signature but uses an inner function that is an efficient, tail-recursive implementation of the algorithm.
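A quick sanity check of the behavior (including what happens to the leftover of a longer list):

alternate(List(1, 3, 5), List(2, 4, 6)) // List(1, 2, 3, 4, 5, 6)
alternate(List(1, 3, 5, 7), List(2, 4)) // List(1, 2, 3, 4, 5, 7): leftovers at the end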
You're accessing variables from outside the method, which is bad. I would suggest something like the following:
object Main extends App {
  val l1 = List(1, 3, 5)
  val l2 = List(2, 4, 6)

  def alternate[A](l1: List[A], l2: List[A]): List[A] = {
    if (l1.isEmpty || l2.isEmpty) List()
    else List(l1.head, l2.head) ++ alternate(l1.tail, l2.tail)
  }

  println(alternate(l1, l2))
}
Still recursive but without accessing state from outside the method.
Assuming both lists are of the same length, you can use a ListBuffer to build up the alternating list. alternate is a pure function:
import scala.collection.mutable.ListBuffer

object Alternate extends App {
  def alternate[T](xs: List[T], ys: List[T]): List[T] = {
    val buffer = new ListBuffer[T]
    for ((x, y) <- xs.zip(ys)) {
      buffer += x
      buffer += y
    }
    buffer.toList
  }

  alternate(List(1, 3, 5), List(2, 4, 6)).foreach(println)
}
Suppose that I use a sequence of various maps and/or flatMaps to generate a sequence of collections. Is it possible to access information about the "current" collection from within any of those methods? For example, without knowing anything specific about the functions used in the previous maps or flatMaps, and without using any intermediate declarations, how can I get the maximum value (or length, or first element, etc.) of the collection upon which the last map acts?
List(1, 2, 3)
  .flatMap(x => f(x) /* some unknown function */)
  .map(x => x + ??? /* what is the max element of the collection? */)
Edit for clarification:
In the example, I'm not looking for the max (or whatever) of the initial List. I'm looking for the max of the collection after the flatMap has been applied.
By "without using any intermediate declarations" I mean that I do not want to use any temporary collections en route to the final result. So, the example by Steve Waldman below, while giving the desired result, is not what I am seeking. (I include this condition is mostly for aesthetic reasons.)
Edit for clarification, part 2:
The ideal solution would be some magic keyword or syntactic sugar that lets me reference the current collection:
List(1, 2, 3)
  .flatMap(x => f(x))
  .map(x => x + theCurrentList.max)
I'm prepared to accept the fact, however, that this simply is not possible.
Maybe just define the list as a val, so you can name it? I don't know of any facility built into map(...) or flatMap(...) that would help.
val myList = List(1, 2, 3)

myList
  .flatMap(x => f(x) /* some unknown function */)
  .map(x => x + myList.max /* what is the max element of the List? */)
Update: By this approach at least, if you have multiple transformations and want to see the transformed version, you'd have to name that. You could get away with
val myList = List(1, 2, 3).flatMap(x => f(x) /* some unknown function */)
myList.map(x => x + myList.max /* what is the max element of the List? */)
Or, if there will be multiple transformations, get in the habit of naming the stages.
val rawList = List(1, 2, 3)
val smordified = rawList.flatMap(x => f(x) /* some unknown function */)
val maxified = smordified.map(x => x + smordified.max /* what is the max element of the List? */)
maxified
Update 2: Watch it work in the REPL even with heterogeneous types:
scala> def f( x : Int ) : Vector[Double] = Vector(x * math.random, x * math.random )
f: (x: Int)Vector[Double]
scala> val rawList = List(1, 2, 3)
rawList: List[Int] = List(1, 2, 3)
scala> val smordified = rawList.flatMap(x => f(x) /* some unknown function */)
smordified: List[Double] = List(0.40730853571901315, 0.15151641399798665, 1.5305929709857609, 0.35211231420067435, 0.644241939254793, 0.15530230501048903)
scala> val maxified = smordified.map(x => x + smordified.max /* what is the max element of the List? */)
maxified: List[Double] = List(1.937901506704774, 1.6821093849837476, 3.0611859419715217, 1.8827052851864352, 2.1748349102405538, 1.6858952759962498)
scala> maxified
res3: List[Double] = List(1.937901506704774, 1.6821093849837476, 3.0611859419715217, 1.8827052851864352, 2.1748349102405538, 1.6858952759962498)
It is possible, but not pretty, and not likely something you want if you are doing it for "aesthetic reasons."
import scala.math.max
def f(x: Int): Seq[Int] = ???
List(1, 2, 3).
  flatMap(x => f(x) /* some unknown function */).
  foldRight((List[Int](), List[Int]())) {
    case (x, (xs, Nil))       => ((x :: xs), List.fill(xs.size + 1)(x))
    case (x, (xs, xMax :: _)) => ((x :: xs), List.fill(xs.size + 1)(max(x, xMax)))
  }.
  zipped.
  map {
    case (x, xMax) => x + xMax
  }

// Or alternately, a slightly more efficient version using Streams.
List(1, 2, 3).
  flatMap(x => f(x) /* some unknown function */).
  foldRight((List[Int](), Stream[Int]())) {
    case (x, (xs, Stream())) =>
      ((x :: xs), Stream.continually(x))
    case (x, (xs, curXMax #:: _)) =>
      val newXMax = max(x, curXMax)
      ((x :: xs), Stream.continually(newXMax))
  }.
  zipped.
  map {
    case (x, xMax) => x + xMax
  }
Seriously though, I just took this on to see if I could do it. While the code didn't turn out as bad as I expected, I still don't think it's particularly readable. I'd discourage using this over something similar to Steve Waldman's answer. Sometimes, it's simply better to just introduce a val, rather than being dogmatic about it.
You could define a mapWithSelf (resp. flatMapWithSelf) operation along these lines and add it as an implicit enrichment to the collection. For List it might look like:
// Scala 2.13 APIs
object Enrichments {
  implicit class WithSelfOps[A](val lst: List[A]) extends AnyVal {
    def mapWithSelf[B](f: (A, List[A]) => B): List[B] =
      lst.map(f(_, lst))

    def flatMapWithSelf[B](f: (A, List[A]) => IterableOnce[B]): List[B] =
      lst.flatMap(f(_, lst))
  }
}
The enrichment basically fixes the value of the collection before the operation and threads it through. It should be possible to generify this (at least for the strict collections), though it would look a little different in 2.12 vs. 2.13+.
Usage would look like
import Enrichments._

val someF: Int => IterableOnce[Int] = ???

List(1, 2, 3)
  .flatMap(someF)
  .mapWithSelf { (x, lst) =>
    x + lst.max
  }
So at the usage site, it's aesthetically pleasant. Note that if you're computing something which traverses the list, you'll be traversing the list every time (leading to a quadratic runtime). You can get around that with some mutability or by just saving the intermediate list after the flatMap.
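For instance, here is a sketch of that "saving the intermediate value" idea folded into the enrichment itself (my variation, not part of the answer above): the expensive value is computed once and then threaded through.

object EnrichmentsOnce {
  implicit class WithValueOps[A](val lst: List[A]) extends AnyVal {
    // Compute `compute(lst)` a single time, then map with the cached result.
    def mapWithValue[B, C](compute: List[A] => C)(f: (A, C) => B): List[B] = {
      val cached = compute(lst) // one traversal instead of one per element
      lst.map(f(_, cached))
    }
  }
}

import EnrichmentsOnce._
List(1, 2, 3).mapWithValue(_.max)((x, mx) => x + mx) // List(4, 5, 6)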
One somewhat-simple way of referencing prior output within the current map/collect operation is to use a named reference outside the map, then reference it from within the map block:
var prevOutput = ... // starting value of whatever is referenced within the map

myValues.map { x =>
  prevOutput = ... // expression that references the prior `prevOutput` (and `x`)
  prevOutput       // return the computed value for the map to collect
}
This draws attention to the fact that we're referencing prior elements while building the new sequence.
This would be more messy, though, if you wanted to reference arbitrarily previous values, not just the previous one.
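A concrete, runnable instance of the pattern (values are illustrative): a running maximum, where each output element depends on the previous output.

var prevMax = Int.MinValue
val runningMax = List(3, 1, 4, 1, 5).map { x =>
  prevMax = math.max(prevMax, x) // references the prior output
  prevMax
}
// runningMax == List(3, 3, 4, 4, 5)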
Whilst I understand what a partially applied/curried function is, I still don't fully understand why I would use such a function vs simply overloading a function. I.e. given:
def add(a: Int, b: Int): Int = a + b
val addV = (a: Int, b: Int) => a + b
What is the practical difference between
def addOne(b: Int): Int = add(1, b)
and
def addOnePA = add(1, _:Int)
// or currying
val addOneC = addV.curried(1)
Please note I am NOT asking about currying vs partially applied functions as this has been asked before and I have read the answers. I am asking about currying/partially applied functions VS overloaded functions
The difference in your example is that the overloaded function has the value 1 for the first argument to add hardcoded, i.e. set at compile time, while partially applied or curried functions are meant to capture their arguments dynamically, i.e. at run time. Otherwise, in your particular example, because you are hardcoding 1 in both cases, it's pretty much the same thing.
You would use a partially applied/curried function when you pass it through different contexts and it captures/fills in arguments dynamically until it's complete and ready to be evaluated. In FP this is important because much of the time you don't pass values around, but rather functions. It allows for higher composability and code reusability.
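For example, here is a small sketch (names are illustrative) of capturing an argument that is only known at run time, which no overload written at compile time could cover:

def add(a: Int, b: Int): Int = a + b

// The increment is a runtime value, so we partially apply rather than overload.
def addN(n: Int): Int => Int = add(n, _: Int)

val fromConfig = 42                // imagine this value was read from a config file
val addConfigured = addN(fromConfig)

List(1, 2, 3).map(addConfigured)   // List(43, 44, 45)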
There are a couple of reasons why you might prefer partially applied functions. The most obvious, and perhaps superficial, one is that you don't have to write out intermediate functions such as addOnePA.
List(1, 2, 3, 4) map (_ + 3) // List(4, 5, 6, 7)
is nicer than
def add3(x: Int): Int = x + 3
List(1, 2, 3, 4) map add3
Even the anonymous function approach (which the compiler expands the underscore into) feels a tiny bit clunky in comparison.
List(1, 2, 3, 4) map (x => x + 3)
Less superficially, partial application comes in handy when you're truly passing around functions as first-class values.
val fs = List[(Int, Int) => Int](_ + _, _ * _, _ / _)
val on3 = fs map (f => f(_, 3)) // partial application
val allTogether = on3.foldLeft{identity[Int] _}{_ compose _}
allTogether(6) // (6 / 3) * 3 + 3 = 9
Imagine if I hadn't told you what the functions in fs were. The trick of coming up with named function equivalents instead of partial application becomes harder to use.
As for currying, curried functions often let you naturally express transformations of functions that produce other functions (rather than a higher-order function that simply produces a non-function value at the end), which might otherwise be less clear.
For example,
def integrate(f: Double => Double, delta: Double = 0.01)(x: Double): Double = {
  val domain = Range.Double(0.0, x, delta)
  domain.foldLeft(0.0) { case (acc, a) => delta * f(a) + acc }
}
can be thought of and used in the way that you actually learned integration in calculus, namely as a transformation of a function that produces another function.
def square(x: Double): Double = x * x
// Ignoring issues of numerical stability for the moment...
// The underscore is really just a wart that Scala requires to bind it to a val
val cubic = integrate(square) _
val quartic = integrate(cubic) _
val quintic = integrate(quartic) _
// Not *utterly* horrible for a two line numerical integration function
cubic(1) // 0.32835000000000014
quartic(1) // 0.0800415
quintic(1) // 0.015449626499999999
Currying also alleviates a few of the problems around fixed function arity.
implicit class LiftedApply[A, B](fOpt: Option[A => B]) {
  def ap(xOpt: Option[A]): Option[B] = for {
    f <- fOpt
    x <- xOpt
  } yield f(x)
}

def not(x: Boolean): Boolean = !x
def and(x: Boolean)(y: Boolean): Boolean = x && y
def and3(x: Boolean)(y: Boolean)(z: Boolean): Boolean = x && y && z

Some(not _) ap Some(false)                             // Some(true)
Some(and _) ap Some(true) ap Some(true)                // Some(true)
Some(and3 _) ap Some(true) ap Some(true) ap Some(true) // Some(true)
By having curried functions, we've been able to "lift" a function to work on Option for as many arguments as we need. If our logic functions had not been curried, then we would have had to have separate functions to lift A => B to Option[A] => Option[B], (A, B) => C to (Option[A], Option[B]) => Option[C], (A, B, C) => D to (Option[A], Option[B], Option[C]) => Option[D] and so on for all the arities we cared about.
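For contrast, a sketch (my illustration, not part of the original answer) of what one of those per-arity lifters looks like without currying:

// Lifting a two-argument function "by hand": one such helper per arity.
def lift2[A, B, C](f: (A, B) => C): (Option[A], Option[B]) => Option[C] =
  (oa, ob) => for { a <- oa; b <- ob } yield f(a, b)

def andUncurried(x: Boolean, y: Boolean): Boolean = x && y

lift2(andUncurried)(Some(true), Some(true)) // Some(true)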
Currying also has some other miscellaneous benefits when it comes to type inference and is required if you have both implicit and non-implicit arguments for a method.
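As a quick illustration of the implicit-argument point (a sketch, not from the original answer): the implicit parameter has to live in its own, curried parameter list.

def maxOf[A](xs: List[A])(implicit ord: Ordering[A]): A = xs.max(ord)

maxOf(List(3, 1, 2))                   // 3, Ordering[Int] found implicitly
maxOf(List("b", "a"))(Ordering.String) // "b", Ordering supplied explicitly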
Finally, the answers to this question list out some more times you might want currying.
For example, how do you merge two Streams of sorted Integers? I thought this was very basic, but I just found that it's not trivial at all. The version below is not tail-recursive, and it will stack-overflow when the Streams are large.
def merge(as: Stream[Int], bs: Stream[Int]): Stream[Int] = {
  (as, bs) match {
    case (Stream.Empty, bss) => bss
    case (ass, Stream.Empty) => ass
    case (a #:: ass, b #:: bss) =>
      if (a < b) a #:: merge(ass, bs)
      else b #:: merge(as, bss)
  }
}
We may want to turn it into a tail-recursive version by introducing an accumulator. However, if we prepend to the accumulator, we only get a stream in reversed order; and if we append to the accumulator with concatenation (#:::), it's no longer lazy (it becomes strict).
What could be the solution here? Thanks
Turning a comment into an answer, there's nothing wrong with your merge.
It's not recursive at all: any one call to merge returns a new Stream without any further call to merge. a #:: merge(ass, bs) returns a stream with first element a, where merge(ass, bs) will only be called to evaluate the rest of the stream when it is required.
So
val m = merge(Stream.from(1,2), Stream.from(2, 2))
//> m : Stream[Int] = Stream(1, ?)
m.drop(10000000).take(1)
//> res0: scala.collection.immutable.Stream[Int] = Stream(10000001, ?)
works just fine. No stack overflow.