Implement while(true) in cats [closed] - scala

How do you implement the following loops in cats?
First (normal while(true) loop):
while(true) { doSomething() }
Second (while(true) loop with increment):
var i = 1
while(true) { i +=1; doSomething() }
Third (while(true) with several independent variables inside):
var x = 1
var y = 2
while(true) {
x = someCalculation()
y = otherCalculation()
doSomething()
}

I think your question is somewhat ill-posed, but it's ill-posed in an interesting way, so maybe I should try to explain what I mean by that.
Asking "how do I implement
var i = 1
while(true) { i +=1; doSomething() }
in Cats" is answered trivially: in exactly the same way as you would implement it in ordinary Scala without using any libraries. In this particular case, Cats will not enable you to implement anything that behaves drastically different at runtime. What it does enable you to do is to express more precisely what (side) effects each piece of your code has, and to encode it as type level information, which can be verified at compile time during static type checking.
So, the question should not be
"How do I do X in Cats?"
but rather
"How do I prove / make explicit that my code does (or does not) have certain side effects using Cats?".
The while loop in your example simply performs some side effects in doSomething(), and it messes with the mutable state of the variable i whenever it wants to, without ever making this explicit in the types of the constituent subexpressions.
Now you might take something like cats.effect.IO, and at least wrap the body of your doSomething in an IO, thereby making it explicit that it performs input/output operations (here: printing to StdOut):
// Your side-effectful `doSomething` function in IO
import cats.effect.IO

def doSomething: IO[Unit] = IO { println("do something") }
Now you might ask how to write down the loop in such a way that it also becomes obvious that it performs such IO-operations. You could do something like this:
// Literally `while(true) { ... }`
import cats.Monad

def whileTrue_1: IO[Unit] =
  Monad[IO].whileM_(IO(true)) { doSomething }
// Shortcut using `foreverM` syntax
import cats.syntax.flatMap._
def whileTrue_2: IO[Nothing] = doSomething.foreverM
// Use `>>` from `syntax.flatMap._`
def whileTrue_3: IO[Unit] = doSomething >> whileTrue_3
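None of these values does anything on its own; like any IO, they merely describe the loop until something at the edge of the program interprets them. A minimal sketch of actually running one of them, assuming cats-effect 2.x and its IOApp:
import cats.effect.{ExitCode, IO, IOApp}
import cats.syntax.flatMap._

object Main extends IOApp {
  def doSomething: IO[Unit] = IO(println("do something"))

  // same shape as `whileTrue_2` above
  def loopForever: IO[Nothing] = doSomething.foreverM

  // interpreting the IO value is what actually starts the loop;
  // this program never reaches ExitCode.Success
  def run(args: List[String]): IO[ExitCode] =
    loopForever.map(_ => ExitCode.Success)
}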
Now, if you want to throw the mutable variable i into the mix, you could treat writing to and reading from mutable memory as yet another IO operation:
// Treat access to mutable variable `i` as
// yet another IO side effect, do something
// with value of `i` again and again.
def whileInc_1: IO[Unit] = {
  var i = 1
  def doSomethingWithI: IO[Unit] = IO {
    println(s"doing sth. with $i")
  }
  Monad[IO].whileM_(IO(true)) {
    for {
      _ <- IO { i += 1 }
      _ <- doSomethingWithI
    } yield ()
  }
}
Alternatively, you might decide that tracking all accesses / changes of the state of i is important enough that you want to make it explicit, for example using the StateT monad transformer that keeps track of the state of type Int:
// Explicitly track variable `i` in a state monad
import cats.data.StateT
import cats.data.StateT._
def whileInc_2: IO[Unit] = {
  // Let's make `doSthWithI` not too boring,
  // so that it at least accesses the state
  // with variable `i`
  def doSthWithI: StateT[IO, Int, Unit] =
    for {
      i <- get[IO, Int]
      _ <- liftF(IO { println(i) })
    } yield ()

  // Define the loop
  val loop = Monad[StateT[IO, Int, ?]].whileM_(
    StateT.pure(true)
  ) {
    for {
      i <- get[IO, Int]
      _ <- set[IO, Int](i + 1)
      _ <- doSthWithI
    } yield ()
  }

  // The `_._2` is there only to make the
  // types match up, it's never actually used,
  // because the loop runs forever.
  loop.run(1).map(_._2)
}
It would function similarly with two variables x and y (just use (Int, Int) instead of Int as the state).
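For instance, a hedged sketch of your third loop, with the pair (x, y) as the state (this assumes the same imports and kind-projector ? syntax as above, and treats someCalculation(), otherCalculation() and doSomething() as the impure functions from your question):
// someCalculation(), otherCalculation() and doSomething() are the
// (impure) functions from the question; both variables live in a
// single (Int, Int) state.
def whileXY: IO[Unit] = {
  val loop = Monad[StateT[IO, (Int, Int), ?]].whileM_(
    StateT.pure(true)
  ) {
    for {
      xy <- StateT.liftF[IO, (Int, Int), (Int, Int)](
              IO((someCalculation(), otherCalculation())))
      _  <- StateT.set[IO, (Int, Int)](xy)
      _  <- StateT.liftF[IO, (Int, Int), Unit](IO(doSomething()))
    } yield ()
  }
  // start from the initial state (1, 2); `_._2` again only satisfies the types
  loop.run((1, 2)).map(_._2)
}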
Admittedly, this code seems a bit more verbose, and especially the last example begins to look like enterprise-edition fizz-buzz, but the point is that if you consistently apply these techniques to your code base, you don't have to dig into the body of a function to get a fairly good idea of what it can (or cannot) do, based on its signature alone. This in turn is advantageous when you are trying to understand the code (you can understand what it does by skimming over signatures, without reading the code in the bodies), and it also forces you to write simpler code that is easier to test (at least that's the idea).

Functional alternative for while(true) loop in your case would be just recursive function:
import scala.annotation.tailrec

@tailrec
def loop(x: Int, y: Int): Unit = { // every call of loop receives the updated state
  doSomething()
  loop(someCalculation(), otherCalculation()) // next iteration with the updated state
}

loop(1, 2) // start the "loop" with the initial state
The very important thing here is the @tailrec annotation. It checks that the recursive call to loop is in tail position, so that tail-call optimization can be applied. If that were not the case, your "loop" would eventually throw a StackOverflowError.
An interesting fact is that the bytecode of the optimized function looks very similar to that of a while loop.
This approach is not really specific to Cats, but to functional programming in general; recursion is used very commonly in FP.
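If you do want to tie this back to Cats, the same recursive shape can be written against cats-effect's IO; a hedged sketch, with the question's functions wrapped in hypothetical IO stubs. IO's flatMap is trampolined, so no @tailrec is needed there:
import cats.effect.IO

// hypothetical IO wrappers around the question's side-effecting functions
def doSomethingIO: IO[Unit] = IO(println("do something"))
def calcX: IO[Int] = IO(42)
def calcY: IO[Int] = IO(7)

// IO's flatMap is stack-safe (trampolined), so this infinite recursion
// will not overflow the stack even without tail-call optimization
def loopIO(x: Int, y: Int): IO[Nothing] =
  for {
    _  <- doSomethingIO   // use the current state x, y here if needed
    nx <- calcX
    ny <- calcY
    r  <- loopIO(nx, ny)  // next iteration with the updated state
  } yield r

// loopIO(1, 2) is only a description; running it starts the loop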

Related

How does the cats-effect IO monad really work?

I'm new to functional programming and Scala, and I was checking out the Cats Effect framework and trying to understand what the IO monad does. So far, what I've understood is that code inside an IO block is just a description of what needs to be done: nothing happens until you explicitly run it using the unsafe methods provided, and this is also how code that performs side effects is made referentially transparent, by not actually running it.
I tried executing the snippet below just to try to understand what it means:
object Playground extends App {
  var out = 10
  var state = "paused"

  def changeState(newState: String): IO[Unit] = {
    state = newState
    IO(println("Updated state."))
  }

  def x(string: String): IO[Unit] = {
    out += 1
    IO(println(string))
  }

  val tuple1 = (x("one"), x("two"))

  for {
    _ <- x("1")
    _ <- changeState("playing")
  } yield ()

  println(out)
  println(state)
}
And the output was:
13
paused
I don't understand why the assignment state = newState does not run, but the increment-and-assign expression out += 1 does. Am I missing something obvious about how this is supposed to work? I could really use some help. I understand that I can get this to run using the unsafe methods.
In your particular example, I think what is going on is that regular imperative Scala code is unaffected by the IO monad: it runs when it normally would under the rules of Scala.
When you run:
for {
_ <- x("1")
_ <- changeState("playing")
} yield ()
this immediately calls x. That has nothing to do with the IO monad; it's just how for comprehensions are defined. The first step is to evaluate the first statement so you can call flatMap on it.
As you observe, you never "run" the monadic result, so the argument to flatMap, the monadic continuation, is never invoked, resulting in no call to changeState. This is specific to the IO monad, as, e.g., the List monad's flatMap would have immediately invoked the function (unless it were an empty list).
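To make that concrete, here is roughly what the for comprehension desugars to (a sketch using the names from the snippet above):
// Rough desugaring of the for comprehension:
x("1")                      // called right away: `out += 1` runs, an IO(println("1")) value is built
  .flatMap { _ =>           // IO's flatMap only stores this function...
    changeState("playing")  // ...so `state = newState` never executes
      .map(_ => ())
  }
// The resulting IO[Unit] is then discarded without unsafeRunSync(),
// so even the println effects described inside it never run.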

Reentrant locks within monads in Scala

A colleague of mine stated the following, about using a Java ReentrantReadWriteLock in some Scala code:
Acquiring the lock here is risky. It's "reentrant", but that internally depends on the thread context. F may run different stages of the same computation in different threads. You can easily cause a deadlock.
F here refers to some effectful monad.
Basically what I'm trying to do is to acquire the same reentrant lock twice, within the same monad.
Could somebody clarify why this could be a problem?
The code is split into two files. The outermost one:
val lock: Resource[F, Unit] = for {
  // some other resource
  _ <- store.writeLock
} yield ()

lock.use { _ =>
  for {
    // stuff
    _ <- EitherT(store.doSomething())
    // other stuff
  } yield ()
}
Then, in the store:
import java.util.concurrent.locks.{Lock, ReentrantReadWriteLock}
import cats.effect.{Resource, Sync}
private def lockAsResource[F[_]](lock: Lock)(implicit F: Sync[F]): Resource[F, Unit] =
Resource.make {
F.delay(lock.lock())
} { _ =>
F.delay(lock.unlock())
}
private val lock = new ReentrantReadWriteLock
val writeLock: Resource[F, Unit] = lockAsResource(lock.writeLock())
def doSomething(): F[Either[Throwable, Unit]] = writeLock.use { _ =>
// etc etc
}
The writeLock in the two pieces of code is the same, and it's a cats.effect.Resource[F, Unit] wrapping a ReentrantReadWriteLock's writeLock. There are some reasons why I was writing the code this way, so I wouldn't want to dig into that. I would just like to understand why (according to my colleague, at least), this could potentially break stuff.
Also, I'd like to know if there is some alternative in Scala that would allow something like this without the risk for deadlocks.
IIUC your question:
You expect that, for each interaction with the Resource, the lock.lock and lock.unlock actions happen on the same thread.
1) There is no such guarantee at all, since you are using an arbitrary effect F here.
It is possible to write an implementation of F that executes every action on a new thread.
2) Even if we assume that F is IO, someone could call IO.shift in the body of doSomething, so the subsequent actions, including unlock, would happen on another thread. That is probably not possible with the current signature of doSomething, but you get the idea.
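To make point 2 concrete, a sketch of the failure mode (assuming cats-effect 2.x, where IO.shift can move the rest of a computation to another thread):
import java.util.concurrent.locks.ReentrantReadWriteLock
import cats.effect.{ContextShift, IO}
import scala.concurrent.ExecutionContext

implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

val rwLock  = new ReentrantReadWriteLock
val acquire = IO(rwLock.writeLock().lock())
val release = IO(rwLock.writeLock().unlock())

val program: IO[Unit] =
  for {
    _ <- acquire  // write lock taken on thread A
    _ <- IO.shift // the continuation may resume on a different thread B
    _ <- acquire  // B is not the owner, so the "reentrant" acquisition blocks forever
    _ <- release  // never reached; unlocking from a non-owner thread would
    _ <- release  // throw IllegalMonitorStateException anyway
  } yield ()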
Also, I'd like to know if there is some alternative in Scala that would allow something like this without the risk for deadlocks.
You can take a look at scalaz zio STM.

Wrapping a class with side-effects

After reading chapter six of Functional Programming in Scala and trying to understand the State monad, I have a question about wrapping a side-effecting class.
Say I have a class that modifies itself in some way.
class SideEffect(x:Int) {
var value = x
def modifyValue(newValue:Int):Unit = { value = newValue }
}
My understanding is that if we wrapped this in a State monad as below, it would still modify the original making the point of wrapping it moot.
case class State[S,+A](run: S => (A, S)) { // See footnote
// map, flatmap, unit, utility functions
}
val sideEffect = new SideEffect(20)
println(sideEffect.value) // Prints "20"
val stateMonad = State[SideEffect,Int](state => {
state.modifyValue(10)
(state.value,state)
})
stateMonad.run(sideEffect) // run the modification
println(sideEffect.value) // Prints "10" i.e. we have modified the original state
The only solution to this that I can see is to make a copy of the class and modify that, but that seems computationally expensive as SideEffect grows. Plus if we wanted to wrap something like a Java class that doesn't implement Cloneable, we'd be out of luck.
val stateMonad = State[SideEffect,Int](state => {
  val newState = new SideEffect(state.value) // Easier if it were a case class, but hypothetically, if one were working with a Java library, one would not have this luxury
  newState.modifyValue(10)
  (newState.value, newState)
})
stateMonad.run(sideEffect) // run the modification
println(sideEffect.value) // Prints "20", original state not modified
Am I using the State monad wrong? How would one go about wrapping a side-effecting class without having to copy it or is this the only way?
The implementation for the State monad I'm using here is from the book, and might differ from Scalaz implementation
You can't do much with a mutable object except hide the mutation inside some wrapper; that way, the part of the program that needs extra attention in testing becomes much smaller. Your first sample is good enough, with one caveat: it would be better not to expose the outer reference at all. Instead of stateMonad.run(sideEffect), use something like stateMonad.run(new SideEffect(20)) or
def initState: SideEffect = new SideEffect(20)
val (value, state) = stateMonad.run(initState)
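If you control the type, a cleaner option is to make the state itself an immutable value, so there is no mutation for the State monad to leak in the first place. A sketch using the book's State from above, with a made-up PureState standing in for SideEffect:
// An immutable stand-in for SideEffect: "modification" produces a new value
case class PureState(value: Int)

val pureModify = State[PureState, Int] { s =>
  val next = s.copy(value = 10)
  (next.value, next)
}

val (result, finalState) = pureModify.run(PureState(20))
// result == 10, finalState.value == 10, and the original PureState(20) is untouched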

Does Scala have an equivalent to golang's defer?

Does Scala have an equivalent to golang's defer?
from:
http://golang.org/doc/effective_go.html#defer
Go's defer statement schedules a function call (the deferred function) to be run immediately before the function executing the defer returns. It's an unusual but effective way to deal with situations such as resources that must be released regardless of which path a function takes to return. The canonical examples are unlocking a mutex or closing a file.
Scala does not offer defer by design; however, you can create it yourself by wrapping your function in another function, passing in an object which keeps track of the functions to call.
Example:
class DeferTracker() {
  class LazyVal[A](val value: () => A)

  private var l = List[LazyVal[Any]]()
  def apply(f: => Any) = { l = new LazyVal(() => f) :: l }
  def makeCalls() = l.foreach { x => x.value() }
}

def Deferrable[A](context: DeferTracker => A) = {
  val dt = new DeferTracker()
  val res = context(dt)
  dt.makeCalls()
  res
}
In this example, Deferrable would be the wrapping function which calls context and returns its result, giving it an object which tracks the defer calls.
You can use this construct like this:
def dtest(x:Int) = println("dtest: " + x)
def someFunction(x:Int):Int = Deferrable { defer =>
defer(dtest(x))
println("before return")
defer(dtest(2*x))
x * 3
}
println(someFunction(3))
The output would be:
before return
dtest: 6
dtest: 3
9
I'm aware that this could be solved differently; it is really just an example showing that Scala supports the concept of defer without too much fuss.
I can't think of a Scala specific way, but wouldn't this be equivalent (though not as pretty):
try {
// Do stuff
} finally {
// "defer"
}
No. Go has this construct precisely because it doesn't support exceptions and has no try...finally syntax.
Personally, I think it invites a maintenance nightmare; calls to defer can be buried anywhere in a function. Even where responsible coders put the defers right beside the thing to be cleaned up, I think it's less clear than a finally block and as for what it lets the messy coders do... at least finally blocks put all the clean-up in one place.
defer is the opposite of idiomatic Scala. Scala offers monadic ways to control program flow, and defer magic does not contribute anything to them. Monads offer a functional improvement over try...finally, which lets you
Define your own error handling flow
Manipulate defined flows functionally
Make a function's anticipated errors part of its signature
defer has no place in this.
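For what it's worth, one concrete shape of that monadic improvement is cats-effect's Resource (used elsewhere on this page): its release action runs no matter how the use block exits, which covers the canonical defer use cases such as closing a file. A hedged sketch, with a hypothetical file path:
import java.io.{BufferedReader, FileReader}
import cats.effect.{IO, Resource}

// The release action runs on success, failure or cancellation,
// much like a `defer`red close in Go.
def reader(path: String): Resource[IO, BufferedReader] =
  Resource.make(IO(new BufferedReader(new FileReader(path)))) { r =>
    IO(r.close())
  }

// "example.txt" is a hypothetical path
val firstLine: IO[String] = reader("example.txt").use(r => IO(r.readLine()))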

Should I use returns in multiline Scala methods?

Perhaps this is just my background in more imperative programming, but I like having return statements in my code.
I understand that in Scala, returns are not necessary in many methods, because the last computed value is returned by default. I understand that this makes perfect sense for a "one-liner", e.g.
def square(x) = x * x
I also understand the definitive case for using explicit returns (when you have several branches your code could take, and you want to break out of the method for different branches, e.g. if an error occurs). But what about multiline functions? Wouldn't it be more readable and make more sense if there was an explicit return, e.g.
def average(x: List[Int]): Float = {
  var sum = 0
  x.foreach(sum += _)
  return sum / x.length.toFloat
}

as opposed to the version without an explicit return:

def average(x: List[Int]): Float =
  x.foldLeft(0)(_ + _) / x.length.toFloat
UPDATE: While I was aiming to show how iterative code can be made into a functional expression, @soc rightly commented that an even shorter version is x.sum / x.length.toFloat
I usually find that in Scala I have less need to "return in the middle". Furthermore, a large function can be broken into smaller expressions that are easier to reason about. So instead of
var x = ...
if (some condition) x = ...
else x = ...
I would write
val x =
  if (some condition) ...
  else ...
Something similar happens with match expressions. And you can always have nested helper classes.
Once you are comfortable with many forms of expressions that evaluate to a result (e.g., 'if') without a return statement, then having one in your methods looks out of place.
At one place of work we had a rule of not having 'return' in the middle of a method, since it is easy for a reader of your code to miss it. And if 'return' only appears on the last line, what's the point of having it at all?
The return doesn't tell you anything extra, so I find it actually clutters my understanding of what goes on. Of course it returns; there are no other statements!
And it's also a good idea to get away from the habit of using return because it's really unclear where you are returning to when you're using heavily functional code:
def manyFutures = xs.map(x => Futures.future { if (x<5) return Nil else x :: Nil })
Where should the return leave execution? Outside of manyFutures? Just the inner block?
So although you can use explicit returns (at least if you annotate the return type explicitly), I suggest that you try to get used to the last-statement-is-the-return-value custom instead.
Your sense that the version with the return is more readable is probably a sign that you are still thinking imperatively rather than declaratively. Try to think of your function from the perspective of what it is (i.e. how it is defined), rather than what it does (i.e. how it is computed).
Let's look at your average example, and pretend that the standard library doesn't have the features that make an elegant one-liner possible. Now in English, we would say that the average of a list of numbers is the sum of the numbers in the list divided by its length.
def average(xs: List[Int]): Float = {
var sum = 0
xs.foreach(sum += _)
sum / xs.length.toFloat
}
This description is fairly close to the English version, ignoring the first two lines of the function. We can factor out the sum function to get a more declarative description:
def average(xs: List[Int]): Float = {
def sum(xs: List[Int]): Int = // ...
sum(xs) / xs.length.toFloat
}
In the declarative definition, a return statement reads quite unnaturally because it explicitly puts the focus on doing something.
I don't miss the return-statement at the end of a method because it is unnecessary. Furthermore, in Scala each method does return a value - even methods which should not return something. Instead of void there is the type Unit which is returned:
def x { println("hello") }
can be written as:
def x { println("hello"); return Unit }
But println returns already the type Unit - thus when you explicitly want to use a return-statement you have to use it in methods which return Unit, too. Otherwise you do not have continuous identical build-on code.
But in Scala there is the possibility to break your code in a lot of small methods:
def x(...): ReturnType = {
def func1 = ... // use funcX
def func2 = ... // use funcX
def func3 = ... // use funcX
funcX
}
An explicit return statement does not help you understand the code any better.
Also Scala has a powerful core library which allows you to solve many problems with less code:
def average(x: List[Int]): Float = x.sum.toFloat / x.length
I believe you should not use return; I think it somehow goes against the elegant concept that in Scala everything is an expression that evaluates to something.
Like your average method: it's just an expression that evaluates to a Float; there is no need to return anything to make that work.