Functional Alternative to Game Loop - scala

I'm just starting out with Scala and am trying a little toy program - in this case a text-based TicTacToe. I wrote a working version based on what I know about Scala, but noticed it was mostly imperative and my classes were mutable.
I'm going through and trying to implement some functional idioms and have managed to at least make the classes representing the game state immutable. However, I'm left with a class responsible for performing the game loop that relies on mutable state and an imperative loop as follows:
var board: TicTacToeBoard = new TicTacToeBoard
def start() {
var gameState: GameState = new XMovesNext
outputState(gameState)
while (!gameState.isGameFinished) {
val position: Int = getSelectionFromUser
board = board.updated(position, gameState.nextTurn)
gameState = getGameState(board)
outputState(gameState)
}
}
What would be a more idiomatic way to program what I'm doing imperatively in this loop?
Full source code is here https://github.com/whaley/TicTacToe-in-Scala/tree/master/src/main/scala/com/jasonwhaley/tictactoe

imho for Scala, the imperative loop is just fine. You can always write a recursive function to behave like a loop. I also threw in some pattern matching.
def start() {
def loop(board: TicTacToeBoard): Unit = board.state match {
case Finished => ()
case Unfinished(gameState) => {
gameState.output()
val position: Int = getSelectionFromUser()
loop(board.updated(position))
}
}
loop(new TicTacToeBoard)
}
Suppose we had a function whileSome : (a -> Option[a]) -> a -> (), which runs the input function until its result is None. That would strip away a little boilerplate.
def start() {
def step(board: TicTacToeBoard) = {
board.gameState.output()
val position: Int = getSelectionFromUser()
board.updated(position) // returns either Some(nextBoard) or None
}
whileSome(step, new TicTacToeBoard)
}
whileSome should be trivial to write; it is simply an abstraction of the former pattern. I'm not sure if it's in any common Scala libs, but in Haskell you could grab whileJust_ from monad-loops.
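For reference, a minimal sketch of whileSome (written uncurried to match the call above; as far as I know it isn't in the standard library):
// minimal sketch of whileSome: keep feeding the result back in until we get None
@annotation.tailrec
def whileSome[A](f: A => Option[A], a: A): Unit = f(a) match {
  case Some(next) => whileSome(f, next)
  case None       => ()
}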

You could implement it as a recursive method. Here's an unrelated example:
object Guesser extends App {
val MIN = 1
val MAX = 100
readLine("Think of a number between 1 and 100. Press enter when ready")
def guess(max: Int, min: Int) {
val cur = (max + min) / 2
readLine("Is the number "+cur+"? (y/n) ") match {
case "y" => println("I thought so")
case "n" => {
def smallerGreater() {
readLine("Is it smaller or greater? (s/g) ") match {
case "s" => guess(cur - 1, min)
case "g" => guess(max, cur + 1)
case _ => smallerGreater()
}
}
smallerGreater()
}
case _ => {
println("Huh?")
guess(max, min)
}
}
}
guess(MAX, MIN)
}

How about something like:
Stream.continually(processMove).takeWhile(!_.isGameFinished)
where processMove is a function that gets selection from user, updates board and returns new state.
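For concreteness, processMove could look roughly like this, reusing the names from the question; note it still keeps the board and state in vars, the Stream just hides the loop:
// rough sketch only, reusing the question's types and helpers;
// the board and game state still live in vars
var board: TicTacToeBoard = new TicTacToeBoard
var gameState: GameState = new XMovesNext

def processMove(): GameState = {
  val position = getSelectionFromUser
  board = board.updated(position, gameState.nextTurn)
  gameState = getGameState(board)
  gameState
}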

I'd go with the recursive version, but here's a proper implementation of the Stream version:
def start() {
def initialBoard: TicTacToeBoard = new TicTacToeBoard
def initialGameState: GameState = new XMovesNext
def gameIterator = Stream.iterate(initialBoard -> initialGameState) _
def game: Stream[GameState] = {
val (moves, end) = gameIterator {
case (board, gameState) =>
val position: Int = getSelectionFromUser
val updatedBoard = board.updated(position, gameState.nextTurn)
(updatedBoard, getGameState(updatedBoard))
}.span { case (_, gameState) => !gameState.isGameFinished }
(moves ++ end.take(1)) map { case (_, gameState) => gameState }
}
game foreach outputState
}
This looks weirder than it should. Ideally, I'd use takeWhile, and then map it afterwards, but it won't work as the last case would be left out!
If the moves of the game could be discarded, then dropWhile followed by head would work. If I had the side effect (outputState) inside the Stream, I could go that route, but having a side effect inside a Stream is far worse than a var with a while loop.
So, instead, I use span, which gives me both takeWhile and dropWhile but forces me to keep the intermediate results -- which can be really bad if memory is a concern, as the whole game will be kept in memory because moves points to the head of the Stream. So I had to encapsulate all of that inside another method, game. That way, when I foreach through the results of game, there won't be anything pointing to the Stream's head.
Another alternative would be to get rid of the other side effect you have: getSelectionFromUser. You can get rid of that with an Iteratee, and then you can save the last move and reapply it.
OR... you could write yourself a takeTo method and use that.
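A takeTo along those lines could be a small recursive sketch: like takeWhile, but it also keeps the first element that fails the predicate (the finished state), so the span/intermediate-val dance above becomes unnecessary:
// sketch of takeTo for Stream: takeWhile plus the first element that fails the predicate
def takeTo[A](s: Stream[A])(p: A => Boolean): Stream[A] = s match {
  case h #:: t if p(h) => h #:: takeTo(t)(p)
  case h #:: _         => Stream(h)
  case _               => Stream.empty
}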

Related

Functional Breadth First Search in Scala with the State Monad

I'm trying to implement a functional Breadth First Search in Scala to compute the distances between a given node and all the other nodes in an unweighted graph. I've used a State monad for this, with the signature:
case class State[S,A](run:S => (A,S))
Other functions such as map, flatMap, sequence, modify etc etc are similar to what you'd find inside a standard State Monad.
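For reference, a rough sketch of what those helpers look like (my actual definitions may differ slightly, but this is the shape the code below assumes):
case class State[S, A](run: S => (A, S)) {
  def map[B](f: A => B): State[S, B] =
    State { s => val (a, s2) = run(s); (f(a), s2) }
  def flatMap[B](f: A => State[S, B]): State[S, B] =
    State { s => val (a, s2) = run(s); f(a).run(s2) }
}

def get[S]: State[S, S] = State(s => (s, s))
def modify[S](f: S => S): State[S, Unit] = State(s => ((), f(s)))
def sequence[S, A](xs: List[State[S, A]]): State[S, List[A]] =
  xs.foldRight(State[S, List[A]](s => (Nil, s)))((sa, acc) => sa.flatMap(a => acc.map(a :: _)))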
Here's the code:
case class Node(label: Int)
case class BfsState(q: Queue[Node], nodesList: List[Node], discovered: Set[Node], distanceFromSrc: Map[Node, Int]) {
val isTerminated = q.isEmpty
}
case class Graph(adjList: Map[Node, List[Node]]) {
def bfs(src: Node): (List[Node], Map[Node, Int]) = {
val initialBfsState = BfsState(Queue(src), List(src), Set(src), Map(src -> 0))
val output = bfsComp(initialBfsState)
(output.nodesList,output.distanceFromSrc)
}
@tailrec
private def bfsComp(currState:BfsState): BfsState = {
if (currState.isTerminated) currState
else bfsComp(searchNode.run(currState)._2)
}
private def searchNode: State[BfsState, Unit] = for {
node <- State[BfsState, Node](s => {
val (n, newQ) = s.q.dequeue
(n, s.copy(q = newQ))
})
s <- get
_ <- sequence(adjList(node).filter(!s.discovered(_)).map(n => {
modify[BfsState](s => {
s.copy(s.q.enqueue(n), n :: s.nodesList, s.discovered + n, s.distanceFromSrc + (n -> (s.distanceFromSrc(node) + 1)))
})
}))
} yield ()
}
Please can you advise on:
Should the State Transition on dequeue in the searchNode function be a member of BfsState itself?
How do I make this code more performant/concise/readable?
First off, I suggest moving all the private defs related to bfs into bfs itself. This is the convention for methods that are solely used to implement another.
Second, I suggest simply not using State for this matter. State (like most monads) is about composition. It is useful when you have many things that all need access to the same global state. In this case, BfsState is specialized to bfs, will likely never be used anywhere else (it might be a good idea to move the class into bfs too), and the State itself is always run, so the outer world never sees it. (In many cases, this is fine, but here the scope is too small for State to be useful.) It'd be much cleaner to pull the logic of searchNode into bfsComp itself.
Third, I don't understand why you need both nodesList and discovered, when you can just call _.toList on discovered once you've done your computation. I've kept it in my reimplementation, though, in case there's more to this code than you've shown.
def bfsComp(old: BfsState): BfsState = {
if(old.q.isEmpty) old // You don't need isTerminated, I think
else {
val (currNode, newQ) = old.q.dequeue
val newState = old.copy(q = newQ)
val expanded = adjList(currNode)
.filterNot(old.discovered) // Set[T] <: T => Boolean, and filterNot means you don't need to write !discovered(_)
.foldLeft(newState) { case (BfsState(q, nodes, discovered, distance), adjNode) =>
BfsState(
q.enqueue(adjNode),
adjNode :: nodes,
discovered + adjNode,
distance + (adjNode -> (distance(currNode) + 1))
)
}
bfsComp(expanded) // recurse until the queue is empty
}
}
def bfs(src: Node): (List[Node], Map[Node, Int]) = {
// I suggest moving BfsState and bfsComp into this method
val output = bfsComp(BfsState(Queue(src), List(src), Set(src), Map(src -> 0)))
(output.nodesList, output.distanceFromSrc)
// Could get rid of nodesList and say output.discovered.toList
}
In the event that you think you do have a good reason for using State here, here are my thoughts.
You use def searchNode. The point of a State is that it is pure and immutable, so it should be a val, or else you reconstruct the same State every use.
You write:
node <- State[BfsState, Node](s => {
val (n, newQ) = s.q.dequeue
(n, s.copy(q = newQ))
})
First off, Scala's syntax was designed so that you don't need to have both a () and {} surrounding an anonymous function:
node <- State[BfsState, Node] { s =>
// ...
}
Second, this doesn't look quite right to me. One benefit of using for-syntax is that the anonymous functions are hidden from you and there is minimal indentation. I'd just write it out
oldState <- get
(node, newQ) = oldState.q.dequeue
newState = oldState.copy(q = newQ)
Footnote: would it make sense to make Node an inner class of Graph? Just a suggestion.

Does scala have a lazy evaluating wrapper?

I want to return a wrapper/holder for a result that I want to compute only once and only if the result is actually used. Something like:
def getAnswer(question: Question): Lazy[Answer] = ???
println(getAnswer(q).value)
This should be pretty easy to implement using lazy val:
import scala.util.Try

class Lazy[T](f: () => T) {
private lazy val _result = Try(f())
def value: T = _result.get
}
But I'm wondering if there's already something like this baked into the standard API.
A quick search pointed at Streams and DelayedLazyVal but neither is quite what I'm looking for.
Streams do memoize the stream elements, but it seems like the first element is computed at construction:
def compute(): Int = { println("computing"); 1 }
val s1 = compute() #:: Stream.empty
// computing is printed here, before doing s1.take(1)
In a similar vein, DelayedLazyVal starts computing upon construction, and even requires an execution context:
val dlv = new DelayedLazyVal(() => 1, { println("started") })
// immediately prints out "started"
There's scalaz.Need which I think you'd be able to use for this.
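For example, something along these lines should work (assuming scalaz is on the classpath; computeAnswer is a hypothetical stand-in for the actual expensive computation):
// sketch with scalaz.Need: the by-name argument is memoized and only
// evaluated on the first call to .value
import scalaz.Need

def getAnswer(question: Question): Need[Answer] =
  Need(computeAnswer(question)) // computeAnswer is a hypothetical stand-in

val answer = getAnswer(q)
println(answer.value) // the computation runs here, at most once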

Are recursive computations with Apache Spark RDD possible?

I'm developing a chess engine using Scala and Apache Spark (and I need to stress that my sanity is not the topic of this question). My problem is that the Negamax algorithm is recursive in its essence, and when I try the naive approach:
class NegaMaxSparc(@transient val sc: SparkContext) extends Serializable {
val movesOrdering = new Ordering[Tuple2[Move, Double]]() {
override def compare(x: (Move, Double), y: (Move, Double)): Int =
Ordering[Double].compare(x._2, y._2)
}
def negaMaxSparkHelper(game: Game, color: PieceColor, depth: Int, previousMovesPar: RDD[Move]): (Move, Double) = {
val board = game.board
if (depth == 0) {
(null, NegaMax.evaluateDefault(game, color))
} else {
val moves = board.possibleMovesForColor(color)
val movesPar = previousMovesPar.context.parallelize(moves)
val moveMappingFunc = (m: Move) => { negaMaxSparkHelper(new Game(board.boardByMakingMove(m), color.oppositeColor, null), color.oppositeColor, depth - 1, movesPar) }
val movesWithScorePar = movesPar.map(moveMappingFunc)
val move = movesWithScorePar.min()(movesOrdering)
(move._1, -move._2)
}
}
def negaMaxSpark(game: Game, color: PieceColor, depth: Int): (Move, Double) = {
if (depth == 0) {
(null, NegaMax.evaluateDefault(game, color))
} else {
val movesPar = sc.parallelize(new Array[Move](0))
negaMaxSparkHelper(game, color, depth, movesPar)
}
}
}
class NegaMaxSparkBot(val maxDepth: Int, sc: SparkContext) extends Bot {
def nextMove(game: Game): Move = {
val nms = new NegaMaxSparc(sc)
nms.negaMaxSpark(game, game.colorToMove, maxDepth)._1
}
}
I get:
org.apache.spark.SparkException: RDD transformations and actions can only be invoked by the driver, not inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.
The question is: can this algorithm be implemented recursively using Spark? If not, then what is the proper Spark-way to solve that problem?
Only the driver can launch computations on RDDs. The reason is that even though RDDs "feel" like regular collections of data, behind the scenes they are still distributed collections, so launching operations on them requires coordinating the execution of tasks on all remote slaves, which Spark hides from us most of the time.
So recursing from the slaves, i.e. launching new distributed tasks dynamically from the slaves, is not possible: only the driver can take care of such coordination.
Here's a possible alternative based on a simplification of your problem (if I understand things correctly). The idea is to successively build instances of Moves, each one representing the full sequence of Move from the initial state.
Each instance of Moves is able to transform itself into a set of Moves, each one corresponding to the same sequence of Move plus one possible next Move.
From there the driver just has to successively flatMap the Moves for as deep as we want, and the resulting RDD[Moves] will execute all operations in parallel for us.
The downside of the approach is that all depth levels are kept synchronized, i.e. we have to compute all moves at level n (i.e. the RDD[Moves] for level n) before going to the next one.
The code below is untested and probably still has flaws, but hopefully it provides an idea of how to approach the problem.
/* one modification to the board */
case class Move(from: String, to: String)
case class PieceColor(color: String)
/* state of the game */
case class Board() {
// TODO
def possibleMovesForColor(color: PieceColor): Seq[Move] =
Move("here", "there") :: Move("there", "over there") :: Move("there", "here") :: Nil
// TODO: compute a new instance of board here, based on current + this move
def update(move: Move): Board = new Board
}
/** Solution, i.e. a sequence of moves*/
case class Moves(moves: Seq[Move], game: Board, color: PieceColor) {
lazy val score = NegaMax.evaluateDefault(game, color)
/** @return all valid next Moves */
def nextPossibleMoves: Seq[Moves] =
game.possibleMovesForColor(color).map { nextMove =>
copy(moves = nextMove +: moves,
game = game.update(nextMove))
}
}
/** Driver code: negaMax: looks for the best next move from a give game state */
def negaMax(sc: SparkContext, game: Board, color: PieceColor, maxDepth: Int): Moves = {
val initialSolution = Moves(Seq.empty[Move], game, color)
val allPlays: RDD[Moves] =
(1 to maxDepth).foldLeft(sc.parallelize(Seq(initialSolution))) {
(rdd, _) => rdd.flatMap(_.nextPossibleMoves)
}
allPlays.reduce { case (m1, m2) => if (m1.score < m2.score) m1 else m2}
}
This is a limitation that makes sense in terms of the implementation, but it can be a pain to work with.
You can try pulling the recursion out to the top level, keeping it just in the "driver" code that creates and operates on RDDs. Something like:
def step(rdd: RDD[Move], limit: Int): RDD[Move] =
if(0 == limit) rdd
else {
val newRdd = rdd.flatMap(...)
step(newRdd, limit - 1)
}
Alternatively, it's always possible to translate recursion into iteration by managing the "stack" explicitly by hand (although it may result in more cumbersome code).
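A minimal sketch of that, with a hypothetical expand function standing in for the elided flatMap body:
import org.apache.spark.rdd.RDD

// iterative equivalent of the recursive step above; `expand` is a hypothetical
// stand-in for whatever the flatMap body should do
def stepIterative(initial: RDD[Move], limit: Int, expand: Move => Seq[Move]): RDD[Move] = {
  var rdd = initial
  var remaining = limit
  while (remaining > 0) {
    rdd = rdd.flatMap(expand)
    remaining -= 1
  }
  rdd
}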

Scala extending while loops to do-until expressions

I'm trying to do an experiment with Scala. I'd like to repeat this (randomized) experiment until the expected result comes out, and then get that result. If I do this with either a while or a do-while loop, I need to write something like this (suppose 'body' represents the experiment and 'cond' indicates whether the result is the expected one):
do {
val result = body
} while(!cond(result))
This does not work, however, since the condition cannot refer to local variables defined in the loop body. We need to modify this control abstraction a little bit, like this:
def repeat[A](body: => A)(cond: A => Boolean): A = {
val result = body
if (cond(result)) result else repeat(body)(cond)
}
It works, but it's not perfect for me since I need to call this method by passing two parameters, e.g.:
val result = repeat(body)(a => ...)
I'm wondering whether there is a more efficient and natural way to do this so that it looks more like a built-in structure:
val result = do { body } until (a => ...)
One excellent solution for a body without a return value is found in this post: How Does One Make Scala Control Abstraction in Repeat Until?, in the last one-liner answer. The body in that answer does not return a value, so until can be a method of a new AnyRef object, but that trick does not apply here, since we want to return an A rather than AnyRef. Is there any way to achieve this? Thanks.
You're mixing programming styles and getting in trouble because of it.
Your loop is only good for heating up your processor unless you do some sort of side effect within it.
do {
val result = bodyThatPrintsOrSomething
} while (!cond(result))
So, if you're going with side-effecting code, just put the condition into a var:
var result: Whatever = _
do {
result = bodyThatPrintsOrSomething
} while (!cond(result))
or the equivalent:
var result = bodyThatPrintsOrSomething
while (!cond(result)) result = bodyThatPrintsOrSomething
Alternatively, if you take a functional approach, you're going to have to return the result of the computation anyway. Then use something like:
Iterator.continually{ bodyThatGivesAResult }.takeWhile(cond)
(there is a known annoyance of Iterator not doing a great job at taking all the good ones plus the first bad one in a list).
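One way around that annoyance is a small helper, sketched here, that keeps the passing elements plus the first failing one (the last element it yields is then the first one that fails the predicate):
// sketch: all elements satisfying p, plus the first one that doesn't
def takeThrough[A](it: Iterator[A])(p: A => Boolean): Iterator[A] = {
  val (good, rest) = it.span(p)
  good ++ rest.take(1)
}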
Or you can use your repeat method, which is tail-recursive. If you don't trust that it is, check the bytecode (with javap -c), add the @annotation.tailrec annotation so the compiler will throw an error if it is not tail-recursive, or write it as a while loop using the var method:
def repeat[A](body: => A)(cond: A => Boolean): A = {
var a = body
while (cond(a)) { a = body }
a
}
With a minor modification you can turn your current approach into a kind of mini fluent API, which results in a syntax that is close to what you want:
class run[A](body: => A) {
def until(cond: A => Boolean): A = {
val result = body
if (cond(result)) result else until(cond)
}
}
object run {
def apply[A](body: => A) = new run(body)
}
Since do is a reserved word, we have to go with run. The result would now look like this:
run {
// body with a result type A
} until (a => ...)
Edit:
I just realized that I almost reinvented what was already proposed in the linked question. One possibility to extend that approach to return a type A instead of Unit would be:
def repeat[A](body: => A) = new {
def until(condition: A => Boolean): A = {
var a = body
while (!condition(a)) { a = body }
a
}
}
Just to document a derivative of the suggestions made earlier, I went with a tail-recursive implementation of repeat { ... } until(...) that also included a limit to the number of iterations:
def repeat[A](body: => A) = new {
def until(condition: A => Boolean, attempts: Int = 10): Option[A] = {
if (attempts <= 0) None
else {
val a = body
if (condition(a)) Some(a)
else until(condition, attempts - 1)
}
}
}
This allows the loop to bail out after attempts executions of the body:
scala> import java.util.Random
import java.util.Random
scala> val r = new Random()
r: java.util.Random = java.util.Random@cb51256
scala> repeat { r.nextInt(100) } until(_ > 90, 4)
res0: Option[Int] = Some(98)
scala> repeat { r.nextInt(100) } until(_ > 90, 4)
res1: Option[Int] = Some(98)
scala> repeat { r.nextInt(100) } until(_ > 90, 4)
res2: Option[Int] = None
scala> repeat { r.nextInt(100) } until(_ > 90, 4)
res3: Option[Int] = None
scala> repeat { r.nextInt(100) } until(_ > 90, 4)
res4: Option[Int] = Some(94)

Scala actor kills itself inconsistently

I am a newbie to Scala and I am writing Scala code to implement the Pastry protocol. The protocol itself does not matter. There are nodes, and each node has a routing table which I want to populate.
Here is the part of the code:
def act () {
def getMatchingNode (initialMatch :String) : Int = {
val len = initialMatch.length
for (i <- 0 to noOfNodes-1) {
var flag : Int = 1
for (j <- 0 to len-1) {
if (list(i).key.charAt(j) == initialMatch(j)) {
continue
}
else {
flag = 0
}
}
if (flag == 1) {
return i
}
}
return -1
}
// iterate over rows
for (ii <- 0 to rows - 1) {
for (jj <- 0 to 15) {
var initialMatch = ""
for (k <- 0 to ii-1) {
initialMatch = initialMatch + key.charAt(k)
}
initialMatch += jj
println("initialMatch",initialMatch)
if (getMatchingNode(initialMatch) != -1) {
Routing(0)(jj) = list(getMatchingNode(initialMatch)).key
}
else {
Routing(0)(jj) = "NULL"
}
}
}
}// act
The problem is that when the call to getMatchingNode takes place, the actor suddenly dies by itself. 'list' is the list of all nodes (a list of node objects).
Also, this behaviour is not consistent. The call to getMatchingNode should take place 15 times for each actor (for 10 nodes).
But while debugging, the actor kills itself inside the getMatchingNode call, sometimes after one call and sometimes after 3-4 calls.
The Scala library code which gets executed is this:
def run() {
try {
beginExecution()
try {
if (fun eq null)
handler(msg)
else
fun()
} catch {
case _: KillActorControl =>
// do nothing
case e: Exception if reactor.exceptionHandler.isDefinedAt(e) =>
reactor.exceptionHandler(e)
}
reactor.kill()
}
Eclipse shows that this code has been called from the for loop in the getMatchingNode function
def getMatchingNode (initialMatch :String) : Int = {
val len = initialMatch.length
for (i <- 0 to noOfNodes-1)
The strange thing is that sometimes the loop behaves normally and sometimes it goes to the scala code which kills the actor.
Any input on what's wrong with the code?
Any help would be appreciated.
Found the error: the 'continue' statement in the for loop caused the trouble.
I thought we could use continue in Scala as we do in C++/Java, but that is not the case.
Removing the continue solved the issue.
From the book "Programming in Scala", 2nd ed., by M. Odersky:
You may have noticed that there has been no mention of break or continue. Scala leaves out these commands because they do not mesh well with function literals, a feature described in the next chapter. It is clear what continue means inside a while loop, but what would it mean inside a function literal? While Scala supports both imperative and functional styles of programming, in this case it leans slightly towards functional programming in exchange for simplifying the language. Do not worry, though. There are many ways to program without break and continue, and if you take advantage of function literals, those alternatives can often be shorter than the original code.
I really suggest reading the book if you want to learn Scala.
Your code is based on tons of nested for loops, which can more often than not be rewritten using the higher-order functions available on the most appropriate collection.
You can rewrite your function like the following (I'm trying to make it approachable for newcomers):
//works if "list" contains "nodes" with an attribute "node.key: String"
def getMatchingNode (initialMatch :String) : Int = {
//a new list with the corresponding keys
val nodeKeys = list.map(node => node.key)
//zips each key (creates a pair) with the corresponding index in the list and then find a possible match
val matchOption: Option[(String, Int)] = (nodeKeys.zipWithIndex) find {case (key, index) => key == initialMatch}
//we convert an eventual result contained in the Option, with the right projection of the pair (which contains the index)
val idxOption = matchOption map {case (key, index) => index} //now we have an Option[Int] with a possible index
//returns the content of option if it's full (Some) or a default value of "-1" if there was no match (None). See Option[T] for more details
idxOption.getOrElse(-1)
}
The ability to easily transform or operate on a collection's elements is what makes continue, and for loops in general, less used in Scala.
You can convert the row iteration in a similar way, but I would suggest that if you need to work a lot with the collection's indices, you use an IndexedSeq or one of its implementations, like ArrayBuffer.
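For instance, one row of the routing table could be built without the inner loops, reusing key, ii, list and the rewritten getMatchingNode above (a rough sketch only):
// rough sketch: build one routing-table row as an immutable IndexedSeq
val row: IndexedSeq[String] = (0 to 15).map { jj =>
  val initialMatch = key.take(ii) + jj // first ii characters of key, then the column index
  val idx = getMatchingNode(initialMatch)
  if (idx != -1) list(idx).key else "NULL"
}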