Suppose I have this class:
case class Receipt(id: Long, state: String) {
  def transitionTo(newState: String) = {
    if (!canTransitionTo(newState)) {
      throw new IllegalStateException(s"can't transition from $state to $newState")
    }
    this.copy(state = newState)
  }
}
I'd like to test the logic in canTransitionTo (not included here for the sake of simplicity) with ScalaCheck's Commands, but I'm having a bit of trouble figuring out how to begin. Any ideas?
There are some tutorials on how to test state machines with this framework, but they test a different property. Usually they create a Command for each valid transition and let ScalaCheck run random combinations of them. The goal of such a property is to verify that the state machine behaves correctly under any number of valid transitions.
This approach will not test canTransitionTo because it assumes all transitions are valid. Testing transitions between arbitrary pairs of states would require reimplementing the notion of valid and invalid transitions in terms of ScalaCheck, which could end up even more complex than the original canTransitionTo function.
However, if one of the transition sets is much smaller than the other, ScalaCheck can help generate the other one. For example, if there are only a handful of valid transitions and tens of invalid ones, generators can help:
private val allStates: Gen[String] = Gen.oneOf("State1", "State2", "State3")
private val validTransitions: Set[(String, String)] = Set("State1" -> "State2", "State2" -> "State3", "State3" -> "State1")
private val validTransitionsGen: Gen[(String, String)] = Gen.oneOf(validTransitions.toSeq)
private val invalidTransition: Gen[(String, String)] = for {
  from <- allStates
  to <- allStates
  if !validTransitions.contains(from -> to) // this is a reimplementation of canTransitionTo
} yield from -> to
property("valid transitions") = forAll(validTransitionsGen) { transition =>
Receipt(0, transition._1).canTransitionTo(transition._2)
}
property("invalid transitions") = forAll(invalidTransition) { transition =>
!Receipt(0, transition._1).canTransitionTo(transition._2)
}
I come from a Java background and am taking over a Gatling project, where I noticed what seems to me a bit of inconsistency in when a val or a def method is used. The examples below illustrate that, and I was wondering if there's any guidance on the best usage for these within the Gatling context.
Here are some other examples where I'm not sure what should be used. I'm assuming a Switch makes sense inside a method, but I'm not sure about the others.
private def teacherViewResources: ChainBuilder =
  exec(viewResourcesFlow)
    .randomSwitch(
      70.0 -> pause(1, 2).exec(teacherLaunchResource),
      10.0 -> pause(1, 2).exec(teacherAssignResource),
      20.0 -> pause(1, 2).exec(teacherResourcesNext)
    )

private def teacherLaunchResource: ChainBuilder =
  exec(launchResourcesFlow)

val rootTeacherScenario = scenario("Root Teacher Scenario " + currentScenario.toString)
  .doIfOrElse(currentScenario == PossibleScenarios.BRANCH)(
    feed(userFeederTeacher).during(EXECUTION_TIME_SEC) {
      exec(teacherBranching)
    }
    // For use with atOnceUsers for debugging
    // feed(userFeederTeacher).exec(simulationTeacherBranching)
  )(
    exec { session =>
      logger.debug("Invalid teacher scenario chosen")
      session
    }
  )

val loginFlowWithExit = exec(loginFlow).exitHereIfFailed

val teacherBranching = group("teacherBranching") {
  exec(loginFlow)
    .exec(session => sessionSetSessionVariable(session))
    .exec(execFlaggedScenario(teacherDashboard)) // First method to run for a teacher
    .exec(logout())
}
Many thanks.
val is evaluated once while def is evaluated on every call.
Remember that Gatling DSL components are just builders, not what is executed when your test is running.
Everything that doesn't take a parameter could be a val; you just have to make sure you don't end up with forward references, e.g.:
broken:
val foo = exec(???).exec(bar) // here, bar is still null because it's populated later in the code
val bar = exec(???)
correct:
val bar = exec(???)
val foo = exec(???).exec(bar) // fine because bar is already populated
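If you'd rather not reorder the declarations, one workaround (a minimal sketch, using the same placeholder chains as above) is to make the forward-referenced member a def, since a def is evaluated when it's called rather than when the class is initialized:
val foo = exec(???).exec(bar) // ok even though bar is declared below: bar is a method, evaluated at call time
def bar = exec(???)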
I want to return a wrapper/holder for a result that I want to compute only once and only if the result is actually used. Something like:
def getAnswer(question: Question): Lazy[Answer] = ???
println(getAnswer(q).value)
This should be pretty easy to implement using lazy val:
import scala.util.Try

class Lazy[T](f: () => T) {
  private lazy val _result = Try(f())
  def value: T = _result.get
}
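For illustration, a quick usage sketch of that wrapper (the println is only there to show when evaluation happens):
val answer = new Lazy(() => { println("computing"); 42 })
// nothing has been computed yet
println(answer.value) // prints "computing", then 42
println(answer.value) // prints only 42; the result was memoized by the lazy val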
But I'm wondering if there's already something like this baked into the standard API.
A quick search pointed at Streams and DelayedLazyVal but neither is quite what I'm looking for.
Streams do memoize the stream elements, but it seems like the first element is computed at construction:
def compute(): Int = { println("computing"); 1 }
val s1 = compute() #:: Stream.empty
// computing is printed here, before doing s1.take(1)
In a similar vein, DelayedLazyVal starts computing upon construction; it even requires an execution context:
val dlv = new DelayedLazyVal(() => 1, { println("started") })
// immediately prints out "started"
There's scalaz.Need which I think you'd be able to use for this.
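For reference, a minimal sketch of how that could look, assuming scalaz is on the classpath (Question, Answer, q and computeAnswer are the hypothetical names from the question):
import scalaz.Need

def getAnswer(question: Question): Need[Answer] =
  Need(computeAnswer(question)) // not evaluated until .value is first accessed

println(getAnswer(q).value) // computed here, and memoized for any later access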
I'm developing a chess engine using Scala and Apache Spark (and I need to stress that my sanity is not the topic of this question). My problem is that the Negamax algorithm is recursive in its essence, and when I try the naive approach:
class NegaMaxSparc(@transient val sc: SparkContext) extends Serializable {

  val movesOrdering = new Ordering[Tuple2[Move, Double]]() {
    override def compare(x: (Move, Double), y: (Move, Double)): Int =
      Ordering[Double].compare(x._2, y._2)
  }

  def negaMaxSparkHelper(game: Game, color: PieceColor, depth: Int, previousMovesPar: RDD[Move]): (Move, Double) = {
    val board = game.board
    if (depth == 0) {
      (null, NegaMax.evaluateDefault(game, color))
    } else {
      val moves = board.possibleMovesForColor(color)
      val movesPar = previousMovesPar.context.parallelize(moves)

      val moveMappingFunc = (m: Move) => {
        negaMaxSparkHelper(new Game(board.boardByMakingMove(m), color.oppositeColor, null), color.oppositeColor, depth - 1, movesPar)
      }
      val movesWithScorePar = movesPar.map(moveMappingFunc)
      val move = movesWithScorePar.min()(movesOrdering)

      (move._1, -move._2)
    }
  }

  def negaMaxSpark(game: Game, color: PieceColor, depth: Int): (Move, Double) = {
    if (depth == 0) {
      (null, NegaMax.evaluateDefault(game, color))
    } else {
      val movesPar = sc.parallelize(new Array[Move](0))
      negaMaxSparkHelper(game, color, depth, movesPar)
    }
  }
}

class NegaMaxSparkBot(val maxDepth: Int, sc: SparkContext) extends Bot {
  def nextMove(game: Game): Move = {
    val nms = new NegaMaxSparc(sc)
    nms.negaMaxSpark(game, game.colorToMove, maxDepth)._1
  }
}
I get:
org.apache.spark.SparkException: RDD transformations and actions can only be invoked by the driver, not inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.
The question is: can this algorithm be implemented recursively using Spark? If not, then what is the proper Spark-way to solve that problem?
Only the driver can launch computation on RDDs. The reason is that even though RDDs "feel" like regular collections of data, behind the scenes they are still distributed collections, so launching operations on them requires coordinating the execution of tasks on all remote slaves, which Spark hides from us most of the time.
So recursing from the slaves, i.e. launching new distributed tasks dynamically from within the slaves, is not possible: only the driver can take care of such coordination.
Here's a possible alternative, a simplification of your problem (if I understand things correctly). The idea is to successively build instances of Moves, each one representing the full sequence of Moves from the initial state.
Each instance of Moves is able to transform itself into a set of Moves, each corresponding to the same sequence of Moves plus one possible next Move.
From there, the driver just has to successively flatMap the Moves as deep as we want, and the resulting RDD[Moves] will execute all operations in parallel for us.
The downside of this approach is that all depth levels are kept synchronized, i.e. we have to compute all moves at level n (i.e. the RDD[Moves] for level n) before going to the next one.
The code below is not tested; it probably has flaws and may not even compile, but hopefully it gives an idea of how to approach the problem.
/* one modification to the board */
case class Move(from: String, to: String)

case class PieceColor(color: String)

/* state of the game */
case class Board() {
  // TODO
  def possibleMovesForColor(color: PieceColor): Seq[Move] =
    Move("here", "there") :: Move("there", "over there") :: Move("there", "here") :: Nil

  // TODO: compute a new instance of board here, based on current + this move
  def update(move: Move): Board = Board()
}

/** Solution, i.e. a sequence of moves */
case class Moves(moves: Seq[Move], game: Board, color: PieceColor) {
  lazy val score = NegaMax.evaluateDefault(game, color)

  /** @return all valid next Moves */
  def nextPossibleMoves: Seq[Moves] =
    game.possibleMovesForColor(color).map { nextMove =>
      copy(moves = nextMove +: moves,
           game  = game.update(nextMove))
    }
}

/** Driver code: negaMax looks for the best next move from a given game state */
def negaMax(sc: SparkContext, game: Board, color: PieceColor, maxDepth: Int): Moves = {
  val initialSolution = Moves(Seq.empty[Move], game, color)

  val allPlays: RDD[Moves] =
    (1 to maxDepth).foldLeft(sc.parallelize(Seq(initialSolution))) {
      (rdd, _) => rdd.flatMap(_.nextPossibleMoves)
    }

  allPlays.reduce { case (m1, m2) => if (m1.score < m2.score) m1 else m2 }
}
This is a limitation that makes sense in terms of the implementation, but it can be a pain to work with.
You can try pulling the recursion out to the top level, into the "driver" code that creates and operates on RDDs. Something like:
def step(rdd: RDD[Move], limit: Int): RDD[Move] =
  if (limit == 0) rdd
  else {
    val newRdd = rdd.flatMap(...)
    step(newRdd, limit - 1)
  }
Alternatively, it's always possible to translate recursion into iteration by managing the "stack" explicitly by hand (although it may result in more cumbersome code).
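For a helper like step above, which is already tail-recursive, the iterative form is just a while loop over the same counter (a sketch; the flatMap body is elided exactly as in the snippet above):
def stepIteratively(start: RDD[Move], limit: Int): RDD[Move] = {
  var rdd = start
  var remaining = limit
  while (remaining > 0) {
    rdd = rdd.flatMap(...) // same expansion step as in the recursive version
    remaining -= 1
  }
  rdd
}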
To avoid X & Y problems, a little background:
I'm trying to set up a web project where I'm going to be duplicating business logic on the server and client side, the client obviously in JavaScript and the server in Scala. I plan to write the business logic in Cucumber so I can make sure the tests and functionality line up on both sides. Finally, I'd like to have a crack at bringing ScalaCheck and JSCheck into this, so input data is generated rather than specified.
Basically, the statements would work like this:
Given statements add generators.
When statements specify functions to act upon those values in sequence.
Then statements take the input data and the final result data and run a property.
The objective is to make this sort of thing composable so you could specify several generators, a set of actions to run on each of them, and then a set of properties that would each get run on the inputs and result.
I've done this already in JavaScript (technically CoffeeScript), and of course with a dynamic language it's straightforward to do. Basically, what I want to be able to do in my Scala step definitions is this (excuse the arbitrary test data):
class CucumberSteps extends ScalaDsl with EN
    with ShouldMatchers with QuickCheckCucumberSteps {

  Given("""^an list of integer between 0 and 100$""") {
    addGenerator(Gen.containerOf[List, Int](Gen.choose(0, 100)))
  }

  Given("""^an list of random string int 500 and 700$""") {
    addGenerator(Gen.containerOf[List, Int](Gen.choose(500, 700)))
  }

  When("""^we concatenate the two lists$""") {
    addAction { (l1: List[Int], l2: List[Int]) => l1 ::: l2 }
  }

  Then("""^then the size of the result should equal the sum of the input sizes$""") {
    runProperty { (inputs: (List[Int], List[Int]), result: List[Int]) =>
      inputs._1.size + inputs._2.size == result.size
    }
  }
}
So the key thing I want to do is create a trait QuickCheckCucumberSteps that will be the API, implementing addGenerator, addAction and runProperty.
Here's what I've roughed out so far, and where I get stuck:
trait QuickCheckCucumberSteps extends ShouldMatchers {
  private var generators = ArrayBuffer[Gen[Any]]()
  private var actions = ArrayBuffer[AnyFunction]()

  def addGenerator(newGen: Gen[Any]): Unit =
    generators += newGen

  def addAction(newFun: => AnyFunction): Unit =
    actions += newFun

  def buildPartialProp = {
    val li = generators
    generators.length match {
      case 1 => forAll(li(0)) _
      case 2 => forAll(li(0), li(1)) _
      case 3 => forAll(li(0), li(1), li(2)) _
      case 4 => forAll(li(0), li(1), li(2), li(3)) _
      case _ => forAll(li(0), li(1), li(2), li(3), li(4)) _
    }
  }

  def runProperty(propertyFunc: => Any): Prop = {
    val partial = buildPartialProp
    val property = partial {
      ??? // Need a function that takes x number of generator inputs,
          // applies each action in sequence
          // and then applies the `propertyFunc` to the
          // inputs and results.
    }
    val result = Test.check(new Test.Parameters.Default {}, property)
    result.status match {
      case Passed => println("passed all tests")
      case Failed(a, l) => fail(format(pretty(result), "", "", 75))
      case _ => println("other cases")
    }
  }
}
My key issue is this: I want the commented block to become a function that takes all the added actions, applies them in order, and then runs and returns the result of the property function. Is this possible to express with Scala's type system, and if so, how do I get started? I'm happy to do the reading and earn this one, but I need at least a way forward, as I don't know how to express it at this point. Happy to drop in my JavaScript code if what I'm trying to make here isn't clear.
If I were you, I wouldn't put ScalaCheck generator code within your Cucumber Given/When/Then statements :). The ScalaCheck API calls are part of the "test rig", so they're not under test. Try this (not compiled/tested):
class CucumberSteps extends ScalaDsl with EN with ShouldMatchers {
  forAll(Gen.containerOf[List, Int](Gen.choose(0, 100)),
         Gen.containerOf[List, Int](Gen.choose(500, 700)))
        ((l1: List[Int], l2: List[Int]) => {
    var result: List[Int] = Nil
    Given(s"""^a list of integers between 0 and 100: $l1 $""") { }
    Given(s"""^a list of integers between 500 and 700: $l2 $""") { }
    When("""^we concatenate the two lists$""") { result = l1 ::: l2 }
    Then("""^the size of the result should equal the sum of the input sizes$""") {
      l1.size + l2.size == result.size
    }
  })
}
I'm just starting out with Scala and am trying a little toy program - in this case a text-based TicTacToe. I wrote a working version based on what I know about Scala, but noticed it was mostly imperative and my classes were mutable.
I'm going through and trying to apply some functional idioms, and I have managed to at least make the classes representing the game state immutable. However, I'm left with a class responsible for performing the game loop that relies on mutable state and an imperative loop, as follows:
var board: TicTacToeBoard = new TicTacToeBoard

def start() {
  var gameState: GameState = new XMovesNext
  outputState(gameState)
  while (!gameState.isGameFinished) {
    val position: Int = getSelectionFromUser
    board = board.updated(position, gameState.nextTurn)
    gameState = getGameState(board)
    outputState(gameState)
  }
}
What would be a more idiomatic way to program what I'm doing imperatively in this loop?
Full source code is here https://github.com/whaley/TicTacToe-in-Scala/tree/master/src/main/scala/com/jasonwhaley/tictactoe
IMHO, for Scala the imperative loop is just fine, but you can always write a recursive function to behave like a loop. I also threw in some pattern matching.
def start() {
  def loop(board: TicTacToeBoard): Unit = board.state match {
    case Finished => ()
    case Unfinished(gameState) => {
      gameState.output()
      val position: Int = getSelectionFromUser()
      loop(board.updated(position))
    }
  }
  loop(new TicTacToeBoard)
}
Suppose we had a function whileSome : (a -> Option[a]) -> a -> (), which repeatedly runs the input function until its result is None. That would strip away a little boilerplate.
def start() {
  def step(board: TicTacToeBoard) = {
    board.gameState.output()
    val position: Int = getSelectionFromUser()
    board.updated(position) // returns either Some(nextBoard) or None
  }
  whileSome(step, new TicTacToeBoard)
}
whileSome should be trivial to write; it is simply an abstraction of the former pattern. I'm not sure if it's in any common Scala libs, but in Haskell you could grab whileJust_ from monad-loops.
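For completeness, here is a minimal sketch of what whileSome might look like in Scala (the name and shape are just assumptions matching the signature above):
import scala.annotation.tailrec

@tailrec
def whileSome[A](f: A => Option[A], a: A): Unit = f(a) match {
  case Some(next) => whileSome(f, next)
  case None => ()
}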
You could implement it as a recursive method. Here's an unrelated example:
object Guesser extends App {
  val MIN = 1
  val MAX = 100

  readLine("Think of a number between 1 and 100. Press enter when ready")

  def guess(max: Int, min: Int) {
    val cur = (max + min) / 2
    readLine("Is the number " + cur + "? (y/n) ") match {
      case "y" => println("I thought so")
      case "n" => {
        def smallerGreater() {
          readLine("Is it smaller or greater? (s/g) ") match {
            case "s" => guess(cur - 1, min)
            case "g" => guess(max, cur + 1)
            case _   => smallerGreater()
          }
        }
        smallerGreater()
      }
      case _ => {
        println("Huh?")
        guess(max, min)
      }
    }
  }

  guess(MAX, MIN)
}
How about something like:
Stream.continually(processMove).takeWhile(!_.isGameFinished)
where processMove is a function that gets the selection from the user, updates the board and returns the new state.
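A sketch of what processMove might look like, reusing the names from the question (it still threads the board and state through vars, like the original loop):
var board: TicTacToeBoard = new TicTacToeBoard
var gameState: GameState = new XMovesNext

def processMove(): GameState = {
  val position: Int = getSelectionFromUser
  board = board.updated(position, gameState.nextTurn)
  gameState = getGameState(board)
  gameState
}

Stream.continually(processMove()).takeWhile(!_.isGameFinished)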
I'd go with the recursive version, but here's a proper implementation of the Stream version:
def start() {
  def initialBoard: TicTacToeBoard = new TicTacToeBoard
  def initialGameState: GameState = new XMovesNext
  def gameIterator = Stream.iterate(initialBoard -> initialGameState) _
  def game: Stream[GameState] = {
    val (moves, end) = gameIterator {
      case (board, gameState) =>
        val position: Int = getSelectionFromUser
        val updatedBoard = board.updated(position, gameState.nextTurn)
        (updatedBoard, getGameState(updatedBoard))
    }.span { case (_, gameState) => !gameState.isGameFinished }
    (moves ++ end.take(1)) map { case (_, gameState) => gameState }
  }
  game foreach outputState
}
This looks weirder than it should. Ideally, I'd use takeWhile, and then map it afterwards, but it won't work as the last case would be left out!
If the moves of the game could be discarded, then dropWhile followed by head would work. If I had the side effect (outputState) inside the Stream, I could go that route, but having side effects inside a Stream is way worse than a var with a while loop.
So, instead, I use span, which gives me both takeWhile and dropWhile but forces me to save the intermediate results -- which can be really bad if memory is a concern, as the whole game will be kept in memory because moves points to the head of the Stream. So I had to encapsulate all that inside another method, game. That way, when I foreach through the results of game, there won't be anything pointing to the Stream's head.
Another alternative would be to get rid of the other side effect you have: getSelectionFromUser. You can get rid of that with an Iteratee, and then you can save the last move and reapply it.
OR... you could write yourself a takeTo method and use that.
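For what it's worth, a minimal sketch of what such a takeTo could look like (the name and semantics are assumptions: like takeWhile, but it also keeps the first element for which the predicate fails, so the final, finished state isn't dropped):
// like takeWhile, but also keeps the first element that fails the predicate
def takeTo[A](s: Stream[A])(p: A => Boolean): Stream[A] = s match {
  case hd #:: tl => if (p(hd)) hd #:: takeTo(tl)(p) else Stream(hd)
  case _ => Stream.empty
}

// e.g. takeTo(Stream.from(1))(_ < 3) == Stream(1, 2, 3)
With that, the game method above could use a single takeTo over the iterated states instead of span.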