ScalaCheck shrinking command data in stateful testing

When doing stateful testing with ScalaCheck, the library can shrink the commands needed to find a certain bug, as in the counter example from the user guide: https://github.com/typelevel/scalacheck/blob/master/doc/UserGuide.md. But what if the commands take arguments, and I want ScalaCheck to shrink the data inside the commands as well? See the scenario below, where I am testing the Counter:
package Counter

case class Counter() {
  private var n = 1

  def increment(incrementAmount: Int) = {
    if (n % 100 != 0) {
      n += incrementAmount
    }
  }

  def get(): Int = n
}
The counter is programmed with a bug: it does not increment by the given amount when n % 100 == 0. So whenever the value of n is x*100, where x is any positive integer, the counter is not incremented. I am testing the counter with the ScalaCheck stateful test below:
import Counter.Counter
import org.scalacheck.commands.Commands
import org.scalacheck.{Gen, Prop}
import scala.util.{Success, Try}

object CounterCommands extends Commands {
  type State = Int
  type Sut = Counter

  def canCreateNewSut(newState: State, initSuts: Traversable[State],
                      runningSuts: Traversable[Sut]): Boolean = true
  def newSut(state: State): Sut = new Counter
  def destroySut(sut: Sut): Unit = ()
  def initialPreCondition(state: State): Boolean = true
  def genInitialState: Gen[State] = Gen.const(1)
  def genCommand(state: State): Gen[Command] =
    Gen.oneOf(Increment(Gen.chooseNum(1, 200000).sample.get), Get)

  case class Increment(incrementAmount: Int) extends UnitCommand {
    def run(counter: Sut) = counter.increment(incrementAmount)
    def nextState(state: State): State = state + incrementAmount
    def preCondition(state: State): Boolean = true
    def postCondition(state: State, success: Boolean) = success
  }

  case object Get extends Command {
    type Result = Int
    def run(counter: Sut): Result = counter.get()
    def nextState(state: State): State = state
    def preCondition(state: State): Boolean = true
    def postCondition(state: State, result: Try[Int]): Prop = result == Success(state)
  }
}
Every time the increment command is chosen, it is given an arbitrary integer between 1 and 200000 as argument. Running the test gave the following output:
! Falsified after 28 passed tests.
> Labels of failing property:
initialstate = 1
seqcmds = (Increment(1); Increment(109366); Increment(1); Increment(1); Increment(104970); Increment(27214); Increment(197045); Increment(1); Increment(54892); Get => 438600)
> ARG_0: Actions(1,List(Increment(1), Increment(109366), Increment(1), Increment(1), Increment(104970), Increment(27214), Increment(197045), Increment(1), Increment(54892), Get),List())
> ARG_0_ORIGINAL: Actions(1,List(Get, Get, Increment(1), Increment(109366), Get, Get, Get, Get, Increment(1), Get, Increment(1), Increment(104970), Increment(27214), Get, Increment(197045), Increment(1), Increment(54892), Get, Get, Get, Get, Get, Increment(172491), Get, Increment(6513), Get, Increment(57501), Increment(200000)),List())
ScalaCheck did indeed shrink the commands needed to find the bug (as can be seen in ARG_0), but it did not shrink the data inside the commands. It ended up with a much larger counter value (438600) than is actually needed to find the bug. If the first increment command were given 99 as its argument, the bug would be found.
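For reference, that minimal failing scenario can be driven by hand; this is just the sequence described above, without ScalaCheck:

// Drive the counter to exactly 100, then observe the dropped increment
val c = Counter()
c.increment(99) // n: 1 -> 100
c.increment(1)  // bug: n % 100 == 0, so this increment is silently dropped
assert(c.get() == 100) // the model (State) would predict 101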
Is there any way in ScalaCheck to shrink the data inside the commands when running stateful tests? The ScalaCheck version used is 1.14.1.
EDIT:
I tried simplifying the bug (incrementing only if n != 10) and added the shrinker that was suggested by Levi, but still could not get it to work. The whole runnable code can be seen below:
package LocalCounter

import org.scalacheck.commands.Commands
import org.scalacheck.{Gen, Prop, Properties, Shrink}
import scala.util.{Success, Try}

case class Counter() {
  private var n = 1

  def increment(incrementAmount: Int) = {
    if (n != 10) {
      n += incrementAmount
    }
  }

  def get(): Int = n
}

object CounterCommands extends Commands {
  type State = Int
  type Sut = Counter

  def canCreateNewSut(newState: State, initSuts: Traversable[State],
                      runningSuts: Traversable[Sut]): Boolean = true
  def newSut(state: State): Sut = new Counter
  def destroySut(sut: Sut): Unit = ()
  def initialPreCondition(state: State): Boolean = true
  def genInitialState: Gen[State] = Gen.const(1)
  def genCommand(state: State): Gen[Command] =
    Gen.oneOf(Increment(Gen.chooseNum(1, 40).sample.get), Get)

  case class Increment(incrementAmount: Int) extends UnitCommand {
    def run(counter: Sut) = counter.increment(incrementAmount)
    def nextState(state: State): State = state + incrementAmount
    def preCondition(state: State): Boolean = true
    def postCondition(state: State, success: Boolean) = success
  }

  case object Get extends Command {
    type Result = Int
    def run(counter: Sut): Result = counter.get()
    def nextState(state: State): State = state
    def preCondition(state: State): Boolean = true
    def postCondition(state: State, result: Try[Int]): Prop = result == Success(state)
  }

  implicit val shrinkCommand: Shrink[Command] = Shrink({
    case Increment(amt) => Shrink.shrink(amt).map(Increment(_))
    case Get => Stream.empty
  })
}

object CounterCommandsTest extends Properties("CounterCommands") {
  CounterCommands.property().check()
}
Running the code gave the following output:
! Falsified after 4 passed tests.
> Labels of failing property:
initialstate = 1
seqcmds = (Increment(9); Increment(40); Get => 10)
> ARG_0: Actions(1,List(Increment(9), Increment(40), Get),List())
> ARG_0_ORIGINAL: Actions(1,List(Increment(9), Increment(34), Increment(40), Get),List())
Which is not the minimal example.

You should be able to define a custom Shrink for Command along these lines:
implicit val shrinkCommand: Shrink[Command] = Shrink({
  case Increment(amt) => Shrink.shrink(amt).map(Increment(_))
  case Get => Stream.empty
})
Note that, because Stream is deprecated in Scala 2.13, you may need to disable deprecation warnings there (ScalaCheck 1.15 will allow LazyList to be used to define shrinks).
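For instance, you can keep the Stream-based shrinker on 2.13 and silence the warning locally; a sketch, assuming Scala 2.13.2+ where @nowarn is available in the standard library:

import scala.annotation.nowarn

// Stream is deprecated in 2.13, but ScalaCheck 1.14's Shrink still expects it
@nowarn("cat=deprecation")
implicit val shrinkCommand: Shrink[Command] = Shrink({
  case Increment(amt) => Shrink.shrink(amt).map(Increment(_))
  case Get => Stream.empty
})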

Related

Scala print foo/bar alternately

I'm trying to code this LeetCode exercise of printing foo/bar alternately in Scala, using conventional Runnables with wait() / notifyAll(), but I can't get it to produce the wanted output, which should be:
foo bar foo bar foo bar foo bar foo bar
Here's the code:
import scala.concurrent.ExecutionContext.Implicits.global

class Foo extends Runnable {
  override def run(): Unit = { print("foo ") }
}

class Bar extends Runnable {
  override def run(): Unit = { print("bar ") }
}

val printFoo = new Foo
val printBar = new Bar

class FooBar {
  private var foosLoop: Boolean = false

  @throws(classOf[InterruptedException])
  def foo: Unit = for (_ <- 1 to 5) synchronized {
    while (foosLoop) { wait() }
    printFoo.run()
    foosLoop = true
    notifyAll()
  }

  @throws(classOf[InterruptedException])
  def bar: Unit = for (_ <- 1 to 5) synchronized {
    while (!foosLoop) { wait() }
    printBar.run()
    foosLoop = false
    notifyAll()
  }
}

val fb = new FooBar
fb.foo
fb.bar
// Output:
// foo <=== prints only the first "foo "
Could someone help me figure out what I did wrong?
My second question is: Can it be implemented with Scala Futures replacing Runnables?
UPDATE:
The posted code actually works, as long as fb.foo and fb.bar are called from separate threads:
val tFoo = new Thread(new Runnable { override def run(): Unit = fb.foo })
val tBar = new Thread(new Runnable { override def run(): Unit = fb.bar })
tFoo.start()
tBar.start()
Could someone help me figure out what I did wrong?
No idea; I haven't used Runnables in my life, and they are not idiomatic in Scala (I would say they are rarely used directly in modern Java anymore either).
Can it be implemented with Scala Futures replacing Runnables?
Yes, something like this:
import java.util.concurrent.Semaphore
import scala.concurrent.{ExecutionContext, Future}

object RunAlternately {
  /** Runs two tasks concurrently, alternating between the two.
    * @param n the number of times to run each task.
    * @param aTask the first task.
    * @param bTask the second task.
    */
  def apply(n: Int)(aTask: => Unit)(bTask: => Unit)(implicit ec: ExecutionContext): Future[Unit] = {
    val aLock = new Semaphore(1)
    val bLock = new Semaphore(0)

    def runOne(task: => Unit, thisLock: Semaphore, thatLock: Semaphore): Future[Unit] =
      Future {
        var i = 0
        while (i < n) {
          thisLock.acquire()
          task
          thatLock.release()
          i += 1
        }
      }

    val aFuture = runOne(aTask, thisLock = aLock, thatLock = bLock)
    val bFuture = runOne(bTask, thisLock = bLock, thatLock = aLock)
    aFuture.flatMap(_ => bFuture)
  }
}
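A hypothetical usage example (the object name is mine), blocking at the end only to keep the demo JVM alive:

import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object RunAlternatelyDemo extends App {
  // Each task runs 5 times; the two semaphores force strict alternation
  val done = RunAlternately(5)(print("foo "))(print("bar "))
  Await.result(done, 5.seconds) // prints: foo bar foo bar foo bar foo bar foo bar
}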
However, this kind of thing is usually better modelled with even higher-level APIs like IO or Streams.
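For instance, with Akka Streams the alternation falls out of zipping two sources, with no locks at all. A sketch, assuming Akka 2.6, where an implicit ActorSystem provides the materializer:

import akka.actor.ActorSystem
import akka.stream.scaladsl.Source

object ZipDemo extends App {
  implicit val system: ActorSystem = ActorSystem("alternate")

  Source(List.fill(5)("foo "))
    .zip(Source(List.fill(5)("bar ")))                  // pairs elements in lockstep
    .runForeach { case (foo, bar) => print(foo + bar) } // foo bar foo bar ...
    .onComplete(_ => system.terminate())(system.dispatcher)
}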

Scala stream and ExecutionContext issue

I'm new to Scala and I'm facing a few problems in my assignment:
I want to build a stream class that can do 3 main tasks: filter, map, and forEach.
My stream's data is an array of elements. Each of the 3 main tasks should run in 2 different threads on my stream's array.
In addition, I need to divide the logic of an action and its actual run into two different parts: first declare all tasks on the stream, and only when I call stream.start() should the actual actions happen.
My code:
import java.util.concurrent.{ExecutorService, Executors}
import scala.collection.mutable.ArrayBuffer
import scala.concurrent.ExecutionContext

class LearningStream[A]() {
  val es: ExecutorService = Executors.newFixedThreadPool(2)
  val ec = ExecutionContext.fromExecutorService(es)
  var streamValues: ArrayBuffer[A] = ArrayBuffer[A]()
  var r: Runnable = () => "";

  def setValues(streamv: ArrayBuffer[A]) = {
    streamValues = streamv;
  }

  def filter(p: A => Boolean): LearningStream[A] = {
    var ls_filtered: LearningStream[A] = new LearningStream[A]()
    r = () => {
      println("running real filter..")
      // note: this local r (the right half of the split) shadows the Runnable field r
      val (l, r) = streamValues.splitAt(streamValues.length / 2)
      val a: ArrayBuffer[A] = es.submit(() => l.filter(p)).get()
      val b: ArrayBuffer[A] = es.submit(() => r.filter(p)).get()
      ls_filtered.setValues(a ++ b)
    }
    return ls_filtered
  }

  def map[B](f: A => B): LearningStream[B] = {
    var ls_map: LearningStream[B] = new LearningStream[B]()
    r = () => {
      println("running real map..")
      val (l, r) = streamValues.splitAt(streamValues.length / 2)
      val a: ArrayBuffer[B] = es.submit(() => l.map(f)).get()
      val b: ArrayBuffer[B] = es.submit(() => r.map(f)).get()
      ls_map.setValues(a ++ b)
    }
    return ls_map
  }

  def forEach(c: A => Unit): Unit = {
    r = () => {
      println("running real forEach")
      streamValues.foreach(c)
    }
  }

  def insert(a: A): Unit = {
    streamValues += a
  }

  def start(): Unit = {
    ec.submit(r)
  }

  def shutdown(): Unit = {
    ec.shutdown()
  }
}
My main:
def main(args: Array[String]): Unit = {
  var factorial = 0
  val s = new LearningStream[String]
  s.filter(str => str.startsWith("-")).map(s => s.toInt * (-1)).forEach(i => factorial = factorial * i)

  for (i <- -5 to 5) {
    s.insert(i.toString)
  }
  println(s.streamValues)
  s.start()
  println(factorial)
}
The main prints only the filter's output, and factorial isn't changed (still 1).
What am I missing here?
My solution: @Levi Ramsey left a few good hints in the comments, if you want hints rather than the full solution.
First problem: only one command (filter) ran and the others didn't. Solution: make the Runnable of each command submit the next stream's Runnable, via:
ec.submit(ls_map.r)
In order to be able to close all sessions, we need to add another LearningStream data member to the class. However, we can't just add a regular LearningStream object, because it depends on the type parameter [A]. Therefore, I implemented a trait that has the close function, and my data member was of that trait type.
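Putting both ideas together, the filter stage might look roughly like this. A sketch, not a drop-in replacement; Closable and nextStage are my names for the workaround described above:

// A type-agnostic handle on the next stage, so a LearningStream[A] can point
// at a LearningStream[B] without knowing B
trait Closable {
  def r: Runnable
  def shutdown(): Unit
}

// Inside class LearningStream[A] (which now extends Closable):
var nextStage: Option[Closable] = None

def filter(p: A => Boolean): LearningStream[A] = {
  val ls_filtered = new LearningStream[A]()
  nextStage = Some(ls_filtered)
  r = () => {
    println("running real filter..")
    val (left, right) = streamValues.splitAt(streamValues.length / 2)
    val a: ArrayBuffer[A] = es.submit(() => left.filter(p)).get()
    val b: ArrayBuffer[A] = es.submit(() => right.filter(p)).get()
    ls_filtered.setValues(a ++ b)
    ec.submit(ls_filtered.r) // chain: kick off the next stage once this one is done
  }
  ls_filtered
}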

How to unit test BroadcastProcessFunction in flink when processElement depends on broadcasted data

I implemented a Flink stream with a BroadcastProcessFunction. From processBroadcastElement I get my model, and I apply it to my events in processElement.
I can't find a way to unit test my stream, since I haven't found a solution that ensures the model is dispatched prior to the first event.
I would say there are two ways of achieving this:
1. Find a solution to have the model pushed into the stream first
2. Have the broadcast state filled with the model prior to the execution of the stream, so that it is restored
I may have missed something, but I have not found a simple way to do this.
Here is a simple unit test with my issue:
import org.apache.flink.api.common.state.MapStateDescriptor
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction
import org.apache.flink.streaming.api.functions.sink.SinkFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.util.Collector
import org.scalatest.Matchers._
import org.scalatest.{BeforeAndAfter, FunSuite}
import scala.collection.mutable

class BroadCastProcessor extends BroadcastProcessFunction[Int, (Int, String), String] {
  import BroadCastProcessor._

  override def processElement(value: Int,
                              ctx: BroadcastProcessFunction[Int, (Int, String), String]#ReadOnlyContext,
                              out: Collector[String]): Unit = {
    val broadcastState = ctx.getBroadcastState(broadcastStateDescriptor)
    if (broadcastState.contains(value)) {
      out.collect(broadcastState.get(value))
    }
  }

  override def processBroadcastElement(value: (Int, String),
                                       ctx: BroadcastProcessFunction[Int, (Int, String), String]#Context,
                                       out: Collector[String]): Unit = {
    ctx.getBroadcastState(broadcastStateDescriptor).put(value._1, value._2)
  }
}

object BroadCastProcessor {
  val broadcastStateDescriptor: MapStateDescriptor[Int, String] =
    new MapStateDescriptor[Int, String]("int_to_string", classOf[Int], classOf[String])
}

class CollectSink extends SinkFunction[String] {
  import CollectSink._

  override def invoke(value: String): Unit = {
    values += value
  }
}

object CollectSink { // must be static
  val values: mutable.MutableList[String] = mutable.MutableList[String]()
}

class BroadCastProcessTest extends FunSuite with BeforeAndAfter {
  before {
    CollectSink.values.clear()
  }

  test("add_elem_to_broadcast_and_process_should_apply_broadcast_rule") {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(1)
    val dataToProcessStream = env.fromElements(1)
    val ruleToBroadcastStream = env.fromElements(1 -> "1", 2 -> "2", 3 -> "3")
    val broadcastStream = ruleToBroadcastStream.broadcast(BroadCastProcessor.broadcastStateDescriptor)

    dataToProcessStream
      .connect(broadcastStream)
      .process(new BroadCastProcessor)
      .addSink(new CollectSink())

    // execute
    env.execute()
    CollectSink.values should contain("1")
  }
}
Update thanks to David Anderson
I went for the buffer solution. I defined a process function for the synchronization:
class SynchronizeModelAndEvent(modelNumberToWaitFor: Int) extends CoProcessFunction[Int, (Int, String), Int] {
  val eventBuffer: mutable.MutableList[Int] = mutable.MutableList[Int]()
  var modelEventsNumber = 0

  override def processElement1(value: Int, ctx: CoProcessFunction[Int, (Int, String), Int]#Context, out: Collector[Int]): Unit = {
    if (modelEventsNumber < modelNumberToWaitFor) {
      eventBuffer += value
      return
    }
    out.collect(value)
  }

  override def processElement2(value: (Int, String), ctx: CoProcessFunction[Int, (Int, String), Int]#Context, out: Collector[Int]): Unit = {
    modelEventsNumber += 1
    if (modelEventsNumber >= modelNumberToWaitFor) {
      eventBuffer.foreach(event => out.collect(event))
      eventBuffer.clear() // avoid re-emitting the buffer on later broadcast elements
    }
  }
}
And so I need to add it to my stream:
dataToProcessStream
  .connect(ruleToBroadcastStream)
  .process(new SynchronizeModelAndEvent(3))
  .connect(broadcastStream)
  .process(new BroadCastProcessor)
  .addSink(new CollectSink())
Thanks
There isn't an easy way to do this. You could have processElement buffer all of its input until the model has been received by processBroadcastElement. Or run the job once with no event traffic and take a savepoint once the model has been broadcast. Then restore that savepoint into the same job, but with its event input connected.
By the way, the capability you are looking for is often referred to as "side inputs" in the Flink community.
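A minimal sketch of the buffering option for the non-keyed case (the class name is mine; the in-memory buffer is fine for a single-task unit test, but a production job would keep it in checkpointed state, and this version only waits for the first broadcast element):

import org.apache.flink.api.common.state.ReadOnlyBroadcastState
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction
import org.apache.flink.util.Collector
import scala.collection.mutable

class BufferingBroadcastProcessor extends BroadcastProcessFunction[Int, (Int, String), String] {
  import BroadCastProcessor._

  private val pending = mutable.Buffer[Int]()
  private var modelReceived = false

  private def emit(value: Int, state: ReadOnlyBroadcastState[Int, String], out: Collector[String]): Unit =
    if (state.contains(value)) out.collect(state.get(value))

  override def processElement(value: Int,
                              ctx: BroadcastProcessFunction[Int, (Int, String), String]#ReadOnlyContext,
                              out: Collector[String]): Unit =
    if (!modelReceived) pending += value // model not here yet: hold the event back
    else emit(value, ctx.getBroadcastState(broadcastStateDescriptor), out)

  override def processBroadcastElement(value: (Int, String),
                                       ctx: BroadcastProcessFunction[Int, (Int, String), String]#Context,
                                       out: Collector[String]): Unit = {
    val state = ctx.getBroadcastState(broadcastStateDescriptor)
    state.put(value._1, value._2)
    modelReceived = true
    pending.foreach(emit(_, state, out)) // flush everything buffered so far
    pending.clear()
  }
}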
Thanks to David Anderson and Matthieu, I wrote this generic CoProcessFunction that applies the requested delay to the event stream:
import org.apache.flink.streaming.api.functions.co.CoProcessFunction
import org.apache.flink.util.Collector
import scala.collection.mutable

class SynchronizeEventsWithRules[A, B](rulesToWait: Int) extends CoProcessFunction[A, B, A] {
  val eventBuffer: mutable.MutableList[A] = mutable.MutableList[A]()
  var processedRules = 0

  override def processElement1(value: A, ctx: CoProcessFunction[A, B, A]#Context, out: Collector[A]): Unit = {
    if (processedRules < rulesToWait) {
      println("1 item buffered")
      println(rulesToWait + "--" + processedRules)
      eventBuffer += value
      return
    }
    eventBuffer.clear()
    println("send input to output without buffering:")
    out.collect(value)
  }

  override def processElement2(value: B, ctx: CoProcessFunction[A, B, A]#Context, out: Collector[A]): Unit = {
    processedRules += 1
    println("1 rule processed, processedRules: " + processedRules)
    if (processedRules >= rulesToWait && eventBuffer.length > 0) {
      println("send buffered data to output")
      eventBuffer.foreach(event => out.collect(event))
      eventBuffer.clear()
    }
  }
}
Unfortunately, this does not help at all in my case, because the subject under test was a KeyedBroadcastProcessFunction, which makes a delay on the event data irrelevant. Because of that, I tried applying a flatMap that makes the rule stream n times larger, where n is the number of CPUs, so that the resulting event stream would always be in sync with the rule stream and arrive after it, but that did not help either.
In the end I came to this simple solution. It is of course not deterministic, but given the nature of parallelism and concurrency, the problem itself is not deterministic either.
If we set delayMilis big enough (>100), the result will be deterministic:
val delayMilis = 100
val synchronizedInput = inputEventStream.map(x => {
  Thread.sleep(delayMilis)
  x
}).keyBy(_.someKey)
You can also change the mapping function to the following, to apply the delay only to the first element:
package util

import org.apache.flink.api.common.functions.MapFunction

class DelayEvents[T](delayMilis: Int) extends MapFunction[T, T] {
  var delayed = false

  override def map(value: T): T = {
    if (!delayed) {
      delayed = true
      Thread.sleep(delayMilis)
    }
    value
  }
}

val delayMilis = 100
val synchronizedInput = inputEventStream.map(new DelayEvents(100)).keyBy(_.someKey)

Scalacheck - Add parameters to commands

In the ScalaCheck documentation for stateful testing, an ATM machine is mentioned as a use case. For it to work, the commands need parameters, for example the PIN or the withdrawal amount. In the given example, the methods in the class Counter don't have parameters.
Now my question is how I could test a method like this with ScalaCheck's stateful testing:
class Counter {
  private var n = 0
  def inc(i: Int) = n += i
  ...
}
The run and nextState methods of a command don't take a parameter. Adding a Random.nextInt wouldn't work, because the value would differ between run and nextState and the test would fail:
case object Inc extends UnitCommand {
  def run(sut: Sut): Unit = sut.inc(Random.nextInt)
  def nextState(state: State): State = state + Random.nextInt
  ...
}
Is there any way to pass a parameter to the Sut?
As you may notice from how genCommand is defined, ScalaCheck Commands actually does something like a Cartesian product between the initial states generated by genInitialState and the series of commands generated by genCommand. So if some of your commands need a parameter, you should convert them from objects into case classes and provide a Gen for them. Modifying the example from the docs, you will need something like this:
/** A generator that, given the current abstract state, should produce
  * a suitable Command instance. */
def genCommand(state: State): Gen[Command] = {
  val incGen = for (v <- arbitrary[Int]) yield Inc(v)
  val decGen = for (v <- arbitrary[Int]) yield Dec(v)
  Gen.oneOf(incGen, decGen, Gen.oneOf(Get, Reset))
}

// A UnitCommand is a command that doesn't produce a result
case class Inc(dif: Int) extends UnitCommand {
  def run(sut: Sut): Unit = sut.inc(dif)
  def nextState(state: State): State = state + dif

  // This command has no preconditions
  def preCondition(state: State): Boolean = true

  // This command should always succeed (never throw an exception)
  def postCondition(state: State, success: Boolean): Prop = success
}

case class Dec(dif: Int) extends UnitCommand {
  def run(sut: Sut): Unit = sut.dec(dif)
  def nextState(state: State): State = state - dif
  def preCondition(state: State): Boolean = true
  def postCondition(state: State, success: Boolean): Prop = success
}
Note that if your parameters are constants rather than variables (as in the case of a PIN code), you should either hard-code them in the commands, or make the whole specification a class rather than an object and pass those parameters in from the outside.
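A sketch of that last option, with a hypothetical PIN-protected lock as the system under test (PinLock and PinLockSpec are made-up names):

import org.scalacheck.commands.Commands
import org.scalacheck.{Gen, Prop}
import scala.util.{Success, Try}

// Hypothetical SUT: a lock that opens only for the right PIN
class PinLock(correctPin: String) {
  private var open = false
  def enterPin(pin: String): Unit = open = pin == correctPin
  def isOpen: Boolean = open
}

// The specification is a class, so the constant PIN comes in from the outside
class PinLockSpec(correctPin: String) extends Commands {
  type State = Boolean // whether the lock should be open
  type Sut = PinLock

  def canCreateNewSut(newState: State, initSuts: Traversable[State],
                      runningSuts: Traversable[Sut]): Boolean = true
  def newSut(state: State): Sut = new PinLock(correctPin)
  def destroySut(sut: Sut): Unit = ()
  def initialPreCondition(state: State): Boolean = !state
  def genInitialState: Gen[State] = Gen.const(false)
  def genCommand(state: State): Gen[Command] = {
    // sometimes the right PIN, sometimes an arbitrary numeric string
    val enterGen: Gen[Command] =
      for (pin <- Gen.oneOf(Gen.const(correctPin), Gen.numStr)) yield EnterPin(pin)
    Gen.oneOf(enterGen, Gen.const(IsOpen: Command))
  }

  case class EnterPin(pin: String) extends UnitCommand {
    def run(sut: Sut): Unit = sut.enterPin(pin)
    def nextState(state: State): State = pin == correctPin
    def preCondition(state: State): Boolean = true
    def postCondition(state: State, success: Boolean): Prop = success
  }

  case object IsOpen extends Command {
    type Result = Boolean
    def run(sut: Sut): Result = sut.isOpen
    def nextState(state: State): State = state
    def preCondition(state: State): Boolean = true
    def postCondition(state: State, result: Try[Boolean]): Prop = result == Success(state)
  }
}

// Usage: new PinLockSpec("1234").property()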

Asynchronous Iterable over remote data

There is some data that I have pulled from a remote API, for which I use a Future-style interface. The data is structured as a linked list. A relevant example data container is shown below:
case class Data(information: Int) {
  def hasNext: Boolean = ??? // Implemented
  def next: Future[Data] = ??? // Implemented
}
Now I'm interested in adding some functionality to the data class, such as map, foreach, reduce, etc. To do so I want to implement some form of IterableLike, so that it inherits these methods.
Given below is the trait that Data may extend, such that it gets this property:
trait AsyncIterable[+T]
  extends IterableLike[Future[T], AsyncIterable[T]]
{
  def hasNext: Boolean
  def next: Future[T]

  // How to implement?
  override def iterator: Iterator[Future[T]] = ???
  override protected[this] def newBuilder: mutable.Builder[Future[T], AsyncIterable[T]] = ???
  override def seq: TraversableOnce[Future[T]] = ???
}
It should be a non-blocking implementation, which when acted on, starts requesting the next data from the remote data source.
It is then possible to do cool stuff such as
case class Data(information: Int) extends AsyncIterable[Data]

val data = Data(1) // And more, of course

// Asynchronously print all the information.
data.foreach(data => println(data.information))
It is also acceptable for the interface to be different. But the result should in some way represent asynchronous iteration over the collection. Preferably in a way that is familiar to developers, as it will be part of an (open source) library.
In production I would use one of the following:
Akka Streams
Reactive Extensions
For private tests I would implement something similar to the following (explanations are below).
I have modified your Data a little bit:
abstract class AsyncIterator[T] extends Iterator[Future[T]] {
  def hasNext: Boolean
  def next(): Future[T]
}
For it we can implement this Iterable:
class AsyncIterable[T](sourceIterator: AsyncIterator[T])
  extends IterableLike[Future[T], AsyncIterable[T]]
{
  private def stream(): Stream[Future[T]] =
    if (sourceIterator.hasNext) { sourceIterator.next #:: stream() } else { Stream.empty }

  val asStream = stream()

  override def iterator = asStream.iterator
  override def seq = asStream.seq
  override protected[this] def newBuilder = throw new UnsupportedOperationException()
}
And you can see it in action using the following code:
object Example extends App {
  val source = "Hello World!"

  val iterator1 = new DelayedIterator[Char](100L, source.toCharArray)
  new AsyncIterable(iterator1).foreach(_.foreach(print)) // prints 1 char per 100 ms

  pause(2000L)

  val iterator2 = new DelayedIterator[String](100L, source.toCharArray.map(_.toString))
  new AsyncIterable(iterator2).reduceLeft((fl: Future[String], fr) =>
    for (l <- fl; r <- fr) yield { println(s"$l+$r"); l + r }) // prints 1 line per 100 ms

  pause(2000L)

  def pause(duration: Long) = { println("->"); Thread.sleep(duration); println("\n<-") }
}

class DelayedIterator[T](delay: Long, data: Seq[T]) extends AsyncIterator[T] {
  private val dataIterator = data.iterator
  private var nextTime = System.currentTimeMillis() + delay

  override def hasNext = dataIterator.hasNext

  override def next = {
    val thisTime = math.max(System.currentTimeMillis(), nextTime)
    val thisValue = dataIterator.next()
    nextTime = thisTime + delay
    Future {
      val now = System.currentTimeMillis()
      if (thisTime > now) Thread.sleep(thisTime - now) // Your implementation will be better
      thisValue
    }
  }
}
Explanation
AsyncIterable uses Stream because it's calculated lazily and it's simple.
Pros:
simplicity
multiple calls to the iterator and seq methods return the same iterable with all items
Cons:
could lead to memory overflow, because the stream keeps all previously obtained values
the first value is fetched eagerly during creation of the AsyncIterable
DelayedIterator is a very simplistic implementation of AsyncIterator; don't blame me for quick and dirty code here.
It's still strange for me to see a synchronous hasNext next to an asynchronous next().
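One way to remove that asymmetry, as a sketch: fold the termination signal into next itself, so that finding out whether more data exists may also be asynchronous (the trait name is mine):

import scala.concurrent.Future

// Hypothetical fully-asynchronous variant: next() returns None to signal the
// end of the stream, so no synchronous hasNext is needed at all
trait FullyAsyncIterator[+T] {
  def next(): Future[Option[T]]
}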
Using Twitter's Spool I've implemented a working example.
To implement the spool I modified the example in the documentation:
import com.twitter.concurrent.Spool
import com.twitter.util.{Await, Return, Promise}
import scala.concurrent.{ExecutionContext, Future}

trait AsyncIterable[+T <: AsyncIterable[T]] { self: T =>
  def hasNext: Boolean
  def next: Future[T]

  def spool(implicit ec: ExecutionContext): Spool[T] = {
    def fill(currentPage: Future[T], rest: Promise[Spool[T]]) {
      currentPage foreach { cPage =>
        if (hasNext) {
          val nextSpool = new Promise[Spool[T]]
          rest() = Return(cPage *:: nextSpool)
          fill(next, nextSpool)
        } else {
          val emptySpool = new Promise[Spool[T]]
          emptySpool() = Return(Spool.empty[T])
          rest() = Return(cPage *:: emptySpool)
        }
      }
    }

    val rest = new Promise[Spool[T]]
    if (hasNext) {
      fill(next, rest)
    } else {
      rest() = Return(Spool.empty[T])
    }
    self *:: rest
  }
}
Data is the same as before, and now we can use it.
// Cool stuff
implicit val ec = scala.concurrent.ExecutionContext.global

val data = Data(1) // And others

// Print all the information asynchronously
val fut = data.spool.foreach(data => println(data.information))
Await.ready(fut)
It will throw an exception on the second element, because the implementation of next was not provided.
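For a self-contained run, a toy implementation can be substituted (purely illustrative; it ends the chain once information reaches 3):

case class Data(information: Int) extends AsyncIterable[Data] {
  def hasNext: Boolean = information < 3
  def next: Future[Data] = Future.successful(Data(information + 1))
}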