I'm having trouble mocking the MVar2 method take in my unit test (using Mockito).
I tried this:
private val sq = mock[MVar2[IO, String]]
and when I tried to mock the method like:
when(sq.take).thenReturn(
  "testString".pure[IO],
  IO.never
)
I get an infinite loop in the test.
I was positive I could mock it like this.
The actual code calling take is:
def run: F[Unit] =
  fs2.Stream
    .repeatEval[F, String](signalQ.take)
    .map(encode)
    .unNone
    .evalTap(logMessage)
    .evalMap(send)
    .compile
    .drain
I see two issues.
The first thing: there is no reason to mock IO here. You can pretty much do this:
// define this wherever you create sq (needs a ContextShift[IO] in scope)
val sq = MVar.empty[IO, String].unsafeRunSync()
// then DI sq and
sq.put("testString") >> run
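Alternatively, if you prefer to avoid unsafeRunSync in the test, the real MVar can be created inside the test's own IO program. A minimal sketch (the comments mark where your own wiring goes):
import cats.effect.IO
import cats.effect.concurrent.MVar

// Assumes an implicit ContextShift[IO] is in scope and that the component
// under test takes the MVar as its signalQ.
val testProgram: IO[Unit] =
  for {
    sq <- MVar.empty[IO, String] // a real MVar instead of a Mockito mock
    _  <- sq.put("testString")   // seed the value the stream should consume
    // ... inject sq as signalQ into the code under test, then exercise run
  } yield ()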
Then the second thing: how does .repeatEval(io) work? It evaluates io again every time the stream needs another element. So at first your mock would make io evaluate to:
"testString".pure[IO]
which computes the first element, "testString", and later io would evaluate to
IO.never
which means that the second element never appears, because this io never returns - neither a value nor an exception. So your stream hangs, just as you observe. (That is according to the spec, but apparently not what you want.)
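To see that per-element re-evaluation in isolation, here is a small self-contained sketch (assuming fs2 2.x with cats-effect 2, as in the code above), using a Ref as a counter:
import cats.effect.IO
import cats.effect.concurrent.Ref

// repeatEval runs the effect again for every element it emits,
// so a stateful effect produces a new value each time.
val firstThree: IO[List[Int]] =
  Ref.of[IO, Int](0).flatMap { counter =>
    fs2.Stream
      .repeatEval(counter.modify(n => (n + 1, n + 1))) // evaluated per element
      .take(3)
      .compile
      .toList // IO(List(1, 2, 3))
  }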
If what you wanted was to get Some(string) when a value is present and None when it's absent (so the stream terminates on None), you could try:
def run: F[Unit] =
  fs2.Stream
    .repeatEval[F, Option[String]](signalQ.tryTake) // <-- try to get None on empty
    .unNoneTerminate                                // <-- terminate on None
    .map(encode)
    .evalTap(logMessage)
    .evalMap(send)
    .compile
    .drain
I have a function that returns a fs2.Stream of Measurements.
import cats.effect._
import fs2._
def apply(sds: SerialPort, interval: Int)(implicit cs: ContextShift[IO]): Stream[IO, SdsMeasurement] =
  for {
    blocker <- Stream.resource(Blocker[IO])
    stream  <- io.readInputStream(IO(sds.getInputStream), 1, blocker)
                 .through(SdsStateMachine.collectMeasurements())
  } yield stream
Normally it is an infinite Stream, unless I pass it a test flag, in which case it should output one value and halt.
val infiniteSource: Stream[IO, SdsMeasurement] = ...
val source = if (isTest) infiniteSource.take(1) else infiniteSource
source.compile.drain
The infinite Stream works fine: it gives me measurements indefinitely. The test Stream indeed gives me only the first measurement, nothing more. The problem is that the Stream does not return after this last measurement; it blocks forever. What am I doing wrong?
Note: I think I abstracted the essential code, but for more context, please take a look at my project: https://github.com/jkransen/fijnstof/blob/ZIO/src/main/scala/nl/kransen/fijnstof/Main.scala
The code you've presented here looks fine; I don't think the issue lies within it. If it blocks, then presumably one of the underlying APIs blocks - for instance, it might be the close method of the InputStream. What I usually do in such situations is add log statements before and after every call that might block.
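For example, a small helper along these lines (the helper itself and the println-based logging are just an illustration) can bracket each suspect call with log lines; if the "after" line never shows up, that call is the one that hangs:
import cats.effect.IO
import cats.implicits._

// Wrap a possibly blocking effect with "before"/"after" log lines.
def logged[A](label: String)(fa: IO[A]): IO[A] =
  IO(println(s"before: $label")) *> fa.flatTap(_ => IO(println(s"after: $label")))

// e.g. around the InputStream acquisition:
// io.readInputStream(logged("getInputStream")(IO(sds.getInputStream)), 1, blocker)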
I am using Scanamo to query a DynamoDB table, and all I want to do is check that the table actually exists. I'm not really concerned with the record I get back, just that there were no errors. For the query part I'm using this:
trait DynamoTestTrait extends AbstractDynamoConfig {
  def test(): Future[List[Either[DynamoReadError, T]]] =
    ScanamoAsync.exec(client)(table.consistently.limit(1).scan())
}
That returns a Future of a List. I want to evaluate the first item in the list and just return true if it is not a read error.
I thought this would work but it doesn't:
val result = test() match {
  case r: DynamoReadError => Future.successful(false)
  case r: Registration => Future.successful(true)
}
I'm new to Scala, so I'm struggling with return types and such. This is a Play API call, so I need to evaluate that Boolean Future at some point, with something like this:
def health = Action {
  val isHealthy = h.testDynamo()
  val b: Boolean = Await.result(isHealthy, scala.concurrent.duration.Duration(5, "seconds"))
  Ok(Json.toJson(TestResponse(b.toString)))
}
I think this is probably wrong as well, since I don't want to use Await, but I can't get async to work either.
Sorry, I'm kind of lost.
When I try to evaluate result, I only get a message about the Future:
{
"status": 500,
"message": "Future(<not completed>) (of class scala.concurrent.impl.Promise$DefaultPromise)"
}
The result is a Future, so you can't test it without doing something like Await.result (as you do later). What you can do is transform the value inside the Future into the result you need.
In your case you can do this:
test().map(_.headOption.forall(_.isRight))
This will return Future[Boolean] which you can then use in your Await.result call.
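Put together with your existing action, that could look roughly like this (a sketch: TestResponse, Json and test() are the pieces from your question; an implicit ExecutionContext for map is assumed to be in scope):
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

def health = Action {
  // Future[Boolean]: true if the first result (if any) is not a read error
  val isHealthy: Future[Boolean] = test().map(_.headOption.forall(_.isRight))
  val b: Boolean = Await.result(isHealthy, 5.seconds)
  Ok(Json.toJson(TestResponse(b.toString)))
}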
Here is how it works:
map applies a function to the result of the Future, which has type List[Either[DynamoReadError, T]], and returns a new Future that holds the result of that function call.
_.headOption takes the head of the list and returns an Option[Either[DynamoReadError, T]]. This is Some(...) if there are one or more elements in the list, or None if the list is empty.
forall checks the contents of the Option and returns the result of the test on that value. If the Option is None, it returns true.
_.isRight tests the value Either[...] and returns true if the value is Right[...] and false if it is Left[...].
This does what you specified, but perhaps it would be better to check if any of the results failed, rather than just the first one? If so, it is actually a bit simpler:
test().map(_.forall(_.isRight))
This checks that all the entries in the List are Right, returning false as soon as a Left is found.
The problem with returning this from Play is a separate issue and should probably be in a separate question.
I'm trying to make an asynchronous call using scalaz Task.
For some strange reason, whenever I attempt to call methodA, I get a scalaz.concurrent.Task back but never see the second print statement, which leads me to believe that the asynchronous call never returns. Is there something I'm missing that is required in order to start the task?
for {
  foo <- println("we are in this for loop!").asAsyncSuccess()
  authenticatedUser <- methodA(parameterA)
  foo2 <- println("we are out of this this call!").asAsyncSuccess()
} yield {
  authenticatedUser
}
def methodA(parameterA: SomeType): Task[User]
First off, Scalaz Task is lazy: it won't start when you create it or get it from some method. That's by design, and it allows Tasks to compose well. A Task is not intended to start until you have assembled your entire program into a single main Task; when you're done, you manually start that aggregated Task using one of the perform methods, such as unsafePerformSync, which waits on the current thread for the result of the aggregate asynchronous computation:
(for {
  foo <- println("we are in this for loop!").asAsyncSuccess()
  authenticatedUser <- methodA(parameterA)
  foo2 <- println("we are out of this this call!").asAsyncSuccess()
} yield {
  authenticatedUser
}).unsafePerformSync
As you can see from your original code, you never started any Task. However, you mentioned that the first println printed its message to the console. This may be related to the asAsyncSuccess method: I haven't found it in Scalaz, so I assume it's defined in an implicit class somewhere in your code:
implicit class TaskOps[A](val a: A) extends AnyVal {
  def asAsyncSuccess(): Task[A] = Task(a)
}
If you write a helper implicit class like this, it won't convert an effectful expression into a lazy Task action, despite the fact that Task.apply is call-by-name. It will only convert the expression's result, which here happens to be just Unit, because its own parameter a is not call-by-name. To work as intended, it needs to look like the following:
implicit class TaskOps[A](a: => A) {
  def asAsyncSuccess(): Task[A] = Task(a)
}
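A quick way to convince yourself of the difference (a sketch, assuming the by-name TaskOps above is in scope):
// With the by-name version nothing runs at construction time:
val t = println("side effect").asAsyncSuccess() // prints nothing yet
t.unsafePerformSync                             // prints "side effect" now
// With the original by-value version, the message would already have
// printed on the first line, before the Task was ever run.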
You may also ask why the first println prints something while the second doesn't. This is related to how for-comprehensions desugar into method calls. Specifically, the compiler turns your code into this:
println("we are in this for loop!").asAsyncSuccess().flatMap(foo =>
  methodA(parameterA).flatMap(authenticatedUser =>
    println("we are out of this this call!").asAsyncSuccess().map(foo2 =>
      authenticatedUser)))
If asAsyncSuccess doesn't respect the intended Task semantics, the first println gets executed immediately. Its result is then wrapped into a Task that is, essentially, just a wrapper over a start function, and the flatMap implementation on it combines such functions together without running them. So methodA and the second println won't be executed until you call unsafePerformSync or another perform helper method. Note, however, that the second println will still be executed earlier than it should be according to Task semantics, just not that much earlier.
I'm attempting to write a method which accepts multiple generic types and takes as an argument a unit of work to execute.
The idea is that the unit of work is a common function that itself is generic. For the sake of example, let's say it's something like the following:
def loadModelRdd[T: TypeTag](sc: SparkContext): RDD[T] = {
...
}
loadModelRdd() will construct an RDD of the given type after some internal processing like loading the Model information, etc.
A prototype method I've been hacking on looks something like the following (non-working):
def forkAll[A : Manifest, B : Manifest](work: => RDD[_]): (RDD[A], RDD[B]) = {
  def aFuture = Future { work } // How can I notify that this work call returns type A?
  def bFuture = Future { work } // How can I notify that this work call returns type B?
  val res = for {
    a <- aFuture
    b <- bFuture
  } yield (a.asInstanceOf[A], b.asInstanceOf[B])
  Await.result(res, 10.seconds)
}
This is a shortened version of the code I'm working on as I'm actually looking at accepting as many as 10 different types.
As you can see, the overall goal of the forkAll method is to wrap the unit of work in a Future, fork-join the execution of the unit of work for each type, then return the results as a tuple. An example consumer statement would be:
val (a, b) = forkAll[ClassA, ClassB](loadModelRdd)
i.e. I want to fork-join at this point and wait for the results, but I want the executions to run in parallel and then be collected back to the driver (the Spark driver, to be specific).
The problem is I'm not sure how to coerce the type returned by the unit of work within forkAll when constructing the Future {} blocks. Without the forkAll, the implementation looked like the following:
val resA = loadModelRdd[ClassA](sc)
val resB = loadModelRdd[ClassB](sc)
...
I am looking at doing this for two reasons:
To abstract the details of fork-join for any unit of work which matches this model.
A version of this code, which explicitly states what the unit of work is, is working in production and was responsible for cutting the execution time of a long-running block by close to half. I have a couple of execution steps where this pattern could be applied.
Is this something that is possible in Scala's type system? Or should I look at this problem from a different perspective? I've tried a couple of implementations (including one described here), but I haven't quite found one that fits my current view of the problem.
Please let me know if there is any additional information needed.
Thanks!
Short answer: Scala does not allow function values with type parameters, so what you want is not exactly possible.
You are attempting to pass a method with a type parameter. Although methods are allowed to have type parameters, function values are not. When you pass a method where a function is expected, it is converted to an anonymous function, so its type parameter must be fixed at that point.
However, since methods do allow type parameters, you can take advantage of this by creating an abstract class that will do your fork/join:
abstract class ForkJoin {
  protected def work[T]: RDD[T]

  def apply[A, B]: (RDD[A], RDD[B]) = {
    // Write implementation of fork/join here
    (work[A], work[B])
  }
}
then overriding the generic work method so that it does what you want, such as calling some other predefined method:
val forkJoin = new ForkJoin {
  override protected def work[T]: RDD[T] =
    loadModelRdd[T](sc)
}
val (intRdd, stringRdd) = forkJoin[Int, String]
Check out this for a prototype implementation that compiles and runs without issues.
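For completeness, here is a sketch of one way the apply body could implement the fork/join, reusing the Future approach from the question (the global ExecutionContext and the 10-second timeout are assumptions):
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import org.apache.spark.rdd.RDD

abstract class ForkJoin {
  protected def work[T]: RDD[T]

  def apply[A, B]: (RDD[A], RDD[B]) = {
    // start both units of work in parallel, then wait for both results
    val aFuture = Future(work[A])
    val bFuture = Future(work[B])
    val res = for {
      a <- aFuture
      b <- bFuture
    } yield (a, b)
    Await.result(res, 10.seconds)
  }
}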
I am using the getOrElseUpdate method of scala.collection.concurrent.TrieMap (from 2.11.6)
// simplified for clarity
val trie = new TrieMap[Int, Future[String]]
def foo(): String = ... // a very long process
val fut: Future[String] = trie.getOrElseUpdate(id, Future(foo()))
As I understand it, if I invoke getOrElseUpdate from multiple threads without any synchronization, foo is invoked just once. Is that correct?
In the current implementation it will be invoked zero or one times per call. It may, however, be invoked without its result being inserted: another thread's value can win the race. (This is standard behavior for CAS-based maps, as opposed to ones that use synchronized.)
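A small sketch of what that means in practice for the code above (the AtomicInteger is just instrumentation for illustration):
import java.util.concurrent.atomic.AtomicInteger
import scala.collection.concurrent.TrieMap
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val trie  = TrieMap.empty[Int, Future[String]]
val built = new AtomicInteger(0)

// Each call evaluates the by-name argument at most once, but when calls
// race on the same key only one resulting Future is stored; the losers'
// evaluations are discarded.
def get(id: Int): Future[String] =
  trie.getOrElseUpdate(id, { built.incrementAndGet(); Future(foo()) })

// After many concurrent get(42) calls, trie(42) is one shared Future,
// yet `built` (and hence foo) may have run more than once in total.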