How to extend the TestEnvironment of a ZIO Test - scala

I want to test the following function:
def curl(host: String, attempt: Int = 200): ZIO[Loggings with Clock, Throwable, Unit]
If the function used only standard ZIO environments, like Console with Clock, the test would work out of the box:
testM("curl on valid URL") {
  (for {
    r <- composer.curl("https://google.com")
  } yield assert(r, isUnit))
}
The Test environment would be provided by zio-test.
So the question is, how to extend the TestEnvironment with my Loggings module?

Note that this answer is for RC17 and will change significantly in RC18. You're right that, as in other cases of composing environments, we need to implement a function to build our total environment from the modules we have. Spec has several combinators built in, such as provideManaged, to do this, so you don't need to do it within the test itself. All of these have "normal" variants that provide a separate copy of the environment to each test in a suite, and "shared" variants that create one copy of the environment for the entire suite, which is useful when the environment is a resource that is expensive to create, like a Kafka service.
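For illustration, here is a minimal sketch of that difference at the suite level. This assumes the shared method is spelled provideSomeManagedShared in this RC (check your zio-test version), and expensiveKafkaEnvironment is a hypothetical managed value:

// Hypothetical managed resource that is expensive to build (e.g. a Kafka service):
// val expensiveKafkaEnvironment: ZManaged[TestEnvironment, Nothing, TestEnvironment with Kafka] = ...

suite("KafkaSuite")(
  testM("producer works")(???),
  testM("consumer works")(???)
).provideSomeManagedShared(expensiveKafkaEnvironment) // built once, reused by every test in the suite

// .provideSomeManaged(expensiveKafkaEnvironment) would instead rebuild it for each test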
You can see an example below of using provideSomeManaged to provide an environment that extends the test environment to a test.
In RC18 there will be a variety of other provide variants equivalent to those on ZIO, as well as a new concept of layers to make it much easier to build composed environments for ZIO applications; a rough sketch of that layer style follows the example below.
import zio._
import zio.clock._
import zio.test._
import zio.test.environment._

import ExampleSpecUtil._

object ExampleSpec
    extends DefaultRunnableSpec(
      suite("ExampleSpec")(
        testM("My Test") {
          for {
            time <- clock.nanoTime
            _    <- Logging.logLine(
                      s"The TestClock says the current time is $time"
                    )
          } yield assertCompletes
        }
      ).provideSomeManaged(testClockWithLogging)
    )

object ExampleSpecUtil {

  trait Logging {
    def logging: Logging.Service
  }

  object Logging {

    trait Service {
      def logLine(line: String): UIO[Unit]
    }

    object Live extends Logging {
      val logging: Logging.Service =
        new Logging.Service {
          def logLine(line: String): UIO[Unit] =
            UIO(println(line))
        }
    }

    def logLine(line: String): URIO[Logging, Unit] =
      URIO.accessM(_.logging.logLine(line))
  }

  val testClockWithLogging: ZManaged[TestEnvironment, Nothing, TestClock with Logging] =
    ZIO
      .access[TestEnvironment] { testEnvironment =>
        new TestClock with Logging {
          val clock     = testEnvironment.clock
          val logging   = Logging.Live.logging
          val scheduler = testEnvironment.scheduler
        }
      }
      .toManaged_
}
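For comparison, here is a rough sketch of the RC18/ZIO 1.0 layer style mentioned above. The names used here (Has, ZLayer, ULayer, provideCustomLayer) come from the later API, so treat this as an assumption about how the module will look rather than something that compiles against RC17:

import zio._

object LoggingModule {
  type Logging = Has[Logging.Service]

  object Logging {
    trait Service {
      def logLine(line: String): UIO[Unit]
    }

    // A layer that constructs the live service once; tests (or the app) plug it in.
    val live: ULayer[Logging] =
      ZLayer.succeed(new Service {
        def logLine(line: String): UIO[Unit] = UIO(println(line))
      })

    def logLine(line: String): URIO[Logging, Unit] =
      ZIO.accessM[Logging](_.get.logLine(line))
  }
}

// In the spec, provideCustomLayer keeps the TestEnvironment and adds Logging on top:
// suite("ExampleSpec")( /* tests */ ).provideCustomLayer(LoggingModule.Logging.live)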

This is what I came up with:
testM("curl on valid URL") {
  (for {
    r <- composer.curl("https://google.com")
  } yield assert(r, isUnit))
    .provideSome[TestEnvironment](env =>
      new Loggings.ConsoleLogger with TestClock {
        override val clock: TestClock.Service[Any] = env.clock
        override val scheduler: TestClock.Service[Any] = env.scheduler
        override val console: TestLogger.Service[Any] = MyLogger()
      }
    )
}
Using the TestEnvironment with provideSome to set up my environment.

Related

Starting two Scala Finagle ListeningServers at once

I need to start two Finagle ListeningServers at once because I have to implement two different traits that extend ListeningServer.
/* A simplified example to give you an idea of what I'm trying to do */
trait FirstListeningServer extends ListeningServer {

  def buildFirstServer() = ???

  def main(): Unit = {
    val server = buildFirstServer()
    closeOnExit(server)
    Await.ready(server)
  }
}

trait SecondListeningServer extends ListeningServer {

  def buildSecondServer() = ???

  def main(): Unit = {
    val server = buildSecondServer()
    closeOnExit(server)
    Await.ready(server)
  }
}
Basically, each ListeningServer is a com.twitter.util.Awaitable and whenever I have to instantiate a new ListeningServer I use Await.ready(myListeningServer).
class MyServer extends FirstListeningServer with SecondListeningServer {

  override def main(): Unit = {
    val firstServer = buildFirstServer()
    closeOnExit(firstServer)

    val secondServer = buildSecondServer()
    closeOnExit(secondServer)

    Await.all(firstServer, secondServer)
  }
}
Now I'm not sure if using Await.all() is the right choice in order to start several ListeningServers concurrently. I would have used com.twitter.util.Future.collect() but I have two Awaitables.
def all(awaitables: Awaitable[_]*): Unit
Returns after all actions have completed.
I'm using Scala 2.12 and Twitter 20.3.0.
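For reference, a minimal sketch of what starting two servers and blocking on both with Await.all can look like, using two hypothetical HTTP endpoints rather than the FirstListeningServer/SecondListeningServer traits above:

import com.twitter.finagle.{Http, Service}
import com.twitter.finagle.http.{Request, Response}
import com.twitter.util.{Await, Future}

object TwoServers extends App {
  // A trivial service, standing in for whatever the real servers expose.
  val service: Service[Request, Response] =
    Service.mk(_ => Future.value(Response()))

  val firstServer  = Http.server.serve(":8080", service)
  val secondServer = Http.server.serve(":8081", service)

  // Await.all returns only once every Awaitable has completed, i.e. once both
  // servers have been closed, so the process stays alive while they are running.
  Await.all(firstServer, secondServer)
}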

Integration tests for Http4s Client/Resource

I'm implementing a Vault client in Scala using Http4s client.
And I'm now starting to write integration tests. So far I have this:
abstract class Utils extends AsyncWordSpec with Matchers {
  implicit override def executionContext = ExecutionContext.global
  implicit val timer: Timer[IO] = IO.timer(executionContext)
  implicit val cs: ContextShift[IO] = IO.contextShift(executionContext)

  val vaultUri = Uri.unsafeFromString(
    Properties.envOrElse("VAULT_ADDR", throw new IllegalArgumentException))
  val vaultToken =
    Properties.envOrElse("VAULT_TOKEN", throw new IllegalArgumentException)

  val clientResource = BlazeClientBuilder[IO](global)
    .withCheckEndpointAuthentication(false)
    .resource

  def usingClient[T](f: VaultClient[IO] => IO[Assertion]): Future[Assertion] = {
    clientResource.use { implicit client =>
      f(new VaultClient[IO](vaultUri, vaultToken))
    }.unsafeToFuture()
  }
}
Then my tests look like this (just showing one test):
class SysSpec extends Utils {
  "The auth endpoint" should {
    "successfully mount an authentication method" in {
      usingClient { client =>
        for {
          result <- client.sys.auth.create("test", AuthMethod(
            "approle", "some nice description",
            config = TuneOptions(defaultLeaseTtl = 60.minutes)
          ))
        } yield result should be (())
      }
    }
  }
}
This approach works; however, it doesn't feel right. For each test I'm opening the connection (clientResource.use) and recreating the VaultClient.
Is there a way for me to reuse the same connection and client for all the tests in SysSpec?
Please note these are integration tests and not unit tests.
This is the best I could come up with.
abstract class Utils extends AsyncWordSpec with Matchers with BeforeAndAfterAll {
  implicit override def executionContext = ExecutionContext.global
  implicit val timer: Timer[IO] = IO.timer(executionContext)
  implicit val cs: ContextShift[IO] = IO.contextShift(executionContext)

  val (httpClient, finalizer) = BlazeClientBuilder[IO](global)
    .withCheckEndpointAuthentication(false)
    .resource.allocated.unsafeRunSync()

  override protected def afterAll(): Unit = finalizer.unsafeRunSync()

  private implicit val c = httpClient
  val client = new VaultClient[IO](uri"http://[::1]:8200", "the root token fetched from somewhere")
}
Then the tests just use the client directly:
class SysSpec extends Utils {
  "The auth endpoint" should {
    "successfully mount an authentication method" in {
      client.sys.auth.create("test", AuthMethod(
        "approle", "some nice description",
        config = TuneOptions(defaultLeaseTtl = 60.minutes))
      ).map(_ shouldBe ()).unsafeToFuture()
    }
  }
}
My two main problems with this approach are the two unsafeRunSync calls in the code. The first one creates the client and the second one cleans up the resource. However, it is a much better approach than repeatedly creating and destroying the client.
I would also like not to use unsafeToFuture, but that would require ScalaTest to support cats-effect directly.
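If pulling in a small test bridge is acceptable, the cats-effect-testing-scalatest module can hide the unsafeToFuture call. The sketch below is an assumption about that setup (the artifact coordinates and the asserting syntax depend on your cats-effect and ScalaTest versions), not something from the original answer:

// Rough sketch, assuming "com.codecommit" %% "cats-effect-testing-scalatest"
// (the cats-effect 2 line) is on the test classpath.
import cats.effect.IO
import cats.effect.testing.scalatest.AsyncIOSpec
import org.scalatest.matchers.should.Matchers
import org.scalatest.wordspec.AsyncWordSpec

class SysSpecIO extends AsyncWordSpec with AsyncIOSpec with Matchers {
  "The auth endpoint" should {
    "successfully mount an authentication method" in {
      // `asserting` runs the IO and converts the assertion into the Future
      // that ScalaTest expects, so no explicit unsafeToFuture is needed.
      IO.pure(()).asserting(_ shouldBe (()))
    }
  }
}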

Play 2.1 Unit Test With Slick and Postgres

I want to run unit tests for a Play 2 Scala app using the same database setup as used in production: Slick with Postgres. The following fails with "java.sql.SQLException: Attempting to obtain a connection from a pool that has already been shutdown." on the 2nd test.
package controllers

import org.specs2.mutable._
import play.api.db.DB
import play.api.Play.current
import play.api.test._
import play.api.test.Helpers._
import scala.slick.driver.PostgresDriver.simple._

class BogusTest extends Specification {

  def postgresDatabase(name: String = "default",
                       options: Map[String, String] = Map.empty): Map[String, String] =
    Map(
      "db.test.driver" -> "org.postgresql.Driver",
      "db.test.user" -> "postgres",
      "db.test.password" -> "blah",
      "db.test.url" -> "jdbc:postgresql://localhost/blah"
    )

  def fakeApp[T](block: => T): T =
    running(FakeApplication(additionalConfiguration =
      postgresDatabase("test") ++ Map("evolutionplugin" -> "disabled"))) {
      def database = Database.forDataSource(DB.getDataSource("test"))
      database.withSession { implicit s: Session => block }
    }

  "Fire 1" should {
    "do something" in fakeApp {
      success
    }
  }

  "Fire 2" should {
    "do something else" in fakeApp {
      success
    }
  }
}
I run the test like this:
$ play -Dconfig.file=`pwd`/conf/dev.conf "test-only controllers.BogusTest"
Two other mysteries:
1) All tests run, even though I ask for just BogusTest to run.
2) application.conf is always used, not dev.conf, and the driver information comes from application.conf, not from the configuration set up in the code.
This is a tentative answer, as I have currently tested only on Play 2.2.0, using a MySQL database, and I can't reproduce your bug.
I feel there might be a very tricky bug in your code. First of all, if you explore the DBPlugin implementation provided by Play, BoneCPPlugin:
/**
 * Closes all data sources.
 */
override def onStop() {
  dbApi.datasources.foreach {
    case (ds, _) => try {
      dbApi.shutdownPool(ds)
    } catch { case NonFatal(_) => }
  }
  val drivers = DriverManager.getDrivers()
  while (drivers.hasMoreElements) {
    val driver = drivers.nextElement
    DriverManager.deregisterDriver(driver)
  }
}
You see that the onStop() method closes the connection pool. So it's clear: you are providing the second test example with an application that has already been stopped (and therefore its plugins are stopped and the db connection pool closed).
ScalaTest and specs2 run tests in parallel, and you can rely on the running test helper because it's thread-safe:
def running[T](fakeApp: FakeApplication)(block: => T): T = {
  synchronized {
    try {
      Play.start(fakeApp)
      block
    } finally {
      Play.stop()
      play.api.libs.ws.WS.resetClient()
    }
  }
}
However, when you do
DB.getDataSource("test")
From the source code of Play:
def getDataSource(name: String = "default")(implicit app: Application): DataSource = app.plugin[DBPlugin].map(_.api.getDataSource(name)).getOrElse(error)
So there is an implicit here, which does not get resolved to your FakeApplication (it is not an implicit in scope!) but to Play.current, and it appears that in the second test this is not what you were expecting it to be: Play.current still points to the previous instance of FakeApplication. It probably depends on how implicits are captured in closures.
If, however, you refactor the fakeApp method, you can ensure the application you just created is used to resolve the implicit (you can always pass the value for an implicit parameter explicitly):
def fakeApp[T](block: => T): T = {
  val fakeApplication = FakeApplication(additionalConfiguration =
    postgresDatabase("test") ++ Map("evolutionplugin" -> "disabled"))
  running(fakeApplication) {
    def database = Database.forDataSource(DB.getDataSource("test")(fakeApplication))
    database.withSession { implicit s: Session => block }
  }
}

Kill or timeout a Future in Scala 2.10

Hi,
I'm using Scala 2.10 with the new futures library and I'm trying to write some code to test an infinite loop. I use a scala.concurrent.Future to run the code with the loop in a separate thread. I would then like to wait a little while to do some testing and then kill off the separate thread/future. I have looked at Await.result but that doesn't actually kill the future. Is there any way to timeout or kill the new Scala 2.10 futures?
I would prefer not having to add external dependencies such as Akka just for this simple part.
Do not try it at home.
import scala.concurrent._
import scala.concurrent.duration._

class MyCustomExecutionContext extends AnyRef with ExecutionContext {
  import ExecutionContext.Implicits.global

  @volatile var lastThread: Option[Thread] = None

  override def execute(runnable: Runnable): Unit = {
    ExecutionContext.Implicits.global.execute(new Runnable() {
      override def run() {
        lastThread = Some(Thread.currentThread)
        runnable.run()
      }
    })
  }

  override def reportFailure(t: Throwable): Unit = ???
}

implicit val exec = new MyCustomExecutionContext()
val f = future[Int] { do {} while (true); 1 }

try {
  Await.result(f, 10 seconds) // 100% cpu here
} catch {
  case e: TimeoutException =>
    println("Stopping...")
    exec.lastThread.getOrElse(throw new RuntimeException("Not started"))
      .stop() // 0% cpu here
}
No - you will have to add a flag that your loop checks. If the flag is set, stop the loop. Make sure the flag is at least volatile.
See Java Concurrency in Practice, p 135-137.
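A minimal sketch of that flag-based approach, assuming the loop body can be edited to check the flag (the names here are illustrative):

import java.util.concurrent.atomic.AtomicBoolean
import scala.concurrent._
import scala.concurrent.ExecutionContext.Implicits.global

// AtomicBoolean gives the volatile visibility the loop needs.
val keepRunning = new AtomicBoolean(true)

val loop: Future[Unit] = Future {
  while (keepRunning.get()) {
    // ... body of the "infinite" loop under test ...
  }
}

// Later, from the test thread: the loop observes the flag and exits on its own.
keepRunning.set(false)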
I had a similar problem and wrote the following nonblocking future op:
class TerminationToken(var isTerminated: Boolean)

object TerminationToken { def apply() = new TerminationToken(false) }

implicit class FutureOps[T](future: Future[Option[T]]) {
  // Note: `after` comes from akka.pattern, and `context.system.scheduler`
  // assumes this is defined inside an actor (or that a scheduler is in scope).
  def terminate(timeout: FiniteDuration, token: TerminationToken): Future[Option[T]] = {
    val timeoutFuture = after[Option[T]](timeout, using = context.system.scheduler) {
      Future[Option[T]] { token.isTerminated = true; None }
    }
    Future.firstCompletedOf[Option[T]](Seq(future recover { case _ => None }, timeoutFuture))
  }
}
Then just create a future that returns an option, and use .terminate(timeout, token) on it.
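A hedged usage sketch of that helper, where longRunning is a hypothetical computation whose loop checks the token:

// Requires an Akka scheduler in scope for the `after` call inside `terminate`.
val token = TerminationToken()

val work: Future[Option[Int]] =
  Future(Some(longRunning(token))) // longRunning polls token.isTerminated and stops itself

val bounded: Future[Option[Int]] =
  work.terminate(timeout = 5.seconds, token = token)

// If the timeout fires first, `bounded` completes with None and the token is
// marked terminated, signalling the computation to wind down.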

Is it OK to use blocking actor messages when they are wrapped in a future?

My current application is based on akka 1.1. It has multiple ProjectAnalysisActors, each responsible for handling analysis tasks for a specific project. The analysis is started when such an actor receives a generic start message. After finishing one step it sends itself a message with the next step, as long as one is defined. The executing code basically looks as follows:
sealed trait AnalysisEvent {
  def run(project: Project): Future[Any]
  def nextStep: AnalysisEvent = null
}

case class StartAnalysis() extends AnalysisEvent {
  override def run ...
  override def nextStep: AnalysisEvent = new FirstStep
}

case class FirstStep() extends AnalysisEvent {
  override def run ...
  override def nextStep: AnalysisEvent = new SecondStep
}

case class SecondStep() extends AnalysisEvent {
  ...
}

class ProjectAnalysisActor(project: Project) extends Actor {
  def receive = {
    case event: AnalysisEvent =>
      val future = event.run(project)
      future.onComplete { f =>
        self ! event.nextStep
      }
  }
}
I have some difficulties how to implement my code for the run-methods for each analysis step. At the moment I create a new future within each run-method. Inside this future I send all follow-up messages into the different subsystems. Some of them are non-blocking fire-and-forget messages, but some of them return a result which should be stored before the next analysis step is started.
At the moment a typical run-method looks as follows
def run(project: Project): Future[Any] = {
  Future {
    progressActor ! typicalFireAndForget(project.name)
    val calcResult = (calcActor1 !! doCalcMessage(project)).getOrElse(...)
    val p: Project = ... // created updated project using calcResult
    val result = (storage !! updateProjectInformation(p)).getOrElse(...)
    result
  }
}
Since those blocking messages should be avoided, I'm wondering if this is the right way. Does it make sense to use them in this use case or should I still avoid it? If so, what would be a proper solution?
Apparently the only purpose of the ProjectAnalysisActor is to chain future calls. Second, the run methods also seem to wait on results to continue the computation.
So I think you can try refactoring your code to use Future Composition, as explained here: http://akka.io/docs/akka/1.1/scala/futures.html
def run(project: Project): Future[Any] = {
  progressActor ! typicalFireAndForget(project.name)
  for (
    calcResult <- calcActor1 !!! doCalcMessage(project);
    p = ... // created updated project using calcResult
    result <- storage !!! updateProjectInformation(p)
  ) yield (
    result
  )
}