What functional technique avoids passing configuration through every function? - Scala

As I'm delving deeper into FP, I'm curious about the 'best' way to store settings that are loaded from config files. So far I've been creating a case class with all the necessary config variables, populating it on app start, and then passing that case class into whatever function needs information from it.
However, this gets quite annoying, especially when the settings case class has to propagate through many functions. Is there a better way to do this?

The Reader monad provides a way of propagating configuration without having to pass it as a parameter to every function that needs it. Contrast the following two implementations:
Config available from the context via Reader[Config, String]:
object ConfigFunctional extends App {
  case class Config(username: String, password: String, host: String)

  def encodeCredentials: Reader[Config, String] = Reader { config =>
    Base64.getEncoder.encodeToString(s"${config.username}:${config.password}".getBytes())
  }

  def basicAuth(credentials: String): Reader[Config, String] = Reader { config =>
    Http(s"${config.host}/HTTP/Basic/")
      .header("Authorization", s"Basic $credentials")
      .asString
      .body
  }

  def validateResponse(body: String): Reader[Config, Either[String, String]] = Reader { _ =>
    if (body.contains("Your browser made it"))
      Right("Credentials are valid!")
    else
      Left("Wrong credentials")
  }

  def program: Reader[Config, Either[String, String]] = for {
    credentials <- encodeCredentials
    response    <- basicAuth(credentials)
    validation  <- validateResponse(response)
  } yield validation

  val config = Config("guest", "guest", "https://jigsaw.w3.org")
  println(program.run(config))
}
Config passed in as an argument:
object ConfigImperative extends App {
  case class Config(username: String, password: String, host: String)

  def encodeCredentials(config: Config): String = {
    Base64.getEncoder.encodeToString(s"${config.username}:${config.password}".getBytes())
  }

  def basicAuth(credentials: String, config: Config): String = {
    Http(s"${config.host}/HTTP/Basic/")
      .header("Authorization", s"Basic $credentials")
      .asString
      .body
  }

  def validateResponse(body: String): Either[String, String] = {
    if (body.contains("Your browser made it"))
      Right("Credentials are valid!")
    else
      Left("Wrong credentials")
  }

  def program(config: Config): Either[String, String] = {
    val credentials = encodeCredentials(config)
    val response = basicAuth(credentials, config)
    val validation = validateResponse(response)
    validation
  }

  val config = Config("guest", "guest", "https://jigsaw.w3.org")
  println(program(config))
}
Both implementations should output Right(Credentials are valid!). Notice, however, that in the first implementation config: Config is not a method parameter. For example, contrast the two versions of encodeCredentials:
def encodeCredentials: Reader[Config, String]
def encodeCredentials(config: Config): String
Config appears in the return type instead of being a parameter. We can read this as:
"When encodeCredentials runs in a context that provides a
Config, it will produce a String result."
The "context" here is represented by the Reader monad.
Furthermore, notice that Config is not a parameter even in the main business logic:
def program: Reader[Config, Either[String, String]] = for {
  credentials <- encodeCredentials
  response    <- basicAuth(credentials)
  validation  <- validateResponse(response)
} yield validation
We let the methods evaluate in a context containing the Config via the run function:
program.run(config)
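To make the mechanics concrete, here is a minimal hand-rolled reader (a sketch for illustration only; the real cats.data.Reader is a specialisation of Kleisli) showing how flatMap threads the same Config through every step:
// Illustration only: a stripped-down Reader showing how the environment is threaded.
final case class SimpleReader[C, A](run: C => A) {
  def map[B](f: A => B): SimpleReader[C, B] =
    SimpleReader(c => f(run(c)))

  def flatMap[B](f: A => SimpleReader[C, B]): SimpleReader[C, B] =
    SimpleReader(c => f(run(c)).run(c)) // the same C is handed to both computations
}
Because flatMap re-supplies the same environment to each step, none of the intermediate functions needs to mention Config explicitly; only run(config) at the edge of the program provides it.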
To run the above examples we need the following sbt settings and dependencies:
scalacOptions += "-Ypartial-unification",
libraryDependencies ++= Seq(
"org.typelevel" %% "cats-core" % "1.6.0",
"org.scalaj" %% "scalaj-http" % "2.4.1"
)
and the following imports:
import cats.data.Reader
import java.util.Base64
import scalaj.http.Http

Related

What is Scala 3 equivalent to this Scala 2 code that uses Enumeration and play-json?

I have some code that works in Scala 2.{10,11,12,13} that I'm now trying to convert to Scala 3. Scala 3 handles enumerations differently from Scala 2. I'm trying to figure out how to convert the following code, which interacts with play-json, so that it will work with Scala 3. Any tips or pointers to code from projects that have already crossed this bridge?
// Scala 2.x style code in EnumUtils.scala
import play.api.libs.json._
import scala.language.implicitConversions

// see: http://perevillega.com/enums-to-json-in-scala
object EnumUtils {
  def enumReads[E <: Enumeration](enum: E): Reads[E#Value] =
    new Reads[E#Value] {
      def reads(json: JsValue): JsResult[E#Value] = json match {
        case JsString(s) => {
          try {
            JsSuccess(enum.withName(s))
          } catch {
            case _: NoSuchElementException =>
              JsError(s"Enumeration expected of type: '${enum.getClass}', but it does not appear to contain the value: '$s'")
          }
        }
        case _ => JsError("String value expected")
      }
    }

  implicit def enumWrites[E <: Enumeration]: Writes[E#Value] = new Writes[E#Value] {
    def writes(v: E#Value): JsValue = JsString(v.toString)
  }

  implicit def enumFormat[E <: Enumeration](enum: E): Format[E#Value] = {
    Format(EnumUtils.enumReads(enum), EnumUtils.enumWrites)
  }
}
// ----------------------------------------------------------------------------------
// Scala 2.x style code in Xyz.scala
import play.api.libs.json.{Reads, Writes}

object Xyz extends Enumeration {
  type Xyz = Value
  val name, link, unknown = Value
  implicit val enumReads: Reads[Xyz] = EnumUtils.enumReads(Xyz)
  implicit def enumWrites: Writes[Xyz] = EnumUtils.enumWrites
}
As an option you can switch to jsoniter-scala.
It supports enums for both Scala 2 and Scala 3 out of the box, and it provides handy derivation of safe and efficient JSON codecs.
You just need to add the required libraries to your dependencies:
libraryDependencies ++= Seq(
  // Use the %%% operator instead of %% for Scala.js and Scala Native
  "com.github.plokhotnyuk.jsoniter-scala" %% "jsoniter-scala-core" % "2.13.5",
  // Use the "provided" scope instead when the "compile-internal" scope is not supported
  "com.github.plokhotnyuk.jsoniter-scala" %% "jsoniter-scala-macros" % "2.13.5" % "compile-internal"
)
And then derive a codec and use it:
import com.github.plokhotnyuk.jsoniter_scala.core._
import com.github.plokhotnyuk.jsoniter_scala.macros._
implicit val codec: JsonValueCodec[Xyz.Xyz] = JsonCodecMaker.make
println(readFromString[Xyz.Xyz]("\"name\""))
BTW, you can run the full code on Scastie: https://scastie.scala-lang.org/Evj718q6TcCZow9lRhKaPw
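If you would rather stay on play-json, a minimal sketch of one possible Scala 3 approach is below. It replaces the Enumeration with a Scala 3 enum and writes the Reads/Writes by hand; it is not taken from the original code, so treat it as an untested starting point:
import play.api.libs.json._
import scala.util.{Failure, Success, Try}

enum Xyz:
  case name, link, unknown

object Xyz:
  // valueOf is generated for Scala 3 enums and throws IllegalArgumentException on unknown names
  given Reads[Xyz] = Reads {
    case JsString(s) =>
      Try(Xyz.valueOf(s)) match
        case Success(v) => JsSuccess(v)
        case Failure(_) => JsError(s"Unknown Xyz value: '$s'")
    case _ => JsError("String value expected")
  }
  given Writes[Xyz] = Writes(v => JsString(v.toString))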

Could not find implicit value of org.json4s.AsJsonInput in json4s 4.0.4

json4s version (in sbt): "org.json4s" %% "json4s-jackson" % "4.0.4"
Scala version: 2.12.15
JDK version: JDK 8
My problem
I am learning to use json4s to read a JSON file "file.json" (following the book "Scala Design Patterns"):
import org.json4s._
import org.json4s.jackson.JsonMethods._

trait DataReader {
  def readData(): List[Person]
  def readDataInefficiently(): List[Person]
}

class DataReaderImpl extends DataReader {
  implicit val formats = DefaultFormats

  private def readUntimed(): List[Person] =
    parse(StreamInput(getClass.getResourceAsStream("file.json"))).extract[List[Person]]

  override def readData(): List[Person] = readUntimed()

  override def readDataInefficiently(): List[Person] = {
    (1 to 10000).foreach(_ => readUntimed())
    readUntimed()
  }
}

object DataReaderExample {
  def main(args: Array[String]): Unit = {
    val dataReader = new DataReaderImpl
    println(s"I just read the following data efficiently:${dataReader.readData()}")
    println(s"I just read the following data inefficiently:${dataReader.readDataInefficiently()}")
  }
}
It does not compile, and throws:
could not find implicit value for evidence parameter of type org.json4s.AsJsonInput[org.json4s.StreamInput]
Error occurred in an application involving default arguments.
at the line:
parse(StreamInput(getClass.getResourceAsStream("file.json"))).extract[List[Person]]
When I change the json4s version to 3.6.0-M2 in sbt:
"org.json4s" %% "json4s-jackson" % "3.6.0-M2"
it works fine. Why does this happen, and how should I fix it in 4.0.4 or a higher version?
Thank you for your help.
I tried many ways to solve this problem, and finally the fix was to remove StreamInput:
private def readUntimed(): List[Person] = {
  val inputStream: InputStream = getClass.getResourceAsStream("file.json")
  // parse(StreamInput(inputStream)).extract[List[Person]] // this will work in 3.6.0-M2
  parse(inputStream).extract[List[Person]]
}
and now it works!
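For completeness, a minimal self-contained sketch of the 4.x-friendly version (the Person fields are an assumption; adjust them to match file.json):
import java.io.InputStream
import org.json4s._
import org.json4s.jackson.JsonMethods._

case class Person(name: String, age: Int) // hypothetical shape of the records in file.json

object ReadExample extends App {
  implicit val formats: Formats = DefaultFormats
  val inputStream: InputStream = getClass.getResourceAsStream("file.json")
  // In json4s 4.x, parse accepts the InputStream directly via its AsJsonInput instances
  val people: List[Person] = parse(inputStream).extract[List[Person]]
  println(people)
}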

Extending DefaultParamsReadable and DefaultParamsWritable not allowing reading of custom model

Good day,
I have been struggling for a few days to save a custom transformer that is part of a large pipeline of stages. I have a transformer that is completely defined by its params. I have an estimator which in its fit method generates a matrix and then sets the transformer parameters accordingly, so that I can use DefaultParamsReadable and DefaultParamsWritable to take advantage of the serialisation/deserialisation already present in util.ReadWrite.scala.
My summarised code is as follows (only the important parts are included):
...
import org.apache.spark.ml.util._
...
// trait to implement in Estimator and Transformer for params
trait NBParams extends Params {
  final val featuresCol = new Param[String](this, "featuresCol", "The input column")
  setDefault(featuresCol, "_tfIdfOut")

  final val labelCol = new Param[String](this, "labelCol", "The labels column")
  setDefault(labelCol, "P_Root_Code_Index")

  final val predictionsCol = new Param[String](this, "predictionsCol", "The output column")
  setDefault(predictionsCol, "NBOutput")

  final val ratioMatrix = new Param[DenseMatrix](this, "ratioMatrix", "The transformation matrix")

  def getfeaturesCol: String = $(featuresCol)
  def getlabelCol: String = $(labelCol)
  def getPredictionCol: String = $(predictionsCol)
  def getRatioMatrix: DenseMatrix = $(ratioMatrix)
}
// Estimator
class CustomNaiveBayes(override val uid: String, val alpha: Double)
  extends Estimator[CustomNaiveBayesModel] with NBParams with DefaultParamsWritable {

  def copy(extra: ParamMap): CustomNaiveBayes = {
    defaultCopy(extra)
  }

  def setFeaturesCol(value: String): this.type = set(featuresCol, value)
  def setLabelCol(value: String): this.type = set(labelCol, value)
  def setPredictionCol(value: String): this.type = set(predictionsCol, value)
  def setRatioMatrix(value: DenseMatrix): this.type = set(ratioMatrix, value)

  override def transformSchema(schema: StructType): StructType = {...}

  override def fit(ds: Dataset[_]): CustomNaiveBayesModel = {
    ...
    val model = new CustomNaiveBayesModel(uid)
    model
      .setRatioMatrix(ratioMatrix)
      .setFeaturesCol($(featuresCol))
      .setLabelCol($(labelCol))
      .setPredictionCol($(predictionsCol))
  }
}

// companion object for Estimator
object CustomNaiveBayes extends DefaultParamsReadable[CustomNaiveBayes] {
  override def load(path: String): CustomNaiveBayes = super.load(path)
}
// Transformer
class CustomNaiveBayesModel(override val uid: String)
  extends Model[CustomNaiveBayesModel] with NBParams with DefaultParamsWritable {

  def this() = this(Identifiable.randomUID("customnaivebayes"))

  def copy(extra: ParamMap): CustomNaiveBayesModel = defaultCopy(extra)

  def setFeaturesCol(value: String): this.type = set(featuresCol, value)
  def setLabelCol(value: String): this.type = set(labelCol, value)
  def setPredictionCol(value: String): this.type = set(predictionsCol, value)
  def setRatioMatrix(value: DenseMatrix): this.type = set(ratioMatrix, value)

  override def transformSchema(schema: StructType): StructType = {...}

  def transform(dataset: Dataset[_]): DataFrame = {...}
}

// companion object for Transformer
object CustomNaiveBayesModel extends DefaultParamsReadable[CustomNaiveBayesModel]
When I add this Model as part of a pipeline and fit the pipeline, everything runs fine. Saving the pipeline gives no errors either. However, when I attempt to load the pipeline back in, I get the following error:
NoSuchMethodException: $line3b380bcad77e4e84ae25a6bfb1f3ec0d45.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$$$6fa979eb27fa6bf89c6b6d1b271932c$$$$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$CustomNaiveBayesModel.read()
To save the pipeline, which includes a number of other transformers related to NLP pre-processing, I run
fittedModelRootCode.write.save("path")
and to then load it (where the failure occurs) I run
import org.apache.spark.ml.PipelineModel
val fittedModelRootCode = PipelineModel.load("path")
The model itself appears to be working well, but I cannot afford to retrain it on the dataset every time I wish to use it. Does anyone have any idea why, even with the companion object, the read() method appears to be unavailable?
Notes:
I am running on Databricks Runtime 8.3 (Spark 3.1.1, Scala 2.12)
My model is in a separate package so is external to Spark
I have reproduced this based on a number of existing examples all of which appear to work fine so I am unsure why my code is failing
I am aware there is a Naive Bayes model available in Spark ML, however, I have been tasked with making a large number of customizations so it is not worth modifying the existing version (plus I would like to learn how to get this right)
Any help would be greatly appreciated.
Since the CustomNaiveBayesModel companion object extends DefaultParamsReadable, I think you should use that companion object for loading the model. Here is some code for saving and loading models that works properly for me:
import org.apache.spark.SparkConf
import org.apache.spark.ml.{Pipeline, PipelineModel}
import org.apache.spark.sql.SparkSession
import path.to.CustomNaiveBayesModel

object SavingModelApp extends App {
  val spark: SparkSession = SparkSession.builder().config(
    new SparkConf()
      .setMaster("local[*]")
      .setAppName("Test app")
      .set("spark.driver.host", "localhost")
      .set("spark.ui.enabled", "false")
  ).getOrCreate()

  val training = spark.createDataFrame(Seq(
    (0L, "a b c d e spark", 1.0),
    (1L, "b d", 0.0),
    (2L, "spark f g h", 1.0),
    (3L, "hadoop mapreduce", 0.0)
  )).toDF("id", "text", "label")

  val fittedModelRootCode: PipelineModel =
    new Pipeline().setStages(Array(new CustomNaiveBayesModel())).fit(training)
  fittedModelRootCode.write.save("path/to/model")

  val mod = PipelineModel.load("path/to/model")
}
I think your mistake is using PipelineModel.load for loading the concrete model.
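If you save the transformer on its own rather than inside a PipelineModel, a minimal sketch of loading it back through its companion object (paths and the package name are placeholders) would be:
import path.to.CustomNaiveBayesModel // hypothetical package, as above

// DefaultParamsReadable supplies read()/load() on the companion object,
// so a stage saved with model.write.save(...) is loaded the same way:
val standalone: CustomNaiveBayesModel = CustomNaiveBayesModel.load("path/to/standalone/model")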
My environment:
scalaVersion := "2.12.6"
scalacOptions := Seq(
  "-encoding", "UTF-8", "-target:jvm-1.8", "-deprecation",
  "-feature", "-unchecked", "-language:implicitConversions", "-language:postfixOps")
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.1.1"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.1.1"
libraryDependencies += "org.apache.spark" %% "spark-mllib" % "3.1.1"

Case Class Instantiation From Typesafe Config

Suppose I have a scala case class with the ability to serialize into json (using json4s or some other library):
case class Weather(zip : String, temp : Double, isRaining : Boolean)
If I'm using a HOCON config file:
allWeather {
  BeverlyHills {
    zip : 90210
    temp : 75.0
    isRaining : false
  }
  Cambridge {
    zip : 10013
    temp : 32.0
    isRaining : true
  }
}
Is there any way to use typesafe config to automatically instantiate a Weather object?
I'm looking for something of the form
val config : Config = ConfigFactory.parseFile(new java.io.File("weather.conf"))
val bevHills : Weather = config.getObject("allWeather.BeverlyHills").as[Weather]
The solution could leverage the fact that the value referenced by "allWeather.BeverlyHills" is a json "blob".
I could obviously write my own parser:
def configToWeather(config: Config) =
  Weather(config.getString("zip"),
          config.getDouble("temp"),
          config.getBoolean("isRaining"))

val bevHills = configToWeather(config.getConfig("allWeather.BeverlyHills"))
But that seems inelegant since any change to the Weather definition would also require a change to configToWeather.
Thank you in advance for your review and response.
The Typesafe Config library has an API to instantiate objects from config using the Java Bean convention, but a case class does not follow those rules. There are several Scala libraries that wrap Typesafe Config and provide the Scala-specific functionality you are looking for.
For example, with pureconfig reading the config could look like:
val weather: Try[Weather] = loadConfig[Weather]
where Weather is a case class for the values in the config.
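With more recent pureconfig versions the entry point is ConfigSource; a rough sketch (versions and import names assume pureconfig 0.17.x with automatic derivation on Scala 2):
import pureconfig._
import pureconfig.generic.auto._

case class Weather(zip: String, temp: Double, isRaining: Boolean)

// Reads the allWeather.BeverlyHills block from weather.conf
val source = ConfigSource.file(java.nio.file.Paths.get("weather.conf"))
val bevHills: Either[error.ConfigReaderFailures, Weather] =
  source.at("allWeather.BeverlyHills").load[Weather]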
Expanding on Nazarii's answer, the following worked for me:
import scala.beans.BeanProperty

// The @BeanProperty and var are both necessary
case class Weather(@BeanProperty var zip: String,
                   @BeanProperty var temp: Double,
                   @BeanProperty var isRaining: Boolean) {
  // needed by ConfigFactory to conform to the Java Bean standard
  def this() = this("", 0.0, false)
}
import com.typesafe.config.ConfigFactory
val config = ConfigFactory.parseFile(new java.io.File("allWeather.conf"))
import com.typesafe.config.ConfigBeanFactory
val bevHills =
ConfigBeanFactory.create(config.getConfig("allWeather.BeverlyHills"), classOf[Weather])
Follow-up: based on later comments, it may be the case that only Java collections, and not Scala collections, are viable options for the case class parameters (e.g. Seq[T] will not work).
A simple solution without external libraries, inspired by Play Framework's Configuration.scala:
trait ConfigLoader[A] { self =>
  def load(config: Config, path: String = ""): A
  def map[B](f: A => B): ConfigLoader[B] = (config, path) => f(self.load(config, path))
}

object ConfigLoader {
  def apply[A](f: Config => String => A): ConfigLoader[A] = f(_)(_)

  implicit val stringLoader: ConfigLoader[String] = ConfigLoader(_.getString)
  implicit val booleanLoader: ConfigLoader[Boolean] = ConfigLoader(_.getBoolean)
  implicit val doubleLoader: ConfigLoader[Double] = ConfigLoader(_.getDouble)
}

object Implicits {
  implicit class ConfigOps(private val config: Config) extends AnyVal {
    def apply[A](path: String)(implicit loader: ConfigLoader[A]): A = loader.load(config, path)
  }

  implicit def configLoader[A](f: Config => A): ConfigLoader[A] = ConfigLoader(_.getConfig).map(f)
}
Usage:
import Implicits._

case class Weather(zip: String, temp: Double, isRaining: Boolean)

object Weather {
  implicit val loader: ConfigLoader[Weather] = (c: Config) => Weather(
    c("zip"), c("temp"), c("isRaining")
  )
}

val config: Config = ???
val bevHills: Weather = config("allWeather.BeverlyHills")
Run the code in Scastie
Another option is to use circe-config with the code below. See https://github.com/circe/circe-config
import io.circe.generic.auto._
import io.circe.config.syntax._

def configToWeather(conf: Config): Weather = {
  conf.as[Weather]("allWeather.BeverlyHills") match {
    case Right(c) => c
    case _        => throw new Exception("invalid configuration")
  }
}
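The sbt dependencies for that approach would be roughly the following (version numbers are assumptions; check the circe-config README for the ones matching your circe version):
libraryDependencies ++= Seq(
  "io.circe" %% "circe-config"  % "0.8.0",
  "io.circe" %% "circe-generic" % "0.13.0"
)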
Another tried-and-tested solution is to use com.fasterxml.jackson.databind.ObjectMapper. You don't need to tag any of your case class parameters with @BeanProperty, but you will have to define a no-arg constructor.
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

case class Weather(zip: String, temp: Double, isRaining: Boolean) {
  def this() = this(null, 0, false)
}

val mapper = new ObjectMapper().registerModule(DefaultScalaModule)
val bevHills = mapper.convertValue(config.getObject("allWeather.BeverlyHills").unwrapped, classOf[Weather])
Using a config loader (the ConfigLoader trait from the answer above):
implicit val configLoader: ConfigLoader[Weather] = (rootConfig: Config, path: String) => {
  val config = rootConfig.getConfig(path)
  Weather(
    config.getString("zip"),
    config.getDouble("temp"),
    config.getBoolean("isRaining")
  )
}
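For example, a short usage sketch (assuming a config parsed from the same file as in the question):
val bevHills: Weather = configLoader.load(config, "allWeather.BeverlyHills")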

Scala compiler error: package api does not have a member materializeWeakTypeTag

I am new to Scala, so I am quite prepared to accept that I am doing something wrong!
I am playing around with Akka and have a test using ScalaTest and the akka-testkit. Here is my build.sbt configuration:
name := """EventHub"""
version := "1.0"
scalaVersion := "2.10.3"
libraryDependencies ++= Seq(
"com.typesafe.akka" % "akka-actor_2.10" % "2.2.3",
"com.typesafe.akka" % "akka-testKit_2.10" % "2.2.3" % "test",
"org.scalatest" % "scalatest_2.10.0-M4" % "1.9-2.10.0-M4-B2" % "test",
"com.ning" % "async-http-client" % "1.8.1"
)
When I compile, I get a message that I don't understand. I have googled for this and have found related Scala compiler issues and bugs. I have no idea whether that is what I am seeing or whether I am making a basic mistake somewhere. Here is a summary of the output (I have removed a lot of "noise" for brevity; I can add more detail if required!):
scalac:
while compiling: /Users/robert/Documents/Programming/Scala/Projects/EventHub/src/test/scala/Hub/Subscription/SubscriberSpec.scala
during phase: typer
library version: version 2.10.3
compiler version: version 2.10.3
...
...
== Expanded type of tree ==
TypeRef(
TypeSymbol(
class SubscriberSpec extends TestKit with WordSpec with BeforeAndAfterAll with ImplicitSender
)
)
uncaught exception during compilation: scala.reflect.internal.FatalError
And:
scalac: Error: package api does not have a member materializeWeakTypeTag
scala.reflect.internal.FatalError: package api does not have a member materializeWeakTypeTag
at scala.reflect.internal.Definitions$DefinitionsClass.scala$reflect$internal$Definitions$DefinitionsClass$$fatalMissingSymbol(Definitions.scala:1037)
at scala.reflect.internal.Definitions$DefinitionsClass.getMember(Definitions.scala:1055)
at scala.reflect.internal.Definitions$DefinitionsClass.getMemberMethod(Definitions.scala:1090)
at scala.reflect.internal.Definitions$DefinitionsClass.materializeWeakTypeTag(Definitions.scala:518)
at scala.tools.reflect.FastTrack$class.fastTrack(FastTrack.scala:34)
at scala.tools.nsc.Global$$anon$1.fastTrack$lzycompute(Global.scala:493)
at scala.tools.nsc.Global$$anon$1.fastTrack(Global.scala:493)
at scala.tools.nsc.typechecker.Namers$Namer.methodSig(Namers.scala:1144)
at scala.tools.nsc.typechecker.Namers$Namer.getSig$1(Namers.scala:1454)
at scala.tools.nsc.typechecker.Namers$Namer.typeSig(Namers.scala:1466)
at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1$$anonfun$apply$1.apply$mcV$sp(Namers.scala:731)
at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1$$anonfun$apply$1.apply(Namers.scala:730)
at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1$$anonfun$apply$1.apply(Namers.scala:730)
at scala.tools.nsc.typechecker.Namers$Namer.scala$tools$nsc$typechecker$Namers$Namer$$logAndValidate(Namers.scala:1499)
at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1.apply(Namers.scala:730)
at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1.apply(Namers.scala:729)
at scala.tools.nsc.typechecker.Namers$$anon$1.completeImpl(Namers.scala:1614)
...
...
I am using IntelliJ as the IDE. There are a couple of Scala files; one contains an actor, the other a web client:
package Hub.Subscription

import scala.concurrent.{Promise, Future}
import com.ning.http.client.{AsyncCompletionHandler, AsyncHttpClient, Response}

trait WebClient {
  def postUpdate(url: String, payload: Any, topic: String): Future[Int]
  def postUnSubscribe(url: String, topic: String): Future[Int]
}

case class PostUpdateFailed(status: Int) extends RuntimeException

object AsyncWebClient extends WebClient {
  private val client = new AsyncHttpClient

  override def postUpdate(url: String, payload: Any, topic: String): Future[Int] = {
    val request = client.preparePost(url).build()
    val result = Promise[Int]()
    client.executeRequest(request, new AsyncCompletionHandler[Response]() {
      override def onCompleted(response: Response) = {
        if (response.getStatusCode / 100 < 4)
          result.success(response.getStatusCode)
        else
          result.failure(PostUpdateFailed(response.getStatusCode))
        response
      }
      override def onThrowable(t: Throwable) {
        result.failure(t)
      }
    })
    result.future
  }

  override def postUnSubscribe(url: String, topic: String): Future[Int] = {
    val request = client.preparePost(url).build()
    val result = Promise[Int]
    client.executeRequest(request, new AsyncCompletionHandler[Response] {
      override def onCompleted(response: Response) = {
        if (response.getStatusCode / 100 < 4)
          result.success(response.getStatusCode)
        else
          result.failure(PostUpdateFailed(response.getStatusCode))
        response
      }
      override def onThrowable(t: Throwable) {
        result.failure(t)
      }
    })
    result.future
  }

  def shutdown(): Unit = client.close()
}
And my actor:
package Hub.Subscription

import akka.actor.Actor
import Hub.Subscription.Subscriber.{Failed, Update, UnSubscribe}
import scala.concurrent.ExecutionContext
import java.util.concurrent.Executor

object Subscriber {
  object UnSubscribe
  case class Update(payload: Any)
  case class Failed(callbackUrl: String)
}

class Subscriber(callbackUrl: String, unSubscribeUrl: String, topic: String) extends Actor {

  implicit val executor = context.dispatcher.asInstanceOf[Executor with ExecutionContext]

  def client: WebClient = AsyncWebClient

  def receive = {
    case Update(payload) => doUpdate(payload)
    case UnSubscribe => doUnSubscribe
    case Failed(clientUrl) => //log?
  }

  def doUpdate(payload: Any): Unit = {
    val future = client.postUpdate(callbackUrl, payload, topic)
    future onFailure {
      case err: Throwable => sender ! Failed(callbackUrl)
    }
  }

  def doUnSubscribe: Unit = {
    //tell the client that they have been un-subscribed
    val future = client.postUnSubscribe(unSubscribeUrl, topic)
    future onFailure {
      case err: Throwable => //log
    }
  }
}
And finally my test spec:
package Hub.Subscription

import akka.testkit.{ImplicitSender, TestKit}
import akka.actor.{ActorRef, Props, ActorSystem}
import org.scalatest.{WordSpec, BeforeAndAfterAll}
import scala.concurrent.Future
import scala.concurrent.duration._

object SubscriberSpec {
  def buildTestSubscriber(url: String, unSubscribeUrl: String, topic: String, webClient: WebClient): Props =
    Props(new Subscriber(url, unSubscribeUrl, topic) {
      override def client = webClient
    })

  object FakeWebClient extends WebClient {
    override def postUpdate(url: String, payload: Any, topic: String): Future[Int] = Future.successful(201)
    override def postUnSubscribe(url: String, topic: String): Future[Int] = Future.failed(PostUpdateFailed(500))
  }
}

class SubscriberSpec extends TestKit(ActorSystem("SubscriberSpec"))
  with WordSpec
  with BeforeAndAfterAll
  with ImplicitSender {

  import SubscriberSpec._

  "A subscriber" must {
    "forward the update to the callback url" in {
      val fakeClient = FakeWebClient
      val callbackUrl = "http://localhost:9000/UserEvents"
      val subscriber: ActorRef = system.actorOf(buildTestSubscriber(callbackUrl, "unSubscribeUrl", "aTopic", fakeClient))

      subscriber ! Subscriber.Update(Nil)

      within(200 millis) {
        expectNoMsg
      }
    }
  }

  override def afterAll(): Unit = {
    system.shutdown()
  }
}
Thanks in advance for any help / pointers!
Update: I should have noted that if I do not include the test spec, then all is well. But when I add the test spec, I get the errors above.
By the way, I just realized that you're using ScalaTest compiled for 2.10.0-M4.
Scala's final releases aren't binary compatible with the corresponding milestone releases, so weird things can happen, including compiler crashes.
If you change the ScalaTest dependency to "org.scalatest" %% "scalatest" % "1.9.1", everything will work fine.
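A sketch of the adjusted dependency list (only the scalatest line needs to change per the answer; the akka-testkit artifact name is also lowercased here, since published artifact IDs are all lowercase):
libraryDependencies ++= Seq(
  "com.typesafe.akka" % "akka-actor_2.10"   % "2.2.3",
  "com.typesafe.akka" % "akka-testkit_2.10" % "2.2.3" % "test",
  "org.scalatest"    %% "scalatest"         % "1.9.1" % "test",
  "com.ning"          % "async-http-client" % "1.8.1"
)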