Hi, I am trying to test this functionality within the controller. I need to mock "MyActor" to write the unit test.
def populateArraylist[T](hashSet: HashSet[T]): util.ArrayList[T] = {
  val list = new util.ArrayList[T]()
  hashSet.foreach(x => list.add(x))
  list
}
@ApiOperation("Get the state of something")
def get(id: String, dateId: String): Action[AnyContent] = Action.async { implicit request =>
  (MyShardProvider.shard ? MyActor.EntityPayload(
    id,
    MySecondActor.GetStateRequest(dateId)))
    .mapTo[GetStateResponse]
    .map(x => {
      Ok(new String(JacksonSerializer.toBytes(new GetResponse(
        x.state.identifier,
        populateArraylist(x.data.transactionList.processedKeys)
      ))))
    })
}
I think what you want to do is to mock the shard actor, or else you will have to actually run clustering and sharding when the unit test executes.
The easiest way is probably to make MyShardProvider.shard something you inject or can override (depending on how you are doing injection in your Play app), so that the test case can provide the ActorRef of a TestProbe instead.
That you have MyShardProvider.shard at all looks a bit fishy, though: you should never have a singleton that contains an actor system. Instead, inject instances as shown in the Play docs here: https://www.playframework.com/documentation/2.6.x/ScalaAkka
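For illustration, a minimal sketch of that idea, assuming a hypothetical ShardProvider trait that is constructor-injected into the controller (all names here are made up):
```
import akka.actor.ActorRef
import akka.testkit.TestProbe
import javax.inject.Inject

// Hypothetical abstraction so the controller no longer touches a singleton directly.
trait ShardProvider {
  def shard: ActorRef
}

// The controller asks shardProvider.shard instead of MyShardProvider.shard.
class MyController @Inject()(shardProvider: ShardProvider /*, cc, etc. */) {
  // ... (MyShardProvider.shard ? ...) becomes (shardProvider.shard ? ...)
}

// In the test, bind ShardProvider to an implementation backed by a TestProbe,
// so the controller talks to the probe instead of a real sharded actor.
class TestShardProvider(probe: TestProbe) extends ShardProvider {
  override def shard: ActorRef = probe.ref
}
```
The probe can then assert on the EntityPayload messages the controller sends and reply with a GetStateResponse.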
My goal is to do some database queries from the async controller, then return the answer.
I'm playing with the example project, for now just simulating the DB queries with a sleep, but what I noticed is that whatever I do, the REST interface won't even start the sleep of the second query until the first one finishes.
E.g.: if I call the REST interface from one tab in the browser, then 1 second later again from another tab, I'd expect that the second one also gets its reply in 10 seconds, but actually it takes 19.
It doesn't seem to use the "database-io" pool either:
1: application-akka.actor.default-dispatcher-2
2: application-akka.actor.default-dispatcher-5
My code:
@Singleton
class AsyncController @Inject()(cc: ControllerComponents, actorSystem: ActorSystem) extends AbstractController(cc) {

  implicit val executionContext = actorSystem.dispatchers.lookup("database-io")

  def message = Action.async {
    getFutureMessage().map { msg => Ok(msg) }
  }

  private def getFutureMessage(): Future[String] = {
    val defaultThreadPool = Thread.currentThread().getName
    println(s"1: $defaultThreadPool")
    val promise: Promise[String] = Promise[String]()
    actorSystem.scheduler.scheduleOnce(0.seconds) {
      val blockingPool = Thread.currentThread().getName
      println(s"2: $blockingPool")
      Thread.sleep(10000)
      promise.success("Hi!")
    }(actorSystem.dispatcher)
    promise.future
  }
}
There could be two reasons for this behavior:
You are using development mode (1 thread), or your production configuration is configured for only one thread.
The browser blocks the second request until it receives the response to the first (you mention "If I call the REST interface from one tab in the browser"). Try doing the same from two different browsers.
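On the first point, a sketch of what the "database-io" dispatcher block could look like in conf/application.conf, assuming a standard Akka thread-pool dispatcher (the pool size here is arbitrary):
```
database-io {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    # enough threads that several blocking calls can run at once
    fixed-pool-size = 16
  }
  throughput = 1
}
```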
You need to avoid blocking code. Basically:
You call a method that returns a Future.
You map into it.
You recover any failure the Future might bring.
Let's say I have:
def userAge(userId: String): Future[Int] = ???
Then you map into it:
userAge("someUserId").map { age =>
  ??? // everything is ok
}.recover {
  case e: Throwable => ??? // do something when it fails
}
Note that if you have more than one call, the outer map becomes a flatMap, because you want a Future[...] and not a Future[Future[...]].
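For example, a small sketch with a hypothetical second call, userName, added alongside userAge:
```
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def userAge(userId: String): Future[Int] = ???
def userName(userId: String): Future[String] = ??? // hypothetical second call

val greeting: Future[String] =
  userAge("someUserId").flatMap { age =>   // flatMap: the inner call returns another Future
    userName("someUserId").map { name =>   // map: here we only transform the value
      s"$name is $age years old"
    }
  }.recover {
    case e: Throwable => "something went wrong"
  }
```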
I have an old Scala/Akka Http project that I'm trying to simplify and refactor. I was wondering if there's a better way to organize routes and perhaps split them across actors. Here's what I have at the moment (far from ideal):
```
object MyAPI {
  def props(): Props = Props(new MyAPI())

  val routes = pathPrefix("api") {
    pathPrefix("1") {
      SomeActor.route // More routes can be appended here using ~
    }
  }
}

final class MyAPI extends Actor with ActorLogging {
  implicit lazy val materializer = ActorMaterializer()
  implicit lazy val executionContext = context.dispatcher

  Http(context.system)
    .bindAndHandleAsync(Route.asyncHandler(MyAPI.routes), MyHttpServer.httpServerHostName, MyHttpServer.httpServerPort)
    .pipeTo(self)

  override def receive: Receive = {
    case serverBinding: ServerBinding =>
      log.info(s"Server started on ${serverBinding.localAddress}")
      context.become(Actor.emptyBehavior)
    case Status.Failure(t) =>
      log.error(t, "Error binding to network interface")
      context.stop(self)
  }
}
```
```
object SomeActor {
  def props(): Props = Props[SomeActor]

  val route = get {
    pathPrefix("actor") {
      pathEnd {
        complete("Completed") // Is there a clean way to 'ask' the actor below?
      }
    }
  }
}

class SomeActor extends Actor with ActorLogging {
  implicit lazy val executionContext = context.dispatcher

  override def receive: Receive = {
    // receive and process messages here
  }
}
```
So, my question is: is there a clean way to structure and refactor routes instead of lumping them together in one large route definition? I could perhaps create a hierarchy of actors (routers), with the main route definition delegating to the routers and incrementally adding more detail as we go deeper into the actor hierarchy. But is there a generally accepted pattern or two for organizing routes?
My suggestion is that, based on functionality, you can have as many actors as you want, but create one supervisor actor that watches each child actor. All the supervision strategies should live in the supervisor itself, and every message you send to the actors should be forwarded by the supervisor.
As soon as you get the data from the endpoint (a GET or POST, say), put it into a SomeRequest case class, then send it to a handleReq() method and do your processing there; split the traits by functionality.
You can structure the project something like this:
src/
  actor    // all the actors live in this package
  model    // all the case classes and constants
  repo     // all the DB-related functions
  service  // all your routes and endpoint functions
You can also have a util package for utility traits used by any actor or service, and if you have lots of validations, a package named validator.
The structure depends on your business; I think this would help.
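For illustration, a minimal sketch of that supervisor idea, reusing SomeActor from the question (the strategy and the single child are placeholders):
```
import akka.actor.{Actor, ActorLogging, ActorRef, OneForOneStrategy, SupervisorStrategy}
import scala.concurrent.duration._

// Hypothetical supervisor: owns the children, holds the supervision strategy,
// and forwards every message to the child responsible for it.
class ApiSupervisor extends Actor with ActorLogging {

  private val someActor: ActorRef = context.actorOf(SomeActor.props(), "someActor")

  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 1.minute) {
      case _: IllegalArgumentException => SupervisorStrategy.Resume
      case _: Exception                => SupervisorStrategy.Restart
    }

  override def receive: Receive = {
    // forward keeps the original sender, so the child can reply directly
    case msg => someActor forward msg
  }
}
```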
@Singleton
class EventPublisher @Inject() (@Named("rabbit-mq-event-update-actor") rabbitControlActor: ActorRef)
                               (implicit ctx: ExecutionContext) {

  def publish(event: Event): Unit = {
    logger.info("Publishing Event: {} with routing key {}", toJsObject(event), routingKey)
    rabbitControlActor ! Message.topic(shipmentStatusUpdate, routingKey = "XXX")
  }
}
I want to write a unit test to verify that, when this publish function is called,
rabbitControlActor ! Message.topic(shipmentStatusUpdate, routingKey = "XXX")
is called only once.
I am using Spingo (op-rabbit) to publish messages to RabbitMQ.
I am using Play Framework 2.6.x and Scala 2.12.
You can create a TestProbe actor with:
val myActorProbe = TestProbe()
and get its ref with myActorProbe.ref
Afterwards, you can verify that it receives only one message with:
myActorProbe.expectMsg("myMsg")
myActorProbe.expectNoMsg()
You probably should take a look at this page: https://doc.akka.io/docs/akka/2.5/testing.html
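Putting those pieces together, a hedged sketch of such a test for the EventPublisher from the question (it assumes the class can be instantiated directly with the probe's ref; the Event fixture and ScalaTest style are placeholders):
```
import akka.actor.ActorSystem
import akka.testkit.TestProbe
import com.spingo.op_rabbit.Message // op-rabbit's Message, as used in the question
import org.scalatest.{BeforeAndAfterAll, Matchers, WordSpec}
import scala.concurrent.ExecutionContext

class EventPublisherSpec extends WordSpec with Matchers with BeforeAndAfterAll {

  implicit val system: ActorSystem = ActorSystem("EventPublisherSpec")
  implicit val ec: ExecutionContext = system.dispatcher

  "publish" should {
    "send exactly one message to the rabbit control actor" in {
      val probe = TestProbe()
      // The probe's ref stands in for the injected rabbit-mq-event-update-actor.
      val publisher = new EventPublisher(probe.ref)
      val event: Event = ??? // build an Event fixture appropriate for your domain

      publisher.publish(event)

      probe.expectMsgType[Message] // exactly one Message arrives...
      probe.expectNoMsg()          // ...and nothing else
    }
  }

  override def afterAll(): Unit = system.terminate()
}
```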
It depends on whether you only want to check that the message is received by that actor, or whether you want to test the actor's functionality as well.
If you want to check that the message got delivered to the actor, you can go with TestProbe, i.e.:
val probe = TestProbe()
probe.ref ! Message
Then do:
probe.expectMsgType[Message]
You can make use of TestActorRef in the case where you have a supervisor actor that performs some DB operations, so you can override its receive method and stop the flow before it reaches the DB.
I.e.:
val testActor = TestActorRef(Props(new Actor {
  override def receive: Receive = {
    case m: Message =>
      // In the real flow this would do some DB operation; here you can reply with
      // whatever your actor would return from the DB call (a Boolean in this example).
      sender() ! true
    case _ => // do something
  }
}))
Assume your actor replies with a Boolean, so the ask gives you a Future[Boolean]:
val testResult = (testActor ? Message).mapTo[Boolean]
// Then assert on the result
I'm using Spray in my application, and from the examples I've seen on GitHub it looks like people handle HTTP requests in Akka by passing the RequestContext object around to all the actors and calling onComplete { } on the Future in the last actor.
Is sending the context deep down into the application really a good idea? This way every event object ends up with a context parameter.
How do we handle HTTP requests and responses properly in Akka? I've read this article, but I would like to know the thoughts of people who run Akka in production on the right way of achieving this.
I prefer to use the ask pattern in the Spray service, and the onSuccess directive, e.g.:
trait MyService extends HttpService {
  def worker: ActorRef

  implicit def timeout: Timeout
  implicit def ec: ExecutionContext

  def askWorker: Future[String] = (worker ? "Hello").mapTo[String]

  def myRoute = pathSingleSlash {
    get {
      onSuccess(askWorker) {
        case str => complete(str)
      }
    }
  }
}
Then a concrete actor such as:
class ServiceActor extends MyService with Actor {
  implicit val ec = context.dispatcher
  implicit val timeout = Timeout(3.seconds)

  val worker = context.actorOf(Props[WorkerActor])

  override def actorRefFactory = context

  def receive = runRoute(myRoute)
}
I like this pattern rather than passing the request context around, since it means the other actors don't have to have any concept of HTTP. The service could be completely replaced with a different protocol. In this example the worker actor can be something like:
class WorkerActor extends Actor {
  def receive = {
    case "Hello" => sender() ! "Hello World"
  }
}
I recently started developing an application in Play Scala. Although I have already used Play Java for several applications, I am new to both Scala and Play Scala.
I use the DAO pattern to abstract the database interaction. The DAO contains methods for insert, update, and delete. After reading the async and thread-pool related documentation, I figured that making database interaction async was highly important, unless you tweak the Play default thread pool to have many threads.
To ensure that all database calls are handled asynchronously, I made all of them return a Future instead of a value directly. I have created a separate execution context for the database interactions.
trait Dao[K, V] {
  def findById(id: K): Future[Option[V]]
  def update(v: V): Future[Boolean]
  [...]
}
This has led to very complex and deeply nested code in my actions.
trait UserDao extends Dao[Long, User] {
  def existsWithEmail(email: String): Future[Boolean]
  def insert(u: User): Future[Boolean]
}
object UserController extends Controller {
  def register = Action.async {
    [...]
    userDao.existsWithEmail(email).flatMap { exists =>
      exists match {
        case true =>
          userDao.insert(new User("foo", "bar")).map { created =>
            created match {
              case true => Ok("Created!")
              case false => BadRequest("Failed creation")
            }
          }
        case false =>
          Future(BadRequest("User exists with same email"))
      }
    }
  }
}
Above is a sample of the simplest of actions; the level of nesting gets deeper as more database calls are involved. Although I figured that some of the nesting can be reduced with a for comprehension, I am wondering whether my approach itself is fundamentally wrong.
Consider a case where I need to create a user:
a. if none already exists with the same email address, and
b. if none already exists with the same mobile number.
I can create two futures:
f(a), checking if a user exists with that email;
f(b), checking if a user exists with that mobile.
I cannot insert a new user unless I verify that both conditions evaluate to false. I can actually have f(a) and f(b) running in parallel. The parallel execution may be undesirable in case f(a) evaluates to true, and may work in my favor otherwise. Step 3, creating the user, depends on both of these futures, so I wonder if the following is equally good?
trait UserDao extends Dao[Long, User] {
  def existsWithEmail(email: String): Boolean
  def existsWithMobile(mobile: String): Boolean
  def insert(u: User): Unit
}

def register = Action.async {
  implicit val dbExecutionContext = myconcurrent.Context.dbExecutionContext
  Future {
    if (!userDao.existsWithEmail(email) && !userDao.existsWithMobile(mobile)) {
      userDao.insert(new User("foo", "bar"))
      Ok("Created!")
    } else {
      BadRequest("Already exists!")
    }
  }
}
Which one is the better approach? Does the approach of using a single Future with multiple calls to the database have any downsides?
You are correct when you say that a for comprehension can make for less nesting.
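For instance, a hedged sketch of the register action from the question rewritten with a for comprehension (same UserDao signatures assumed; email comes from the request as before):
```
def register = Action.async {
  for {
    exists <- userDao.existsWithEmail(email)
    result <- if (exists)
                Future.successful(BadRequest("User exists with same email"))
              else
                userDao.insert(new User("foo", "bar")).map {
                  case true  => Ok("Created!")
                  case false => BadRequest("Failed creation")
                }
  } yield result
}
```
The second generator still needs a conditional, but the happy path reads top to bottom instead of nesting.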
To solve the dual-future problem, consider:
existsWithEmail(email).zip(existsWithMobile(mobile)).map {
  case (false, false) => // create user
  case _ => // already exists
}
If you have a lot of these, you can use Future.sequence( Seq(future1, future2, ...) ) to turn a sequence of futures into a future sequence.
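For instance, a sketch combining the two existence checks from the question (userDao as defined there):
```
// Future.sequence turns Seq[Future[Boolean]] into Future[Seq[Boolean]]
val checks: Seq[Future[Boolean]] =
  Seq(userDao.existsWithEmail(email), userDao.existsWithMobile(mobile))

val canCreate: Future[Boolean] =
  Future.sequence(checks).map(results => !results.contains(true))
```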
You may want to take a look at more functional idioms for DB access than DAO, e.g., Slick or Anorm. Usually those will compose better and end up being more flexible than DAO.
A side note: it is more efficient to use if/else for a simple true/false test than it is to use match/case, and it is the preferred style.
I solved this problem using for comprehensions in Scala, and I added a few implicit type converters to help with error handling.
Initially I did something like this:
def someAction = Action.async {
  val result =
    for {
      student <- studentDao.findById(studentId)
      if student.isDefined
      parent <- parentDao.findById(student.get.parentId)
      if parent.isDefined
      address <- addressDao.findById(parent.get.addressId)
      if address.isDefined
    } yield {
      // business logic
    }
  result fallbackTo Future.successful(BadRequest("Something went wrong"))
}
This is how the code was initially structured to handle the dependency between futures; note that each subsequent future depends on the previous one. Each findById returns a Future[Option[T]], so the if guards within the for comprehension are required to handle the cases where a method returns None. I used the fallbackTo method on the Future to fall back to a BadRequest result if any of the futures evaluated to None (when an if guard fails within a for comprehension, it produces a failed future). Another issue with this approach was that it suppressed every kind of exception (even exceptions as trivial as an NPE) and simply fell back to BadRequest instead, which was very bad.
The above method was able to cope with futures of options and handle the failure cases, although it did not help figure out exactly which of the futures in the for comprehension had failed. To overcome this limitation, I used implicit type converters.
object FutureUtils {

  class FutureProcessingException(msg: String) extends Exception(msg)
  class MissingOptionValueException(msg: String) extends FutureProcessingException(msg)

  protected final class OptionFutureToOptionValueFuture[T](f: Future[Option[T]]) {
    def whenUndefined(error: String)(implicit context: ExecutionContext): Future[T] = {
      f.map { value =>
        if (value.isDefined) value.get else throw new MissingOptionValueException(error)
      }
    }
  }

  import scala.language.implicitConversions
  implicit def optionFutureToValueFutureConverter[T](f: Future[Option[T]]) = new OptionFutureToOptionValueFuture(f)
}
The above implicit conversions allowed me to write readable for comprehensions chaining multiple futures.
import FutureUtils._
def someAction = Action.async {
  val result =
    for {
      student <- studentDao.findById(studentId) whenUndefined "Invalid student id"
      parent  <- parentDao.findById(student.parentId) whenUndefined "Invalid parent id"
      address <- addressDao.findById(parent.addressId) whenUndefined "Invalid address id"
    } yield {
      // business logic
    }
  result.recover {
    case fpe: FutureProcessingException => BadRequest(fpe.getMessage)
    case t: Throwable => InternalServerError
  }
}
The above approach ensures that all failures caused by a missing Option value are handled as a BadRequest with a specific message about what exactly failed. All other failures are treated as an InternalServerError. You can log the exact exception with its stack trace to help debugging.