How to mock actors in a cluster sharding test environment? - scala

I have managed to set up a test environment for event-sourced behaviors with Akka and Scala, and I can correctly run unit tests on self-contained actors by doing
class CQRSActorSpec
    extends ScalaTestWithActorTestKit(
      EventSourcedBehaviorTestKit.config.withFallback(CQRSActorSpec.config)
    )
then creating my testkit
private val myTestKit = EventSourcedBehaviorTestKit[Command, Event, State](system, MyActor)
and using it to issue commands
val result = myTestKit.runCommand[Response](StartJob(parameters, _))
result.reply shouldBe Done
result.event shouldBe Started(parameters)
result.state shouldBe ProcessingJob
Now I want to unit test an actor that calls another actor during its lifecycle. This is because I'm using the saga pattern, so the actor under test is the saga's supervisor and must call the involved parties.
So far I managed to do the following:
val mockParty = Behaviors.receiveMessage[Party.Command] { msg =>
  val reply = msg match {
    case _ => Done
  }
  msg.replyTo ! reply
  Behaviors.same
}
ClusterSharding(system).init(Entity(Party.Key) { _ => mockParty })
This runs fine in the first test, but when I then test another case, say a failure, the second init call does not work because an entity is already registered with cluster sharding and I cannot override its behavior. There is also no way to reset cluster sharding.
Does anyone have insights on how to solve this problem? Are there other utilities for testing cluster sharding that I'm not aware of? I found the documentation a bit lacking.

The root of the problem is calling ClusterSharding(system) inside your code, which creates the real ClusterSharding object that, as you said, you have no control over in tests. Instead, pass the ClusterSharding object in from outside so that you can substitute a stub/mock implementation of it.
This is an outline of what I am talking about, rendered in Java:
Let's assume persistent actor/event sourced behavior of type EntityA will create and interact with another persistent actor/event sourced behavior of type EntityB.
Somewhere in your application code you will have the ClusterSharding.init(..) call for EntityA:
ClusterSharding sharding = ClusterSharding.get(actorSystem); // the real stuff in the app
sharding.init(
    Entity.of(
        EntityA.ENTITY_TYPE_KEY,
        entityContext ->
            EntityA.create(
                entityContext.getEntityId(),
                sharding
            )
    )
);
Note how the sharding object is passed to the create method of EntityA; it is the ClusterSharding object that EntityA interacts with.
In a unit test this code is not invoked; instead, you inject your own implementation of ClusterSharding when initializing the test kit:
// this is in the test
ClusterSharding sharding = new ClusterShardingStub(); // your stub/mock
var esbtk =
    EventSourcedBehaviorTestKit.create(
        testKit.system(),
        EntityA.create("entityId", sharding));
Somewhere in the logic of your EntityA implementation you will presumably call sharding.entityRefFor(...) to get hold of an instance of EntityRef<EntityBProtocol>. What you need to do is program your stub/mock to return a TestEntityRef.of(..) instance:
TestProbe<EntityBProtocol> testProbe = TestProbe.create(testKit.system());
EntityRef<EntityBProtocol> ref = TestEntityRef.of(EntityB.ENTITY_TYPE_KEY, "entityId", testProbe.ref());
And now any interactions that EntityA has with EntityB can be asserted using the TestProbe instance.
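Since the question is in Scala, here is a rough Scala sketch of the same idea, using the question's Command, Event, State and Party types. Rather than stubbing the whole ClusterSharding abstract class, it passes a lookup function into the behavior and has the test hand back a TestEntityRef backed by a TestProbe. SagaSupervisor and the entity ids are illustrative names, and the TestEntityRef import may need adjusting to your Akka version; treat this as a sketch, not a drop-in implementation.
import akka.actor.typed.Behavior
import akka.cluster.sharding.typed.scaladsl.{ClusterSharding, Entity, EntityRef, EntityTypeKey}
import akka.persistence.testkit.scaladsl.EventSourcedBehaviorTestKit
// TestEntityRef lives in the typed cluster-sharding testkit; adjust to your Akka version.
import akka.cluster.sharding.typed.testkit.scaladsl.TestEntityRef

// Production-side wiring: the saga supervisor takes a lookup function instead of
// calling ClusterSharding(system) itself.
object SagaSupervisor {
  val Key: EntityTypeKey[Command] = EntityTypeKey[Command]("SagaSupervisor")

  def apply(entityId: String,
            partyFor: String => EntityRef[Party.Command]): Behavior[Command] =
    ??? // the event sourced behavior under test, elided here
}

val sharding = ClusterSharding(system)
sharding.init(Entity(SagaSupervisor.Key) { ctx =>
  SagaSupervisor(ctx.entityId, id => sharding.entityRefFor(Party.Key, id))
})

// Test-side wiring: the lookup function hands back a TestEntityRef backed by a probe,
// so every message the saga sends to a Party can be asserted on the probe.
val partyProbe = createTestProbe[Party.Command]() // available inside ScalaTestWithActorTestKit
val partyRef = TestEntityRef.of(Party.Key, "party-1", partyProbe.ref)

val sagaTestKit = EventSourcedBehaviorTestKit[Command, Event, State](
  system,
  SagaSupervisor("saga-1", _ => partyRef)
)
// sagaTestKit.runCommand(...) and partyProbe.expectMessageType[...] can then be combined.
Because nothing is registered with the real ClusterSharding, each test can create a fresh probe and a fresh test kit, which avoids the "entity already registered" problem entirely.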

Related

How to implement factory of Akka actors?

Let's say I have DAO actors (CassDaoActor, VerticaDaoActor, etc.) that respond to the message 'Read'.
First of all, is there a way to express an interface or abstract class that defines the message 'Read' that extending actors should implement?
Now assume it's only at runtime that I get to know which actor needs to be created, based on the configured db. For example, if the configured db is Cassandra, I need to create CassDaoActor, and so on. This is apparently a typical use case for the Factory Method pattern. I want to understand how we can implement such a thing, given that we can't pass "context" around since it loses its meaning outside the scope of the actor.
Please suggest.
What I have tried so far: I return the respective Props, based on the configured db, to the actor within which I need to create these actors.
object `package` {
  val CASS = "cass"
  val VERTICA = "vertica"

  def getDAOProps(db: String): Props = db match {
    case CASS    => CassDaoActor.props
    case VERTICA => VerticaDaoActor.props
  }
}
// SupervisorActor
val db = configuredDb()
context.actorOf(getDAOProps(db), db)
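For the first sub-question (a shared contract for the 'Read' message), one common option in classic Akka is a base trait that owns the message handling while concrete DAOs implement only the data access. This is only a sketch; ReadResult and doRead are illustrative names beyond what the question defines.
import akka.actor.{Actor, Props}

// The shared protocol that every DAO actor understands.
case class Read(key: String)
case class ReadResult(key: String, value: Option[String])

// Base trait capturing the contract; concrete DAOs implement only doRead.
trait DaoActor extends Actor {
  def doRead(key: String): Option[String]

  def receive: Receive = {
    case Read(key) => sender() ! ReadResult(key, doRead(key))
  }
}

class CassDaoActor extends DaoActor {
  def doRead(key: String): Option[String] = None // query Cassandra here
}
object CassDaoActor { val props: Props = Props[CassDaoActor] }

class VerticaDaoActor extends DaoActor {
  def doRead(key: String): Option[String] = None // query Vertica here
}
object VerticaDaoActor { val props: Props = Props[VerticaDaoActor] }
The getDAOProps factory from the question then works unchanged, since both companions expose a props value.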

Play WebSocketActor createHandler with custom name

I am using (learning to) handle WebSockets in a Play application.
My controller is using WebSocket.acceptWithActor
def clientWS = WebSocket.acceptWithActor[JsValue, JsValue] { _ =>
  upstream => ClientSesssionActor.props(upstream)
}
and all is well except some other "supervisor" actor needs to be able to use context.actorSelection(...) to communicate with all/some of those ClientSessionActors.
But all my ClientSessionActors are created with a path like this one:
[akka://application/system/websockets/ REQ_ID /handler]
Here is the line where WebSocketActorSupervisor creates them:
val webSocketActor = context.watch(context.actorOf(createHandler(self), "handler"))
That is where the "handler" part of the path comes from.
I would like to pass in a specific name for my ClientSessionActor instead of getting "handler".
Overloading the whole call stack with one more parameter seems inelegant: there is WebSocketActor.scala with Connect, WebSocketActorSupervisor (props and constructor), the WebSocketsActor receive, and then everything inside WebSocket.scala.
I know I can pass the supervisor reference to the props, but what about the case when the "supervisor" has been restarted and needs to reconnect with its minions?
One more thing: I realize that I might be able to get all the "handler" actors, but there are more than two kinds of handlers. Yes, I could have them ignore messages directed at the other groups of handlers, but that feels redundant, sending out three times more messages than I should have to.
Any suggestions?
James ? :)
Thank you
How about each ClientSesssionActor sends a Register message to the supervisor in preStart, and the supervisor stores them in e.g. val sessions = new HashMap[String, ActorRef].
Then unregister by sending Unregister in postStop.
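A rough sketch of that idea in classic Akka (SessionRegistry, Register and Unregister are illustrative names, not part of Play's API):
import akka.actor.{Actor, ActorRef}
import scala.collection.mutable

case class Register(name: String)
case class Unregister(name: String)

// Each session actor announces itself on start and removes itself on stop.
class ClientSessionActor(registry: ActorRef, upstream: ActorRef) extends Actor {
  override def preStart(): Unit = registry ! Register(self.path.name)
  override def postStop(): Unit = registry ! Unregister(self.path.name)

  def receive = {
    case msg => upstream ! msg // session-specific handling elided
  }
}

// The supervisor keeps its own name -> ActorRef map instead of relying on actorSelection.
class SessionRegistry extends Actor {
  val sessions = mutable.HashMap.empty[String, ActorRef]

  def receive = {
    case Register(name)   => sessions += name -> sender()
    case Unregister(name) => sessions -= name
  }
}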
private class WebSocketsActor extends Actor {
  import WebSocketsActor._

  def receive = {
    case c @ Connect(requestId, enumerator, iteratee, createHandler) =>
      implicit val mt = c.messageType
      context.actorOf(
        WebSocketActorSupervisor.props(enumerator, iteratee, createHandler),
        requestId.toString)
  }
}
Here is the code for how Play creates the actors handling WebSockets; it names them with the requestId.
I also have the same question :) why not allow them to be named with custom names?

Unit Testing AKKA actors

I am building a web application with Scala and Akka actors and I'm having some trouble with the tests.
In my case I need to test an actor that talks to the database. To do the unit testing I would like to use a fake database, but I can't replace the new Database() call with my desired fake object.
Let's see some code:
class MyActor extends Actor {
  val database = new Database()
  def receive = { ... }
}
And in the tests I would like to inject a FakeDatabase object instead of Database. I've been looking on the Internet, but the best that I found is:
Add a parameter to the constructor.
Convert the val database to a var so that in the test I can access the attribute through the underlying actor and replace it.
Both solutions solve the problem but are very dirty.
Isn't there a better way to solve the problem?
Thanks!
The two primary options for this scenario are:
Dependency Injection: use a DI framework to inject a real or mock service as needed. In Akka: http://letitcrash.com/post/55958814293/akka-dependency-injection
Cake Pattern: a Scala-specific way of achieving something akin to dependency injection without actually relying on injection. See: Akka and cake pattern
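For reference, a minimal cake-pattern sketch (DB, RealDb and FakeDb are illustrative; the question's Database class would play the role of RealDb):
import akka.actor.{Actor, Props}

// Hypothetical database interface and implementations, for illustration only.
trait DB { def findUser(id: String): Option[String] }
class RealDb extends DB { def findUser(id: String) = Some(s"real: $id") }
class FakeDb extends DB { def findUser(id: String) = Some(s"stubbed: $id") }

// The "cake" layer: the actor declares its dependency through a self-type.
trait DbComponent { def db: DB }

class MyActor extends Actor { this: DbComponent =>
  def receive = {
    case id: String => sender() ! db.findUser(id)
  }
}

// Production wiring:
//   Props(new MyActor with DbComponent { val db = new RealDb })
// Test wiring:
//   Props(new MyActor with DbComponent { val db = new FakeDb })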
Echoing the advice here, I wouldn't call injecting the database in the constructor dirty. It might have plenty of benefits, including decoupling actor behaviour from the particular database instance.
However, if you know there is only ONE database you will always be using in your production code, then think about defining a package-level-accessible constructor and a companion object returning a Props object without parameters by default.
Example below:
object MyActor {
  def props(): Props = Props(new MyActor(new Database()))
}

// `package` here stands in for the name of the actual enclosing package.
class MyActor private[package] (database: IDatabase) extends Actor {
  def receive = { ... }
}
In this case you will still be able to inject the test database in your test cases (given the same package structure), but prevent users of your code from instantiating MyActor with an unexpected database instance.
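In a test that lives in the same package, the injection could then look roughly like this (FakeDatabase is an assumed test double implementing IDatabase, and the ScalaTest style is just one choice):
import akka.actor.{ActorSystem, Props}
import akka.testkit.{ImplicitSender, TestKit}
import org.scalatest.wordspec.AnyWordSpecLike

class MyActorSpec
    extends TestKit(ActorSystem("MyActorSpec"))
    with ImplicitSender
    with AnyWordSpecLike {

  "MyActor" should {
    "answer queries using the injected database" in {
      // Same package as MyActor, so the package-private constructor is visible here.
      val actor = system.actorOf(Props(new MyActor(new FakeDatabase())))
      // actor ! SomeQuery
      // expectMsg(SomeResult)
    }
  }
}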

Can it be safe to share a var?

My application has a class ApplicationUsers that has no mutable members. Upon creation of instances, it reads the entire user database (relatively small) into an immutable collection. It has a number of methods to query the data.
I am now faced with the problem of having to create new users (or modify some of their attributes). My current idea is to use an Akka actor that, at a high level, would look like this:
class UserActor extends Actor {
  var users = new ApplicationUsers

  def receive = {
    case GetUsers => sender ! users
    case SomeMutableOperation => {
      PerformTheChangeOnTheDatabase() // does not alter users (which is immutable)
      users = new ApplicationUsers    // reads the database from scratch into a new immutable instance
    }
  }
}
Is this safe? My reasoning is that it should be: whenever users is changed by SomeMutableOperation, any other threads making use of previous instances of users already have a handle to an older version and should not be affected. Also, any GetUsers request will not be acted upon until the new instance is safely constructed.
Is there anything I am missing? Is my construct safe?
UPDATE: I probably should be using Agents to do this, but the question still holds: is the above safe?
You are doing it exactly right: have immutable data types and reference them via var within the actor. This way you can freely share the data and mutability is confined to the actor. The only thing to watch out for is if you reference the var from a closure which is executed outside of the actor (e.g. in a Future transformation or a Props instance). In such a case you need to make a stack-local copy:
val currentUsers = users
other ? Process(users) recoverWith { case _ => backup ? Process(currentUsers) }
In the first case you just grab the value—which is fine—but asking the backup happens from a different thread, hence the need for val currentUsers.
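To make that concrete inside the actor itself, here is a sketch (NotifyAuditor and auditor are illustrative additions, not part of the original code):
import akka.actor.Actor
import scala.concurrent.Future

class UserActor extends Actor {
  import context.dispatcher // ExecutionContext for the Future below

  var users = new ApplicationUsers

  def receive = {
    case GetUsers => sender() ! users
    case SomeMutableOperation =>
      PerformTheChangeOnTheDatabase()
      users = new ApplicationUsers
    case NotifyAuditor =>
      val currentUsers = users // stack-local copy, taken on the actor's thread
      Future {
        // This closure runs on another thread: it must read the copy,
        // never the actor's `users` var directly.
        auditor.record(currentUsers)
      }
  }
}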
Looks fine to me. You don't seem to need Agents here.

Akka actors + Play! 2.0 Scala Edition best practices for spawning and bundling actor instances

My app gets a new instance of Something via an API call on a stateless controller. After I do my mission critical stuff (like saving it to my Postgres database and committing the transaction) I would now like to do a bunch of fire-and-forget operations.
In my controller I send the model instance to the post-processor:
import _root_.com.eaio.uuid.UUID
import akka.actor.Props

// ... skip a bunch of code

play.api.libs.concurrent.Akka.system.actorOf(
  Props[MySomethingPostprocessorActor],
  name = "somethingActor" + new UUID().toString()
) ! something
The MySomethingPostprocessorActor actor looks like this:
class MySomethingPostprocessorActor extends Actor with ActorLogging {
  def receive = {
    case Something(thing, alpha, beta) => try {
      play.api.libs.concurrent.Akka.system.actorOf(
        Props[MongoActor],
        name = "mongoActor" + new UUID().toString()
      ) ! Something(thing, alpha, beta)

      play.api.libs.concurrent.Akka.system.actorOf(
        Props[PubsubActor],
        name = "pubsubActor" + new UUID().toString()
      ) ! Something(thing, alpha, beta)

      // ... and so forth
    } catch {
      case e => {
        log.error("MySomethingPostprocessorActor error=[{}]", e)
      }
    }
  }
}
So, here's what I'm not sure about:
I know Actor factories are discouraged as per the warning on this page. My remedy for this is to name each actor instance with a unique string provided by UUID, to get around the your-actor-is-not-unique errors:
play.core.ActionInvoker$$anonfun$receive$1$$anon$1:
Execution exception [[InvalidActorNameException:
actor name somethingActor is not unique!]]
Is there a better way to do the above, i.e. instead of giving everything a unique name? All examples in the Akka docs I encountered give actors a static name, which is a bit misleading.
(any other comments are welcome too, e.g. the if the bundling pattern I use is frowned upon, etc)
As far as I'm aware, the name parameter is optional.
This may or may not be the case with Akka + Play (haven't checked). When working with standalone actor systems though, you usually only name an actor when you need that reference for later.
From the sounds of it you're tossing out these instances after using them, so you could probably skip the naming step.
Better yet, you could probably save the overhead of creating each actor instance by just wrapping your operations in Futures and using callbacks if need be: http://doc.akka.io/docs/akka/2.0.3/scala/futures.html
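A sketch of that Future-based alternative (shown with scala.concurrent.Future; in the Akka 2.0 era the equivalent lived in akka.dispatch.Future, and mongoPublish, pubsubPublish and log are illustrative stand-ins for what the MongoActor and PubsubActor were doing):
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Success}

// Fire-and-forget post-processing without spawning uniquely named actors.
def postprocess(something: Something)(implicit ec: ExecutionContext): Unit = {
  Future(mongoPublish(something)).onComplete {
    case Failure(e) => log.error("mongo post-processing failed", e)
    case Success(_) => // nothing else to do
  }
  Future(pubsubPublish(something)).onComplete {
    case Failure(e) => log.error("pubsub post-processing failed", e)
    case Success(_) =>
  }
}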