I have some (Akka) actor code that is using a case class + the copy constructor to update state:
def foo(state:StateCaseClass) : Receive = {
import state._
{
case Bar(updates) =>
context become foo(copy(/* change a limited number of things */))
// ... other message processing w/ lots of context become foo(copy(...))
}
}
I'd like to add below the import
def update = context become foo(copy(_))
so that the code can be
def foo(state:StateCaseClass) : Receive = {
import state._
def update = context become foo(copy(_))
{
case Bar(updates) =>
update(/* change a limited number of things */)
// ... etc
}
}
but that doesn't compile. I can of course tweak the def update a bit to get rid of most of the boilerplate, but the copy still sticks around:
def foo(state:StateCaseClass) : Receive = {
import state._
def update(newState:StateCaseClass) = context become foo(newState)
{
case Bar(updates) =>
update(copy(/* change a limited number of things */))
// ... etc
}
}
Is there comparable syntax that will let me pass through the args to the case class copy constructor and dry out that last bit?
Disclaimer: I think the best solution is to use context become explicitly, and I don't recommend using the code below.
I suspect it's impossible without metaprogramming (macros). You would have to create a method with default values for its named parameters.
You could always create such method manually like this:
def update(field1: Int = state.field1, field2: String = state.field2) =
context become foo(StateCaseClass(field1, field2))
...
update(field1 = 0)
...
update(field2 = "str")
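Plugged into the question's foo, that would look roughly like this (just a sketch, assuming StateCaseClass(field1: Int, field2: String) as in the snippet above):
def foo(state: StateCaseClass): Receive = {
  import state._
  // One default argument per case-class field; any field not passed keeps its current value.
  def update(field1: Int = state.field1, field2: String = state.field2): Unit =
    context become foo(StateCaseClass(field1, field2))

  {
    case Bar(updates) =>
      update(field1 = 0) // change only the fields this message affects
    // ... other message processing, each calling update(...)
  }
}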
But I guess it's not what you want.
The only way to get such a method without metaprogramming is... to use the copy method itself. copy calls the constructor, and you can call become from the constructor.
The code below works, but I strongly recommend against using it! It's cryptic code that will confuse other developers.
import akka.actor._
trait ReceiveHelper extends PartialFunction[Any, Unit] {
def receive: PartialFunction[Any, Unit]
override def apply(v: Any) = receive(v)
override def isDefinedAt(v: Any) = receive isDefinedAt v
}
sealed trait TestActorMessage
case object Get extends TestActorMessage
case class SetInt(i: Int) extends TestActorMessage
case class SetString(s: String) extends TestActorMessage
class TestActor extends Actor {
case class Behaviour(intField: Int, strField: String) extends ReceiveHelper {
context become this
val receive: Receive = {
case Get => sender ! (intField -> strField)
case SetInt(i) => copy(intField = i)
case SetString(s) => copy(strField = s)
}
}
def receive = Behaviour(0, "init")
}
Usage:
val system = ActorSystem("testSystem")
val testActor = system.actorOf(Props[TestActor], "testActor")
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
implicit val timeout = Timeout(5 seconds)
testActor ? Get foreach println
// (0,init)
testActor ! SetInt(666)
testActor ? Get foreach println
// (666,init)
testActor ! SetString("next")
testActor ? Get foreach println
// (666,next)
Related
How would I test that a given behavior sends the messages I expect?
Say, three messages of some type, one after the other...
With regular actors (untyped) there was the TestProbe from regular Akka, with methods like expectMsg:
http://doc.akka.io/api/akka/current/index.html#akka.testkit.TestProbe
With akka-typed I'm still scratching my head. There is something called EffectfulActorContext, but I've no idea how to use it.
Example
Say I am writing a simple PingPong service, that given a number n replies with Pong(n) n-times. So:
-> Ping(2)
Pong(2)
Pong(2)
-> Ping(0)
# nothing
-> Ping(1)
Pong(1)
Here is how this behavior might look:
case class Ping(i: Int, replyTo: ActorRef[Pong])
case class Pong(i: Int)
val pingPong: Behavior[Ping] = {
Static {
case Ping(i, replyTo) => (0 until i.max(0)).map(_=> replyTo ! Pong(i))
}
}
My Hack
Now since I can't figure out how to make this work, the "hack" that I am doing right now is making the actor always reply with a list of responses. So the behavior is:
case class Ping(i: Int, replyTo: ActorRef[List[Pong]])
case class Pong(i: Int)
val pingPong: Behavior[Ping] = {
Static {
case Ping(i, replyTo) => replyTo ! (0 until i.max(0)).map(_=>Pong(i)).toList
}
}
Given this hacky change, the tester is easy to write:
package com.test
import akka.typed.AskPattern._
import akka.typed.ScalaDSL._
import akka.typed.{ActorRef, ActorSystem, Behavior, Props}
import akka.util.Timeout
import com.test.PingPong.{Ping, Pong}
import org.scalatest.{FlatSpec, Matchers}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.concurrent.{Await, Future}
object PingPongTester {
/* Expect that the given messages arrived in order */
def expectMsgs(i: Int, msgs: List[Pong]) = {
implicit val timeout: Timeout = 5 seconds
val pingPongBe: ActorSystem[Ping] = ActorSystem("pingPongTester", Props(PingPong.pingPong))
val futures: Future[List[Pong]] = pingPongBe ? (Ping(i, _))
for {
pongs <- futures
done <- {
for ((actual, expected) <- pongs.zip(msgs)) {
assert(actual == expected, s"Expected $expected, but received $actual")
}
assert(pongs.size == msgs.size, s"Expected ${msgs.size} messages, but received ${pongs.size}")
pingPongBe.terminate
}
}
Await.ready(pingPongBe.whenTerminated, 5 seconds)
}
}
object PingPong {
case class Ping(i: Int, replyTo: ActorRef[List[Pong]])
case class Pong(i: Int)
val pingPong: Behavior[Ping] = {
Static {
case Ping(i, replyTo) => replyTo ! (0 until i.max(0)).map(_=>Pong(i)).toList
}
}
}
class MainSpec extends FlatSpec with Matchers {
"PingPong" should "reply with empty when Pinged with zero" in {
PingPongTester.expectMsgs(0, List.empty)
}
it should "reply once when Pinged with one" in {
PingPongTester.expectMsgs(1, List(Pong(1)))
}
it should "reply with empty when Pinged with negative" in {
PingPongTester.expectMsgs(-1, List.empty)
}
it should "reply with as many pongs as Ping requested" in {
PingPongTester.expectMsgs(5, List(Pong(5), Pong(5), Pong(5), Pong(5), Pong(5)))
}
}
I'm using EffectfulActorContext for testing my Akka typed actors and here is an untested example based on your question.
Note: I'm also using the guardian actor provided in the Akka-typed test cases.
class Test extends TypedSpec{
val system = ActorSystem("actor-system", Props(guardian()))
val ctx: EffectfulActorContext[Ping] = new EffectfulActorContext[Ping]("ping", Ping.props(), system)
//This will send the command to Ping Actor
ctx.run(Ping)
//This should get you the inbox of the Pong created inside the Ping actor.
val pongInbox = ctx.getInbox("pong")
assert(pongInbox.hasMessages)
val pongMessages = pongInbox.receiveAll()
pongMessages.size should be(1) //1 or whatever number of messages you expect
}
Edit (some more info): in cases where I need to add a replyTo ActorRef to my messages, I do the following:
case class Pong(replyTo: ActorRef[Response])
val responseInbox: SyncInbox[Response] = Inbox.sync[Response]("responseInbox")
Pong(responseInbox.ref)
My initial approach to testing was to extend the Behavior class:
class TestQueueBehavior[Protocol] extends Behavior[Protocol] {
val messages: BlockingQueue[Protocol] = new LinkedBlockingQueue[Protocol]()
val behavior: Protocol => Unit = {
(p: Protocol) => messages.put(p)
}
def pollMessage(timeout: FiniteDuration = 3.seconds): Protocol = {
messages.poll(timeout.toMillis, TimeUnit.MILLISECONDS)
}
override def management(ctx: ActorContext[Protocol], msg: Signal): Behavior[Protocol] = msg match {
case _ ⇒ ScalaDSL.Unhandled
}
override def message(ctx: ActorContext[Protocol], msg: Protocol): Behavior[Protocol] = msg match {
case p =>
behavior(p)
Same
}
}
Then I could call behavior.pollMessage(2.seconds) shouldBe somethingToCompareTo, which was very similar to using TestProbe.
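For example, against the original pingPong behavior from the question (the one whose Ping carries an ActorRef[Pong]), usage might look roughly like the untested sketch below; it assumes the old akka.typed 2.4 experimental API, where a typed ActorSystem[T] also acts as the ActorRef[T] of its guardian behavior:
import akka.typed.{ActorSystem, Props}
import scala.concurrent.duration._

// The probe behavior runs as the guardian of its own small system, so the
// system itself can be handed out as the replyTo ActorRef[Pong].
val probe = new TestQueueBehavior[Pong]
val probeSystem: ActorSystem[Pong] = ActorSystem("probe", Props(probe))
val pingPongSystem: ActorSystem[Ping] = ActorSystem("pingPong", Props(pingPong))

pingPongSystem ! Ping(2, probeSystem)
probe.pollMessage(2.seconds) shouldBe Pong(2)
probe.pollMessage(2.seconds) shouldBe Pong(2)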
Although I think EffectfulActorContext is the right way to go, unfortunately I couldn't figure out how to use it properly.
Let's say I have an actor called TestedActor which is able to save an Int value and send it back, as follows:
class TestedActor extends Actor {
override def receive = receive(0)
def receive(number: Int): Receive = {
case new_number: Int => context.become(receive(new_number))
case ("get", ref: ActorRef) => ref ! number
}
}
In my test, I would like to be able to get this Integer and test it.
So I've been thinking about creating something like:
class ActorsSpecs extends FlatSpec with Matchers {
case class TestingPositive(testedActor: ActorRef) extends Actor {
override def receive = {
case number: Int => checkNumber(number)
case "get" => testedActor ! ("get", self)
}
def checkNumber(number: Int) = {
number should be > 0
}
}
implicit val system = ActorSystem("akka-stream")
implicit val flowMaterializer = ActorMaterializer()
val testedActor = system.actorOf(Props[TestedActor], name = "testedActor")
val testingActor = system.actorOf(Props(new TestingPositive(testedActor)), name = "testingActor")
testingActor ! "get"
}
This way, I'm able to create this TestingPositive actor to get the number from the TestedActor and test it in checkNumber.
It seems to be working well; my problem is:
When the test fails, it raises an exception in the actor thread. I can see what went wrong in the console, but it still reports that all my tests succeeded, because (I think) the main thread is not aware of the failure.
Does someone know an easier way than all of this TestingActor stuff?
Or any solution to tell the main thread that it failed?
Thank you
Take a look at the TestKit docs here. You can write a much simpler test for your actor. See how you like this test:
import akka.actor.{Props, ActorSystem}
import akka.testkit.{TestProbe, TestKit}
import org.scalatest.{BeforeAndAfterAll, FlatSpecLike, ShouldMatchers}
class ActorSpecs extends TestKit(ActorSystem("TestSystem"))
with FlatSpecLike
with ShouldMatchers
with BeforeAndAfterAll {
override def afterAll = {
TestKit.shutdownActorSystem(system)
}
def fixtures = new {
val caller = TestProbe()
val actorUnderTest = system.actorOf(Props[TestedActor], name = "testedActor")
}
"The TestedActor" should "pass a good test" in {
val f = fixtures; import f._
caller.send(actorUnderTest, 42)
caller.send(actorUnderTest, ("get", caller.ref))
caller.expectMsg(42)
}
"The TestedActor" should "fail a bad test" in {
val f = fixtures; import f._
caller.send(actorUnderTest, 42)
caller.send(actorUnderTest, ("get", caller.ref))
caller.expectMsg("this won't work")
}
}
Also, you should know about sender. While your get certainly works, a cleaner approach might be to reply to the sending actor:
def receive(number: Int): Receive = {
case new_number: Int => context.become(receive(new_number))
case "get" => sender ! number
}
And the test becomes:
"The TestedActor" should "pass a good test" in {
val f = fixtures; import f._
caller.send(actorUnderTest, 42)
caller.send(actorUnderTest, "get")
caller.expectMsg(42)
}
And finally, I'll shamelessly plug my recent blog post about maintaining an akka code base with my team. I feel morally obligated to give a new hAkker an opportunity to read it. :)
Background: I have a function:
def doWork(symbol: String): Future[Unit]
which initiates some side effects to fetch data and store it, and completes a Future when it's done. However, the back-end infrastructure has usage limits, such that no more than 5 of these requests can be made in parallel. I have a list of N symbols that I need to get through:
var symbols = Array("MSFT",...)
but I want to sequence them such that no more than 5 are executing simultaneously. Given:
val allowableParallelism = 5
my current solution is (assuming I'm working with async/await):
val symbolChunks = symbols.toList.grouped(allowableParallelism).toList
def toThunk(x: List[String]) = () => Future.sequence(x.map(doWork))
val symbolThunks = symbolChunks.map(toThunk)
val done = Promise[Unit]()
def procThunks(x: List[() => Future[List[Unit]]]): Unit = x match {
case Nil => done.success()
case x::xs => x().onComplete(_ => procThunks(xs))
}
procThunks(symbolThunks)
await { done.future }
but, for obvious reasons, I'm not terribly happy with it. I feel like this should be possible with folds, but every time I try, I end up eagerly creating the Futures. I also tried out a version with RxScala Observables, using concatMap, but that also seemed like overkill.
Is there a better way to accomplish this?
I have an example of how to do it with scalaz-stream. It's quite a lot of code because we need to convert a Scala Future to a scalaz Task (an abstraction for deferred computation); however, that conversion only needs to be added to the project once. Another option is to use Task for defining doWork. I personally prefer Task for building async programs.
import scala.concurrent.{Future => SFuture}
import scala.util.Random
import scala.concurrent.ExecutionContext.Implicits.global
import scalaz.stream._
import scalaz.concurrent._
val P = scalaz.stream.Process
val rnd = new Random()
def doWork(symbol: String): SFuture[Unit] = SFuture {
Thread.sleep(rnd.nextInt(1000))
println(s"Symbol: $symbol. Thread: ${Thread.currentThread().getName}")
}
val symbols = Seq("AAPL", "MSFT", "GOOGL", "CVX").
flatMap(s => Seq.fill(5)(s).zipWithIndex.map(t => s"${t._1}${t._2}"))
implicit class Transformer[+T](fut: => SFuture[T]) {
def toTask(implicit ec: scala.concurrent.ExecutionContext): Task[T] = {
import scala.util.{Failure, Success}
import scalaz.syntax.either._
Task.async {
register =>
fut.onComplete {
case Success(v) => register(v.right)
case Failure(ex) => register(ex.left)
}
}
}
}
implicit class ConcurrentProcess[O](val process: Process[Task, O]) {
def concurrently[O2](concurrencyLevel: Int)(f: Channel[Task, O, O2]): Process[Task, O2] = {
val actions =
process.
zipWith(f)((data, f) => f(data))
val nestedActions =
actions.map(P.eval)
merge.mergeN(concurrencyLevel)(nestedActions)
}
}
val workChannel = io.channel((s: String) => doWork(s).toTask)
val process = Process.emitAll(symbols).concurrently(5)(workChannel)
process.run.run
Once you have all these transformations in scope, basically all you need is:
val workChannel = io.channel((s: String) => doWork(s).toTask)
val process = Process.emitAll(symbols).concurrently(5)(workChannel)
Quite short and self-describing.
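As for the other option mentioned at the start - defining doWork directly as a Task instead of converting a Future - a minimal sketch (same imports as above) could be:
// Task is lazy: nothing runs until the process is executed,
// so no Future-to-Task conversion is needed.
def doWorkTask(symbol: String): Task[Unit] = Task {
  Thread.sleep(rnd.nextInt(1000))
  println(s"Symbol: $symbol. Thread: ${Thread.currentThread().getName}")
}

val workChannel = io.channel((s: String) => doWorkTask(s))
val process = Process.emitAll(symbols).concurrently(5)(workChannel)
process.run.run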
Although you've already got an excellent answer, I thought I might still offer an opinion or two about these matters.
I remember seeing somewhere (on someone's blog) "use actors for state and use futures for concurrency".
So my first thought would be to utilize actors somehow. To be precise, I would have a master actor with a router launching multiple worker actors, with the number of workers restricted according to allowableParallelism. So, assuming I have
def doWorkInternal (symbol: String): Unit
which does the work from your doWork taken 'outside of the future', I would have something along these lines (very rudimentary, not taking many details into consideration, and practically copying code from the Akka documentation):
import akka.actor._
import akka.routing._
case class WorkItem (symbol: String)
case class WorkItemCompleted (symbol: String)
case class WorkLoad (symbols: Array[String])
case class WorkLoadCompleted ()
class Worker extends Actor {
def receive = {
case WorkItem (symbol) =>
doWorkInternal (symbol)
sender () ! WorkItemCompleted (symbol)
}
}
class Master extends Actor {
var pending = Set[String] ()
var originator: Option[ActorRef] = None
var router = {
val routees = Vector.fill (allowableParallelism) {
val r = context.actorOf(Props[Worker])
context watch r
ActorRefRoutee(r)
}
Router (RoundRobinRoutingLogic(), routees)
}
def receive = {
case WorkLoad (symbols) =>
originator = Some (sender ())
context become processing
for (symbol <- symbols) {
router.route (WorkItem (symbol), self)
pending += symbol
}
}
def processing: Receive = {
case Terminated (a) =>
router = router.removeRoutee(a)
val r = context.actorOf(Props[Worker])
context watch r
router = router.addRoutee(r)
case WorkItemCompleted (symbol) =>
pending -= symbol
if (pending.size == 0) {
context become receive
originator.get ! WorkLoadCompleted ()
}
}
}
You could query the master actor with ask and receive a WorkLoadCompleted in a future.
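For completeness, the querying side could look roughly like this (a sketch; it assumes an ActorSystem called system and the definitions above in scope):
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._

implicit val timeout = Timeout(1 minute)
val master = system.actorOf(Props[Master], "master")
// Completes once the master has seen a WorkItemCompleted for every symbol.
val done: Future[Any] = master ? WorkLoad(symbols)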
But thinking more about the 'state' (the number of simultaneous requests being processed) that has to be hidden somewhere, together with the code needed to keep from exceeding it, here's something of the 'future gateway intermediary' sort, if you don't mind an imperative style and mutable (though only internally used) structures:
import scala.concurrent.{Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global // needed for onComplete
import scala.util.Try

object Guardian
{
private val incoming = new collection.mutable.HashMap[String, Promise[Unit]]()
private val outgoing = new collection.mutable.HashMap[String, Future[Unit]]()
private val pending = new collection.mutable.Queue[String]
def doWorkGuarded (symbol: String): Future[Unit] = {
synchronized {
val p = Promise[Unit] ()
incoming(symbol) = p
if (incoming.size <= allowableParallelism)
launchWork (symbol)
else
pending.enqueue (symbol)
p.future
}
}
private def completionHandler (t: Try[Unit]): Unit = {
synchronized {
for (symbol <- outgoing.keySet) {
val f = outgoing (symbol)
if (f.isCompleted) {
incoming (symbol).completeWith (f)
incoming.remove (symbol)
outgoing.remove (symbol)
}
}
for (i <- outgoing.size until allowableParallelism) {
if (pending.nonEmpty) {
val symbol = pending.dequeue()
launchWork (symbol)
}
}
}
}
private def launchWork (symbol: String): Unit = {
val f = doWork(symbol)
outgoing(symbol) = f
f.onComplete(completionHandler)
}
}
doWork now is exactly like yours, returning Future[Unit], with the idea that instead of using something like
val futures = symbols.map (doWork (_)).toSeq
val future = Future.sequence(futures)
which would launch futures with no regard for allowableParallelism at all, I would instead use
val futures = symbols.map (Guardian.doWorkGuarded (_)).toSeq
val future = Future.sequence(futures)
Think about some hypothetical database access driver with a non-blocking interface, i.e. one returning futures on requests, which is limited in concurrency because it is built over some connection pool, for example - you wouldn't want it to return futures that take no account of the parallelism level and require you to juggle them to keep parallelism under control.
This example is more illustrative than practical, since I wouldn't normally expect an 'outgoing' interface to use futures like this (which is quite ok for an 'incoming' interface).
First, obviously some purely functional wrapper around Scala's Future is needed, because it is side-effecting and runs as soon as it can. Let's call it Deferred:
import scala.concurrent.Future
import scala.util.control.Exception.nonFatalCatch
class Deferred[+T](f: () => Future[T]) {
def run(): Future[T] = f()
}
object Deferred {
def apply[T](future: => Future[T]): Deferred[T] =
new Deferred(() => nonFatalCatch.either(future).fold(Future.failed, identity))
}
And here is the routine:
import java.util.concurrent.CopyOnWriteArrayList
import java.util.concurrent.atomic.AtomicInteger
import scala.collection.immutable.Seq
import scala.concurrent.{ExecutionContext, Future, Promise}
import scala.util.control.Exception.nonFatalCatch
import scala.util.{Failure, Success}
trait ConcurrencyUtils {
def runWithBoundedParallelism[T](parallelism: Int = Runtime.getRuntime.availableProcessors())
(operations: Seq[Deferred[T]])
(implicit ec: ExecutionContext): Deferred[Seq[T]] =
if (parallelism > 0) Deferred {
val indexedOps = operations.toIndexedSeq // index for faster access
val promise = Promise[Seq[T]]()
val acc = new CopyOnWriteArrayList[(Int, T)] // concurrent acc
val nextIndex = new AtomicInteger(parallelism) // keep track of the next index atomically
def run(operation: Deferred[T], index: Int): Unit = {
operation.run().onComplete {
case Success(value) =>
acc.add((index, value)) // accumulate result value
if (acc.size == indexedOps.size) { // we're done
import scala.collection.JavaConversions._
// in concurrent setting next line may be called multiple times, that's why trySuccess instead of success
promise.trySuccess(acc.view.sortBy(_._1).map(_._2).toList)
} else {
val next = nextIndex.getAndIncrement() // get and inc atomically
if (next < indexedOps.size) { // run next operation if exists
run(indexedOps(next), next)
}
}
case Failure(t) =>
promise.tryFailure(t) // same here (may be called multiple times, let's prevent stdout pollution)
}
}
if (operations.nonEmpty) {
indexedOps.view.take(parallelism).zipWithIndex.foreach((run _).tupled) // run as much as allowed
promise.future
} else {
Future.successful(Seq.empty)
}
} else {
throw new IllegalArgumentException("Parallelism must be positive")
}
}
In a nutshell, we initially run as many operations as allowed, and then on each operation's completion we run the next available operation, if any. So the only difficulty here is maintaining the next-operation index and the results accumulator in a concurrent setting. I'm not an absolute concurrency expert, so let me know if there are any potential problems in the code above. Notice that the returned value is also a deferred computation that has to be run.
Usage and test:
import org.scalatest.{Matchers, FlatSpec}
import org.scalatest.concurrent.ScalaFutures
import org.scalatest.time.{Seconds, Span}
import scala.collection.immutable.Seq
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.concurrent.duration._
class ConcurrencyUtilsSpec extends FlatSpec with Matchers with ScalaFutures with ConcurrencyUtils {
"runWithBoundedParallelism" should "return results in correct order" in {
val comp1 = mkDeferredComputation(1)
val comp2 = mkDeferredComputation(2)
val comp3 = mkDeferredComputation(3)
val comp4 = mkDeferredComputation(4)
val comp5 = mkDeferredComputation(5)
val compoundComp = runWithBoundedParallelism(2)(Seq(comp1, comp2, comp3, comp4, comp5))
whenReady(compoundComp.run()) { result =>
result should be (Seq(1, 2, 3, 4, 5))
}
}
// increase default ScalaTest patience
implicit val defaultPatience = PatienceConfig(timeout = Span(10, Seconds))
private def mkDeferredComputation[T](result: T, sleepDuration: FiniteDuration = 100.millis): Deferred[T] =
Deferred {
Future {
Thread.sleep(sleepDuration.toMillis)
result
}
}
}
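Wiring this back to the question's doWork, symbols and allowableParallelism (a sketch, assuming the surrounding code mixes in ConcurrencyUtils and has an implicit ExecutionContext in scope):
// Each symbol becomes a lazy Deferred; nothing starts here.
val deferredOps = symbols.toList.map(s => Deferred(doWork(s)))
// Composes the bounded-parallelism plan; still nothing runs.
val plan = runWithBoundedParallelism(allowableParallelism)(deferredOps)
// Only now are at most `allowableParallelism` doWork calls in flight at once.
val done = plan.run()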
Use Monix Task. An example from the Monix documentation, for parallelism = 10:
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global

val items = 0 until 1000
// The list of all tasks needed for execution
val tasks = items.map(i => Task(i * 2))
// Building batches of 10 tasks to execute in parallel:
val batches = tasks.sliding(10,10).map(b => Task.gather(b))
// Sequencing batches, then flattening the final result
val aggregate = Task.sequence(batches).map(_.flatten.toList)
// Evaluation:
aggregate.foreach(println)
//=> List(0, 2, 4, 6, 8, 10, 12, 14, 16,...
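Applied to the question, a rough sketch (using the same imports, and assuming the question's doWork, symbols and allowableParallelism are in scope) might be:
// Task.deferFuture keeps things lazy: doWork is not called until the Task runs.
val tasks = symbols.toList.map(s => Task.deferFuture(doWork(s)))
// Batches of `allowableParallelism` tasks, each batch running in parallel
// (note: each batch waits for its slowest task before the next batch starts).
val batches = tasks.sliding(allowableParallelism, allowableParallelism)
  .map(b => Task.gather(b)).toList
// Run the batches one after another.
val all: Task[Unit] = Task.sequence(batches).map(_ => ())
// Execution (the method is runAsync in Monix 2.x, runToFuture in 3.x):
all.runAsync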
Akka Streams allows you to do the following:
import akka.NotUsed
import akka.stream.Materializer
import akka.stream.scaladsl.Source
import scala.concurrent.Future
def sequence[A: Manifest, B](items: Seq[A], func: A => Future[B], parallelism: Int)(
implicit mat: Materializer
): Future[Seq[B]] = {
val futures: Source[B, NotUsed] =
Source[A](items.toList).mapAsync(parallelism)(x => func(x))
futures.runFold(Seq.empty[B])(_ :+ _)
}
sequence(symbols, doWork, allowableParallelism)
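If the order of the results doesn't matter, the same idea works with mapAsyncUnordered, which can improve throughput since results are emitted as they complete; a small variant of the helper above:
def sequenceUnordered[A, B](items: Seq[A], func: A => Future[B], parallelism: Int)(
    implicit mat: Materializer
): Future[Seq[B]] =
  // Up to `parallelism` futures run at once; results arrive in completion order.
  Source(items.toList).mapAsyncUnordered(parallelism)(func).runFold(Seq.empty[B])(_ :+ _)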
I am trying to set up a heartbeat over a network, i.e. have an actor send a message to the network at a fixed period. I would like to know if you have a better solution than the one I used below, as I feel it is pretty ugly, considering the synchronisation constraints.
import akka.actor._
import akka.actor.Actor
import akka.actor.Props
import akka.actor.ScalaActorRef
import akka.pattern.gracefulStop
import akka.util._
import java.util.Calendar
import java.util.concurrent._
import java.text.SimpleDateFormat
import scala.Array._
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
sealed trait Message
case class Information() extends Message // does it really need to be here?
case class StartMessage() extends Message
case class HeartbeatMessage() extends Message
case class StopMessage() extends Message
case class FrequencyChangeMessage(
f: Int
) extends Message
class Gps extends Actor {
override def preStart() {
val child = context.actorOf(Props(new Cadencer(500)), name = "cadencer")
}
def receive = {
case "beat" =>
//TODO
case _ =>
println("gps: wut?")
}
}
class Cadencer(p3riod: Int) extends Actor {
var period: Int = _
var stop: Boolean = _
override def preStart() {
period = p3riod
stop = false
context.system.scheduler.scheduleOnce(period milliseconds, self, HeartbeatMessage)
}
def receive = {
case StartMessage =>
stop = false
context.system.scheduler.scheduleOnce(period milliseconds, self, HeartbeatMessage)
case HeartbeatMessage =>
if (false == stop) {
context.system.scheduler.scheduleOnce(0 milliseconds, context.parent, "beat")
context.system.scheduler.scheduleOnce(period milliseconds, self, HeartbeatMessage)
}
case StopMessage =>
stop = true
case FrequencyChangeMessage(f) =>
period = f
case _ =>
println("wut?\n")
//throw exception
}
}
object main extends App {
val system = akka.actor.ActorSystem("mySystem")
val gps = system.actorOf(Props[Gps], name = "gps")
}
What I called a cadencer here sends a HeartbeatMessage to a target actor and to itself; to itself in order to trigger sending another one after the given amount of time, so the process keeps going until a StopMessage arrives (flipping stop to true). Is this good?
Is having a separate actor for this even efficient, rather than doing it inside a bigger one?
Try this. It does not need a separate cadencer class.
class Gps extends Actor
{
var ticker : Cancellable = _
override def preStart()
{
println ("Gps prestart")
// val child = context.actorOf(Props(new Cadencer(500)), name = "cadencer")
ticker = context.system.scheduler.schedule (
500 milliseconds,
500 milliseconds,
context.self,
"beat")
}
def receive: PartialFunction[Any, Unit] =
{
case "beat" =>
println ("got a beat")
case "stop" =>
ticker.cancel()
case _ =>
println("gps: wut?")
}
}
object main extends App
{
val system = akka.actor.ActorSystem("mySystem")
val gps = system.actorOf(Props[Gps], name = "gps")
Thread.sleep (5000)
gps ! "stop"
println ("stop")
}
Actors are pretty lightweight, so it is no problem to have one actor for sending heartbeat messages (and it's preferable if you think of the Single Responsibility Principle).
Further remarks:
If you want to get rid of the period var, you can do the following (it's called hotswapping):
override def preStart() {
// ...
context.become(receive(p3riod))
}
def receive(period: Int) = {
// ...
case FrequencyChangeMessage(f) =>
context.become(receive(f))
// ...
}
Instead of using the stop var, you can stop the actor after getting the StopMessage.
If you need a heartbeat actor again, just start a new one.
Instead of scheduling with 0 milliseconds, you can send the message directly to the parent.
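Putting those remarks together, a var-free Cadencer could look roughly like this (an untested sketch):
class Cadencer(initialPeriod: Int) extends Actor {
  import context.dispatcher
  import scala.concurrent.duration._

  override def preStart(): Unit =
    context.system.scheduler.scheduleOnce(initialPeriod milliseconds, self, HeartbeatMessage())

  def receive = running(initialPeriod)

  def running(period: Int): Receive = {
    case HeartbeatMessage() =>
      context.parent ! "beat"        // send directly instead of scheduling with 0 milliseconds
      context.system.scheduler.scheduleOnce(period milliseconds, self, HeartbeatMessage())
    case FrequencyChangeMessage(f) =>
      context.become(running(f))     // hotswap the period instead of mutating a var
    case StopMessage() =>
      context.stop(self)             // no stop flag; start a new actor if needed again
  }
}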
I have the following scala code:
package dummy
import javax.servlet.http.{HttpServlet,
HttpServletRequest => HSReq, HttpServletResponse => HSResp}
import scala.actors.Actor
class DummyServlet extends HttpServlet {
RNG.start
override def doGet(req: HSReq, resp: HSResp) = {
def message = <HTML><HEAD><TITLE>RandomNumber </TITLE></HEAD><BODY>
Random number = {getRandom}</BODY></HTML>
resp.getWriter().print(message)
def getRandom: String = {var d = new DummyActor;d.start;d.getRandom}
}
class DummyActor extends Actor {
var result = "0"
def act = { RNG ! GetRandom
react { case (r:Int) => result = r.toString }
}
def getRandom:String = {
Thread.sleep(300)
result
}
}
}
// below code is not modifiable. I am using it as a library
case object GetRandom
object RNG extends Actor {
def act{loop{react{case GetRandom=>sender!scala.util.Random.nextInt}}}
}
In the above code, I have to use Thread.sleep to ensure that there is enough time for result to get updated; otherwise 0 is returned. What is a more elegant way of doing this without using Thread.sleep? I think I have to use futures but I cannot get my head around the concept. I need to ensure that each HTTP request gets a unique random number (of course, the random number is just to explain the problem). Some hints or references would be appreciated.
Either use:
!! <-- returns a Future that you can wait for
or
!? <-- use the variant with a timeout; the fully synchronous one is dangerous
Given your definition of RNG, here's some REPL code to verify:
scala> def foo = { println(RNG.!?(1000,GetRandom)) }
foo: Unit
scala> foo
Some(-1025916420)
scala> foo
Some(-1689041124)
scala> foo
Some(-1633665186)
Docs are here: http://www.scala-lang.org/api/current/scala/actors/Actor.html
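For the !! variant, getRandom could be written roughly like this (a sketch that replaces the DummyActor and the Thread.sleep; a scala.actors Future is applied like a function and blocks only until the reply arrives):
def getRandom: String = {
  val future = RNG !! GetRandom      // returns a scala.actors.Future[Any]
  future() match {                   // apply() blocks until RNG replies
    case r: Int => r.toString
    case other  => other.toString
  }
}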