I apologize in advance if this seems at all confusing, as I'm dumping quite a bit here. Basically, I have a small service grabbing some JSON, parsing and extracting it to case class(es), then writing it to a database. This service needs to run on a schedule, which is being handled well by an Akka scheduler. My database doesn't like it when Slick asks for a new AutoInc id for several inserts at the same time, so I built in an Await.result to block that from happening.
All of this works quite well, but my issue starts here: there are 7 of these services running, so I would like to block each one using a similar Await.result system. Every time I try to send the end time of the request back as a response (at the end of the else block), it gets sent to dead letters instead of to the Distributor. Basically: why does sender ! time go to dead letters and not to Distributor? This is a long question for a simple problem, but that's how development goes...
ClickActor.scala
import java.text.SimpleDateFormat
import java.util.Date
import Message._
import akka.actor.{Actor, ActorLogging, Props}
import akka.util.Timeout
import com.typesafe.config.ConfigFactory
import net.liftweb.json._
import spray.client.pipelining._
import spray.http.{BasicHttpCredentials, HttpRequest, HttpResponse, Uri}
import akka.pattern.ask
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
case class ClickData(recipient : String, geolocation : Geolocation, tags : Array[String],
url : String, timestamp : Double, campaigns : Array[String],
`user-variables` : JObject, ip : String,
`client-info` : ClientInfo, message : ClickedMessage, event : String)
case class Geolocation(city : String, region : String, country : String)
case class ClientInfo(`client-name`: String, `client-os`: String, `user-agent`: String,
`device-type`: String, `client-type`: String)
case class ClickedMessage(headers : ClickHeaders)
case class ClickHeaders(`message-id` : String)
class ClickActor extends Actor with ActorLogging{
implicit val formats = DefaultFormats
implicit val timeout = new Timeout(3 minutes)
import context.dispatcher
val con = ConfigFactory.load("connection.conf")
val countries = ConfigFactory.load("country.conf")
val regions = ConfigFactory.load("region.conf")
val df = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss -0000")
var time = System.currentTimeMillis()
var begin = new Date(time - (12 hours).toMillis)
var end = new Date(time)
val pipeline : HttpRequest => Future[HttpResponse] = (
addCredentials(BasicHttpCredentials("api", con.getString("mailgun.key")))
~> sendReceive
)
def get(lastrun : Long): Future[String] = {
if(lastrun != 0) {
begin = new Date(lastrun)
end = new Date(time)
}
val uri = Uri(con.getString("mailgun.uri")) withQuery("begin" -> df.format(begin), "end" -> df.format(end),
"ascending" -> "yes", "limit" -> "100", "pretty" -> "yes", "event" -> "clicked")
val request = Get(uri)
val futureResponse = pipeline(request)
futureResponse.map(_.entity.asString)
}
def receive = {
case lastrun : Long => {
val start = System.currentTimeMillis()
val responseFuture = get(lastrun)
responseFuture.onSuccess {
case payload: String => val json = parse(payload)
//println(pretty(render(json)))
val elements = (json \\ "items").children
if (elements.length == 0) {
log.info("[ClickActor: " + this.hashCode() + "] did not find new events between " +
begin.toString + " and " + end.toString)
sender ! time
context.stop(self)
}
else {
for (item <- elements) {
val data = item.extract[ClickData]
var tags = ""
if (data.tags.length != 0) {
for (tag <- data.tags)
tags += (tag + ", ")
}
var campaigns = ""
if (data.campaigns.length != 0) {
for (campaign <- data.campaigns)
campaigns += (campaign + ", ")
}
val timestamp = (data.timestamp * 1000).toLong
val msg = new ClickMessage(
data.recipient, data.geolocation.city,
regions.getString(data.geolocation.country + "." + data.geolocation.region),
countries.getString(data.geolocation.country), tags, data.url, timestamp,
campaigns, data.ip, data.`client-info`.`client-name`,
data.`client-info`.`client-os`, data.`client-info`.`user-agent`,
data.`client-info`.`device-type`, data.`client-info`.`client-type`,
data.message.headers.`message-id`, data.event, compactRender(item))
val csqla = context.actorOf(Props[ClickSQLActor])
val future = csqla.ask(msg)
val result = Await.result(future, timeout.duration).asInstanceOf[Int]
if (result == 1) {
log.error("[ClickSQLActor: " + csqla.hashCode() + "] shutting down due to lack of system environment variables")
context.stop(csqla)
}
else if(result == 0) {
log.info("[ClickSQLActor: " + csqla.hashCode() + "] successfully wrote to the DB")
}
}
sender ! time
log.info("[ClickActor: " + this.hashCode() + "] processed |" + elements.length + "| new events in " +
(System.currentTimeMillis() - start) + " ms")
}
}
}
}
}
Distributor.scala
import akka.actor.{Props, ActorSystem}
import akka.event.Logging
import akka.util.Timeout
import akka.pattern.ask
import scala.concurrent.duration._
import scala.concurrent.Await
class Distributor {
implicit val timeout = new Timeout(10 minutes)
var lastClick : Long = 0
def distribute(system : ActorSystem) = {
val log = Logging(system, getClass)
val clickFuture = (system.actorOf(Props[ClickActor]) ? lastClick)
lastClick = Await.result(clickFuture, timeout.duration).asInstanceOf[Long]
log.info(lastClick.toString)
//repeat process with other events (open, unsub, etc)
}
}
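For reference, the scheduling the question mentions might look something like the sketch below (my assumption of the wiring; the asker's scheduler code isn't shown):
import akka.actor.ActorSystem
import scala.concurrent.duration._

object Runner extends App {
  val system = ActorSystem("clicks")
  import system.dispatcher // ExecutionContext for the scheduler

  val distributor = new Distributor
  // Run a distribution pass immediately, then every 12 hours, matching
  // the 12-hour window ClickActor queries by default.
  system.scheduler.schedule(0.seconds, 12.hours)(distributor.distribute(system))
}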
The reason is that the value of sender (which is a method that retrieves the reference, not a stable value) is no longer valid after the actor leaves the receive block, yet the future used in the example above is still running. By the time it finishes, the actor has left the receive block and bang: an invalid sender results in the message going to the dead letter queue.
The fix is either not to use a future or, when combining futures, actors and sender, to capture the value of sender before you trigger the future:
val s = sender
val responseFuture = get(lastrun)
responseFuture.onSuccess {
....
s ! time
}
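An alternative (a sketch of my own, not part of the original answer) is akka.pattern.pipe: transform the future into the reply value and pipe it to a reference captured while still inside receive:
import akka.pattern.pipe

val replyTo = sender() // evaluated now, while the sender is still valid
get(lastrun)
  .map(_ => time)  // turn the completed request into the reply value
  .pipeTo(replyTo) // delivered when the future finishes; uses the imported context.dispatcher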
I'm trying to learn how to use Alpakka and have set up a test to write a document to Elasticsearch. From reading the docs, including https://doc.akka.io/docs/alpakka/current/elasticsearch.html, I have written the following:
import akka.actor.ActorSystem
import akka.stream.alpakka.elasticsearch.scaladsl.ElasticsearchSink
import akka.stream.alpakka.elasticsearch._
import akka.stream.scaladsl.Source
import spray.json.DefaultJsonProtocol._
import spray.json.{JsonFormat, _}
object AlpakkaWrite extends App{
case class VolResult(symbol : String, vol : Double, timestamp : Long)
implicit val actorSystem = ActorSystem()
val connectionString = "****";
val userName = "****"
val password = "****"
def constructElasticsearchParams(indexName: String, typeName: String, apiVersion: ApiVersion) =
if (apiVersion eq ApiVersion.V5)
ElasticsearchParams.V5(indexName, typeName)
else if (apiVersion eq ApiVersion.V7)
ElasticsearchParams.V7(indexName)
else
throw new IllegalArgumentException("API version " + apiVersion + " is not supported")
val connectionSettings = ElasticsearchConnectionSettings
.create(connectionString).withCredentials(userName, password)
val sinkSettings =
ElasticsearchWriteSettings.create(connectionSettings).withApiVersion(ApiVersion.V7);
implicit val formatVersionTestDoc: JsonFormat[VolResult] = jsonFormat3(VolResult)
Source(List(VolResult("test" , 1 , System.currentTimeMillis())))
.map { message: VolResult =>
WriteMessage.createIndexMessage("00002", message )
}
.log("Error")
.runWith(
ElasticsearchSink.create[VolResult](
constructElasticsearchParams("ccy_vol_normalized", "_doc", ApiVersion.V7),
settings = sinkSettings
)
)
}
Outputs:
19:15:51.815 [default-akka.actor.default-dispatcher-5] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
19:15:52.547 [default-akka.actor.default-dispatcher-5] ERROR akka.stream.alpakka.elasticsearch.impl.ElasticsearchSimpleFlowStage$StageLogic - Received error from elastic after having already processed 0 documents. Error: java.lang.RuntimeException: Request failed for POST /_bulk
Have I defined the case class VolResult correctly? Does it match the expected payload defined in the index mapping below?
"properties": {
"timestamp": { "type": "date",
"format": "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
},
"vol": { "type": "float" },
"symbol": { "type": "text" }
}
Using the Elastic dev tools, the following command will insert a document successfully:
POST ccy_vol_normalized/_doc/
{
"timestamp": "2022-10-21T00:00:00.000Z",
"vol": 1.221,
"symbol" : "SYM"
}
This works:
import akka.actor.ActorSystem
import akka.stream.alpakka.elasticsearch._
import akka.stream.alpakka.elasticsearch.scaladsl.ElasticsearchSink
import akka.stream.scaladsl.Source
import spray.json.DefaultJsonProtocol._
import spray.json.JsonFormat
import java.text.SimpleDateFormat
import java.util.Date
object AlpakkaWrite extends App {
val connectionString = "";
implicit val actorSystem = ActorSystem()
val userName = ""
val password = ""
val connectionSettings = ElasticsearchConnectionSettings
.create(connectionString).withCredentials(userName, password)
val sinkSettings =
ElasticsearchWriteSettings.create(connectionSettings).withApiVersion(ApiVersion.V7);
val HOUR = 1000 * 60 * 60
val utcDate = new Date(System.currentTimeMillis() - HOUR)
val ts = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS").format(utcDate) + "Z"
implicit val formatVersionTestDoc: JsonFormat[VolResult] = jsonFormat3(VolResult)
def constructElasticsearchParams(indexName: String, typeName: String, apiVersion: ApiVersion) =
if (apiVersion eq ApiVersion.V5)
ElasticsearchParams.V5(indexName, typeName)
else if (apiVersion eq ApiVersion.V7)
ElasticsearchParams.V7(indexName)
else
throw new IllegalArgumentException("API version " + apiVersion + " is not supported")
case class VolResult(symbol: String, vol: Double, timestamp: String)
println("ts : " + ts)
Source(List(VolResult("test1", 1, ts)))
.map { message: VolResult =>
WriteMessage.createIndexMessage(System.currentTimeMillis().toString, message)
}
.log("Error")
.runWith(
ElasticsearchSink.create[VolResult](
constructElasticsearchParams("ccy_vol_normalized", "_doc", ApiVersion.V7),
settings = sinkSettings
)
)
}
My date format was incorrect, using:
val HOUR = 1000 * 60 * 60
val utcDate = new Date(System.currentTimeMillis() - HOUR)
val ts = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS").format(utcDate) + "Z"
fixed the issue.
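One caveat worth adding (my observation, not part of the original post): SimpleDateFormat formats in the JVM's default time zone, so appending a literal "Z" only yields a true UTC timestamp when the JVM itself runs in UTC. A java.time sketch that formats explicitly in UTC:
import java.time.{Instant, ZoneOffset}
import java.time.format.DateTimeFormatter

// Pin the formatter to UTC so the trailing 'Z' is always accurate.
val isoUtc = DateTimeFormatter
  .ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")
  .withZone(ZoneOffset.UTC)
val ts = isoUtc.format(Instant.now().minusSeconds(60 * 60)) // one hour ago
Alternatively, the index mapping's date format could accept epoch millis as well, e.g. "format": "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'||epoch_millis", which would have let the original numeric timestamp index unchanged.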
I'm trying to write a simple akka-http and akka-streams based application that handles HTTP requests, always with one pre-built stream, because I plan to use long-running processing with back-pressure in my requestProcessor stream.
My application code:
import akka.actor.{ActorSystem, Props}
import akka.http.scaladsl._
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server._
import akka.stream.ActorFlowMaterializer
import akka.stream.actor.ActorPublisher
import akka.stream.scaladsl.{Sink, Source}
import scala.annotation.tailrec
import scala.concurrent.Future
object UserRegisterSource {
def props: Props = Props[UserRegisterSource]
final case class RegisterUser(username: String)
}
class UserRegisterSource extends ActorPublisher[UserRegisterSource.RegisterUser] {
import UserRegisterSource._
import akka.stream.actor.ActorPublisherMessage._
val MaxBufferSize = 100
var buf = Vector.empty[RegisterUser]
override def receive: Receive = {
case request: RegisterUser =>
if (buf.isEmpty && totalDemand > 0)
onNext(request)
else {
buf :+= request
deliverBuf()
}
case Request(_) =>
deliverBuf()
case Cancel =>
context.stop(self)
}
@tailrec final def deliverBuf(): Unit =
if (totalDemand > 0) {
if (totalDemand <= Int.MaxValue) {
val (use, keep) = buf.splitAt(totalDemand.toInt)
buf = keep
use foreach onNext
} else {
val (use, keep) = buf.splitAt(Int.MaxValue)
buf = keep
use foreach onNext
deliverBuf()
}
}
}
object Main extends App {
val host = "127.0.0.1"
val port = 8094
implicit val system = ActorSystem("my-testing-system")
implicit val fm = ActorFlowMaterializer()
implicit val executionContext = system.dispatcher
val serverSource: Source[Http.IncomingConnection, Future[Http.ServerBinding]] = Http(system).bind(interface = host, port = port)
val mySource = Source.actorPublisher[UserRegisterSource.RegisterUser](UserRegisterSource.props)
val requestProcessor = mySource
.mapAsync(1)(fakeSaveUserAndReturnCreatedUserId)
.to(Sink.head[Int])
.run()
val route: Route =
get {
path("test") {
parameter('test) { case t: String =>
requestProcessor ! UserRegisterSource.RegisterUser(t)
???
}
}
}
def fakeSaveUserAndReturnCreatedUserId(param: UserRegisterSource.RegisterUser): Future[Int] =
Future.successful {
1
}
serverSource.to(Sink.foreach {
connection =>
connection handleWith Route.handlerFlow(route)
}).run()
}
I found a solution for creating a Source that can dynamically accept new items to process, but I can't find any solution for how to obtain the result of the stream's execution in my route.
The direct answer to your question is to materialize a new Stream for each HttpRequest and use Sink.head to get the value you're looking for. Modifying your code:
val requestStream =
  mySource.mapAsync(1)(fakeSaveUserAndReturnCreatedUserId)
    .toMat(Sink.head[Int])(Keep.both) //Keep is in akka.stream.scaladsl; keep the publisher ActorRef and the Future[Int]
    //.run() - don't materialize here
val route: Route =
get {
path("test") {
parameter('test) { case t: String =>
//materialize a new Stream here
val (processor, userIdFut) = requestStream.run()
processor ! UserRegisterSource.RegisterUser(t)
//get the result of the Stream
userIdFut onSuccess { case userId : Int => ...}
}
}
}
However, I think your question is ill-posed. In your code example, the only thing you're using an akka Stream for is to create a new UserId. Futures readily solve this problem without the need for a materialized Stream (and all the accompanying overhead):
val route: Route =
get {
path("test") {
parameter('test) { case t: String =>
val user = RegisterUser(t)
fakeSaveUserAndReturnCreatedUserId(user) onSuccess { case userId : Int =>
...
}
}
}
}
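To fill in the ... above, here is a sketch (my addition) that completes the route with akka-http's onSuccess directive once the Future finishes:
val route: Route =
  get {
    path("test") {
      parameter('test) { t =>
        val user = UserRegisterSource.RegisterUser(t)
        // onSuccess extracts the Future's value and completes the request
        onSuccess(fakeSaveUserAndReturnCreatedUserId(user)) { userId =>
          complete(userId.toString)
        }
      }
    }
  }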
If you want to limit the number of concurrent calls to fakeSaveUserAndReturnCreatedUserId, then you can create an ExecutionContext with a defined ThreadPool size, as explained in the answer to this question, and use that ExecutionContext to create the Futures:
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

val ThreadCount = 10 //concurrent queries
val limitedExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(ThreadCount))
def fakeSaveUserAndReturnCreatedUserId(param: UserRegisterSource.RegisterUser): Future[Int] =
Future { 1 }(limitedExecutionContext)
I am working on an artificial life simulation with Scala and Akka and so far I've been super happy with both. However, I am having some issues with timing that I can't quite explain.
At the moment, each animal in my simulation is a pair of actors (animal + brain). Typically, these two actors take turns (animal sends sensor input to brain, waits for result, acts on it and starts over). Every now and then however, animals need to interact with each other to eat each other or reproduce.
The one thing that is odd to me is the timing. It turns out that sending a message from one animal to another is a LOT slower (about 100x) than sending from animal to brain. This puts my poor predators and sexually active animals at a disadvantage as opposed to the vegetarians and asexual creatures (disclaimer: I am vegetarian myself but I think there are better reasons for being a vegetarian than getting stuck for a bit while trying to hunt...).
I extracted a minimal code snippet that demonstrates the problem:
package edu.blindworld.test
import java.util.concurrent.TimeUnit
import akka.actor.{ActorRef, ActorSystem, Props, Actor}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration.Duration
import scala.util.Random
class Animal extends Actor {
val brain = context.actorOf(Props(classOf[Brain]))
var animals: Option[List[ActorRef]] = None
var brainCount = 0
var brainRequestStartTime = 0L
var brainNanos = 0L
var peerCount = 0
var peerRequestStartTime = 0L
var peerNanos = 0L
override def receive = {
case Go(all) =>
animals = Some(all)
performLoop()
case BrainResponse =>
brainNanos += (System.nanoTime() - brainRequestStartTime)
brainCount += 1
// Animal interactions are rare
if (Random.nextDouble() < 0.01) {
// Send a ping to a random other one (or ourselves). Defer our own loop
val randomOther = animals.get(Random.nextInt(animals.get.length))
peerRequestStartTime = System.nanoTime()
randomOther ! PeerRequest
} else {
performLoop()
}
case PeerResponse =>
peerNanos += (System.nanoTime() - peerRequestStartTime)
peerCount += 1
performLoop()
case PeerRequest =>
sender() ! PeerResponse
case Stop =>
sender() ! StopResult(brainCount, brainNanos, peerCount, peerNanos)
context.stop(brain)
context.stop(self)
}
def performLoop() = {
brain ! BrainRequest
brainRequestStartTime = System.nanoTime()
}
}
class Brain extends Actor {
override def receive = {
case BrainRequest =>
sender() ! BrainResponse
}
}
case class Go(animals: List[ActorRef])
case object Stop
case class StopResult(brainCount: Int, brainNanos: Long, peerCount: Int, peerNanos: Long)
case object BrainRequest
case object BrainResponse
case object PeerRequest
case object PeerResponse
object ActorTest extends App {
println("Sampling...")
val system = ActorSystem("Test")
val animals = (0 until 50).map(i => system.actorOf(Props(classOf[Animal]))).toList
animals.foreach(_ ! Go(animals))
Thread.sleep(5000)
implicit val timeout = Timeout(5, TimeUnit.SECONDS)
val futureStats = animals.map(_.ask(Stop).mapTo[StopResult])
val stats = futureStats.map(Await.result(_, Duration(5, TimeUnit.SECONDS)))
val brainCount = stats.foldLeft(0)(_ + _.brainCount)
val brainNanos = stats.foldLeft(0L)(_ + _.brainNanos)
val peerCount = stats.foldLeft(0)(_ + _.peerCount)
val peerNanos = stats.foldLeft(0L)(_ + _.peerNanos)
println("Average time for brain request: " + (brainNanos / brainCount) / 1000000.0 + "ms (sampled from " + brainCount + " requests)")
println("Average time for peer pings: " + (peerNanos / peerCount) / 1000000.0 + "ms (sampled from " + peerCount + " requests)")
system.shutdown()
}
This is what happens here:
I am creating 50 pairs of animal/brain actors
They are all launched and run for 5 seconds
Each animal does an infinite loop, taking turns with its brain
In 1% of all runs, an animal sends a ping to a random other animal and waits for its reply. Then, it continues its loop with its brain
Each request to the brain and to peer is measured, so that we can get an average
After 5 seconds, everything is stopped and the timings for brain-requests and pings to peers are compared
On my dual core i7 I am seeing these numbers:
Average time for brain request: 0.004708ms (sampled from 21073859 requests)
Average time for peer pings: 0.66866ms (sampled from 211167 requests)
So pings to peers are about 142x slower than requests to brains (0.66866 ms vs. 0.004708 ms). I've been trying lots of things to fix this (e.g. priority mailboxes and warming up the JIT), but haven't been able to figure out what's going on. Does anyone have an idea?
I think you should use the ask pattern to handle the message. In your code, a BrainRequest was sent to the brain actor, which then sent back a BrainResponse. The problem is here: the BrainResponse the animal receives is not necessarily that BrainRequest's response; it may be a previous BrainRequest's response, so the measured interval does not correspond to a single round trip.
The following code uses the ask pattern, and the perf results are almost the same.
package edu.blindworld.test
import java.util.concurrent.TimeUnit
import akka.actor.{ActorRef, ActorSystem, Props, Actor}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.util.Random
class Animal extends Actor {
val brain = context.actorOf(Props(classOf[Brain]))
var animals: Option[List[ActorRef]] = None
var brainCount = 0
var brainRequestStartTime = 0L
var brainNanos = 0L
var peerCount = 0
var peerRequestStartTime = 0L
var peerNanos = 0L
override def receive = {
case Go(all) =>
animals = Some(all)
performLoop()
case PeerRequest =>
sender() ! PeerResponse
case Stop =>
sender() ! StopResult(brainCount, brainNanos, peerCount, peerNanos)
context.stop(brain)
context.stop(self)
}
def performLoop(): Unit = {
brainRequestStartTime = System.nanoTime()
brain.ask(BrainRequest)(10.millis) onSuccess {
case _ =>
brainNanos += (System.nanoTime() - brainRequestStartTime)
brainCount += 1
// Animal interactions are rare
if (Random.nextDouble() < 0.01) {
// Send a ping to a random other one (or ourselves). Defer our own loop
val randomOther = animals.get(Random.nextInt(animals.get.length))
peerRequestStartTime = System.nanoTime()
randomOther.ask(PeerRequest)(10.millis) onSuccess {
case _ =>
peerNanos += (System.nanoTime() - peerRequestStartTime)
peerCount += 1
performLoop()
}
} else {
performLoop()
}
}
}
}
class Brain extends Actor {
override def receive = {
case BrainRequest =>
sender() ! BrainResponse
}
}
case class Go(animals: List[ActorRef])
case object Stop
case class StopResult(brainCount: Int, brainNanos: Long, peerCount: Int, peerNanos: Long)
case object BrainRequest
case object BrainResponse
case object PeerRequest
case object PeerResponse
object ActorTest extends App {
println("Sampling...")
val system = ActorSystem("Test")
val animals = (0 until 50).map(i => system.actorOf(Props(classOf[Animal]))).toList
animals.foreach(_ ! Go(animals))
Thread.sleep(5000)
implicit val timeout = Timeout(5, TimeUnit.SECONDS)
val futureStats = animals.map(_.ask(Stop).mapTo[StopResult])
val stats = futureStats.map(Await.result(_, Duration(5, TimeUnit.SECONDS)))
val brainCount = stats.foldLeft(0)(_ + _.brainCount)
val brainNanos = stats.foldLeft(0L)(_ + _.brainNanos)
val peerCount = stats.foldLeft(0)(_ + _.peerCount)
val peerNanos = stats.foldLeft(0L)(_ + _.peerNanos)
println("Average time for brain request: " + (brainNanos / brainCount) / 1000000.0 + "ms (sampled from " + brainCount + " requests)")
println("Average time for peer pings: " + (peerNanos / peerCount) / 1000000.0 + "ms (sampled from " + peerCount + " requests)")
system.shutdown()
}
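If the per-message ask overhead is a concern, an alternative sketch (my addition, not part of the original answer) is to correlate plain tell messages with an explicit id, so each measured duration belongs to exactly the request that produced it:
import akka.actor.{Actor, Props}
import scala.collection.mutable

case class TimedBrainRequest(id: Long)
case class TimedBrainResponse(id: Long)

class TimedBrain extends Actor {
  def receive = {
    case TimedBrainRequest(id) => sender() ! TimedBrainResponse(id)
  }
}

class TimedAnimal extends Actor {
  val brain = context.actorOf(Props(classOf[TimedBrain]))
  val startTimes = mutable.Map.empty[Long, Long]
  var nextId = 0L
  var brainNanos = 0L

  def requestBrain(): Unit = {
    nextId += 1
    startTimes(nextId) = System.nanoTime()
    brain ! TimedBrainRequest(nextId)
  }

  def receive = {
    case TimedBrainResponse(id) =>
      // Charge the elapsed time to the request that actually produced it.
      startTimes.remove(id).foreach(start => brainNanos += System.nanoTime() - start)
      requestBrain()
  }
}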
I have an actor (Worker) which basically asks 3 other actors (Filter1, Filter2 and Filter3) for a result. If any of them returns false, it's unnecessary to wait for the others, like an "and" operation over the results. When a false response is received, a cancel message is sent to the other actors to cancel the queued work and make execution more efficient.
The Filters aren't children of Worker; instead there is a common pool of actors used by all Worker actors. I use an Agent to maintain the collection of cancelled Works. Then, before a particular work is processed, I check in the cancel agent whether that work was cancelled, and if so skip its execution. Cancel has a higher priority than Work, so it is always processed first.
The code is something like this
Proxy, which creates the actor tree:
import scala.collection.mutable.HashSet
import scala.concurrent.ExecutionContext.Implicits.global
import com.typesafe.config.Config
import akka.actor.Actor
import akka.actor.ActorLogging
import akka.actor.ActorSystem
import akka.actor.PoisonPill
import akka.actor.Props
import akka.agent.Agent
import akka.routing.RoundRobinRouter
class Proxy extends Actor with ActorLogging {
val agent1 = Agent(new HashSet[Work])
val agent2 = Agent(new HashSet[Work])
val agent3 = Agent(new HashSet[Work])
val filter1 = context.actorOf(Props(Filter1(agent1)).withDispatcher("priorityMailBox-dispatcher")
.withRouter(RoundRobinRouter(24)), "filter1")
val filter2 = context.actorOf(Props(Filter2(agent2)).withDispatcher("priorityMailBox-dispatcher")
.withRouter(RoundRobinRouter(24)), "filter2")
val filter3 = context.actorOf(Props(Filter3(agent3)).withDispatcher("priorityMailBox-dispatcher")
.withRouter(RoundRobinRouter(24)), "filter3")
//val workerRouter = context.actorOf(Props[SerialWorker].withRouter(RoundRobinRouter(24)), name = "workerRouter")
val workerRouter = context.actorOf(Props(new Worker(filter1, filter2, filter3)).withRouter(RoundRobinRouter(24)), name = "workerRouter")
def receive = {
case w: Work =>
workerRouter forward w
}
}
Worker:
import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.concurrent.duration.DurationInt
import akka.actor.Actor
import akka.actor.ActorLogging
import akka.actor.Props
import akka.actor.actorRef2Scala
import akka.pattern.ask
import akka.pattern.pipe
import akka.util.Timeout
import akka.actor.ActorRef
import akka.routing.RoundRobinRouter
import akka.agent.Agent
import scala.collection.mutable.HashSet
class Worker(filter1: ActorRef, filter2: ActorRef, filter3: ActorRef) extends Actor with ActorLogging {
implicit val timeout = Timeout(30.seconds)
def receive = {
case w:Work =>
val start = System.currentTimeMillis();
val futureF3 = (filter3 ? w).mapTo[Response]
val futureF2 = (filter2 ? w).mapTo[Response]
val futureF1 = (filter1 ? w).mapTo[Response]
val aggResult = Future.find(List(futureF3, futureF2, futureF1)) { res => !res.reponse }
Await.result(aggResult, timeout.duration) match {
case None =>
Nqueen.fact(10500000L)
log.info(s"[${w.message}] Procesado mensaje TRUE en ${System.currentTimeMillis() - start} ms");
sender ! WorkResponse(w, true)
case _ =>
filter1 ! Cancel(w)
filter2 ! Cancel(w)
filter3 ! Cancel(w)
log.info(s"[${w.message}] Procesado mensaje FALSE en ${System.currentTimeMillis() - start} ms");
sender ! WorkResponse(w, false)
}
}
}
and Filters:
import scala.collection.mutable.HashSet
import scala.util.Random
import akka.actor.Actor
import akka.actor.ActorLogging
import akka.actor.actorRef2Scala
import akka.agent.Agent
trait CancellableFilter { this: Actor with ActorLogging =>
//val canceledJobs = new HashSet[Int]
val agent: Agent[HashSet[Work]]
def cancelReceive: Receive = {
case Cancel(w) =>
agent.send(_ += w)
//log.info(s"[$t] El trabajo se cancelara (si llega...)")
}
def cancelled(w: Work): Boolean =
if (agent.get.contains(w)) {
agent.send(_ -= w)
true
} else {
false
}
}
abstract class Filter extends Actor with ActorLogging { this: CancellableFilter =>
val random = new Random(System.currentTimeMillis())
def response: Boolean
val timeToWait: Int
val timeToExecutor: Long
def receive = cancelReceive orElse {
case w:Work if !cancelled(w) =>
//log.info(s"[$t] Llego trabajo")
Thread.sleep(timeToWait)
Nqueen.fact(timeToExecutor)
val r = Response(response)
//log.info(s"[$t] Respondio ${r.reponse}")
sender ! r
}
}
object Filter1 {
def apply(agente: Agent[HashSet[Work]]) = new Filter with CancellableFilter {
val timeToWait = 74
val timeToExecutor = 42000000L
val agent = agente
def response = true //random.nextBoolean
}
}
object Filter2 {
def apply(agente: Agent[HashSet[Work]]) = new Filter with CancellableFilter {
val timeToWait = 47
val timeToExecutor = 21000000L
val agent = agente
def response = true //random.nextBoolean
}
}
object Filter3 {
def apply(agente: Agent[HashSet[Work]]) = new Filter with CancellableFilter {
val timeToWait = 47
val timeToExecutor = 21000000L
val agent = agente
def response = true //random.nextBoolean
}
}
Basically, I think the Worker code is ugly and I want to make it better. Could you help me improve it?
Another point I want to improve is the cancel message. Since I don't know which of the filters have already finished, I need to send Cancel to all of them, so at least one cancel is redundant (since that filter's work is already completed).
It is minor, but why don't you store the filters as a sequence? filters.foreach(_ ! Cancel(w)) is nicer than
filter1 ! Cancel(w)
filter2 ! Cancel(w)
filter3 ! Cancel(w)
Same for other cases:
class Worker(filter1: ActorRef, filter2: ActorRef, filter3: ActorRef) extends Actor with ActorLogging {
private val filters = Seq(filter1, filter2, filter3)
implicit val timeout = Timeout(30.seconds)
def receive = {
case w:Work =>
val start = System.currentTimeMillis();
val futures = filters.map { f =>
(f ? w).mapTo[Response]
}
val aggResult = Future.find(futures) { res => !res.reponse }
Await.result(aggResult, timeout.duration) match {
case None =>
Nqueen.fact(10500000L)
log.info(s"[${w.message}] Procesado mensaje TRUE en ${System.currentTimeMillis() - start} ms");
sender ! WorkResponse(w, true)
case _ =>
filters.foreach(_ ! Cancel(w))
log.info(s"[${w.message}] Processed message FALSE in ${System.currentTimeMillis() - start} ms")
sender ! WorkResponse(w, false)
}
}
}
You may also consider writing the constructor as Worker(filters: ActorRef*) if you do not enforce exactly three filters. I think it is okay to send off one redundant cancel (the alternatives I see are overly complicated). I'm not sure, but if the filters are created very quickly, they may get Randoms initialized with the same seed value.
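One more thought (a sketch of my own, not from the original answer): the Await.result in receive blocks a dispatcher thread for up to 30 seconds per Work. The same logic can be written without blocking by mapping the aggregate Future to a WorkResponse and piping it back, capturing the sender before the future completes:
import akka.pattern.{ask, pipe}
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def receive = {
  case w: Work =>
    val replyTo = sender() // capture now; the actor moves on before completion
    val futures = filters.map(f => (f ? w).mapTo[Response])
    Future.find(futures)(res => !res.reponse).map {
      case None => WorkResponse(w, true)
      case Some(_) =>
        filters.foreach(_ ! Cancel(w)) // one redundant cancel is acceptable
        WorkResponse(w, false)
    }.pipeTo(replyTo)
}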
I want to create a Play 2 Enumeratee that takes in values and outputs them, chunked together, every x seconds/milliseconds. That way, in a multi-user websocket environment with lots of user input, one could limit the number of received frames per second.
I know that it's possible to group a set number of items together like this:
val chunker = Enumeratee.grouped(
Traversable.take[Array[Double]](5000) &>> Iteratee.consume()
)
Is there a built-in way to do this based on time rather than based on the number of items?
I was thinking about doing this somehow with a scheduled Akka job, but at first sight this seems inefficient, and I'm not sure whether concurrency issues would arise.
How about something like this? I hope it is helpful for you.
package controllers
import play.api._
import play.api.Play.current
import play.api.mvc._
import play.api.libs.iteratee._
import play.api.libs.concurrent.Akka
import play.api.libs.concurrent.Promise
object Application extends Controller {
def index = Action {
val queue = new scala.collection.mutable.Queue[String]
Akka.future {
while( true ){
Logger.info("hogehogehoge")
queue += System.currentTimeMillis.toString
Thread.sleep(100)
}
}
val timeStream = Enumerator.fromCallback { () =>
Promise.timeout(Some(queue), 200)
}
Ok.stream(timeStream.through(Enumeratee.map[scala.collection.mutable.Queue[String]]({ queue =>
var str = ""
while(queue.nonEmpty){
str += queue.dequeue + ", "
}
str
})))
}
}
And this document is also helpful for you.
http://www.playframework.com/documentation/2.0/Enumerators
UPDATE
This is for the Play 2.1 version.
package controllers
import play.api._
import play.api.Play.current
import play.api.mvc._
import play.api.libs.iteratee._
import play.api.libs.concurrent.Akka
import play.api.libs.concurrent.Promise
import scala.concurrent._
import ExecutionContext.Implicits.global
object Application extends Controller {
def index = Action {
val queue = new scala.collection.mutable.Queue[String]
Akka.future {
while( true ){
Logger.info("hogehogehoge")
queue += System.currentTimeMillis.toString
Thread.sleep(100)
}
}
val timeStream = Enumerator.repeatM{
Promise.timeout(queue, 200)
}
Ok.stream(timeStream.through(Enumeratee.map[scala.collection.mutable.Queue[String]]({ queue =>
var str = ""
while(queue.nonEmpty){
str += queue.dequeue + ", "
}
str
})))
}
}
Here I've quickly defined an iteratee that will take values from an input for a fixed length of time t, measured in milliseconds, and an enumeratee that lets you group and further process an input stream divided into segments built within such a length t. It relies on JodaTime to keep track of how much time has passed since the iteratee began.
import org.joda.time.{Instant, Interval}
import play.api.libs.iteratee.{Cont, Done, Enumeratee, Input, Iteratee}

def throttledTakeIteratee[E](timeInMillis: Long): Iteratee[E, List[E]] = {
var startTime = new Instant()
def step(state: List[E])(input: Input[E]): Iteratee[E, List[E]] = {
val timePassed = new Interval(startTime, new Instant()).toDurationMillis
input match {
case Input.EOF => { startTime = new Instant; Done(state, Input.EOF) }
case Input.Empty => Cont[E, List[E]](i => step(state)(i))
case Input.El(e) =>
if (timePassed >= timeInMillis) { startTime = new Instant; Done(e::state, Input.Empty) }
else Cont[E, List[E]](i => step(e::state)(i))
}
}
Cont(step(List[E]()))
}
def throttledTake[T](timeInMillis: Long) = Enumeratee.grouped(throttledTakeIteratee[T](timeInMillis))
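A hypothetical usage sketch (producerEnumerator is my own placeholder, assumed to be an Enumerator[Double] from play.api.libs.iteratee): run a stream through the enumeratee so downstream consumers receive one grouped chunk per ~500 ms:
// Batch elements into lists spanning roughly 500 ms each.
val chunked: Enumerator[List[Double]] =
  producerEnumerator &> throttledTake[Double](500)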