How to work with Source.Queue in Akka-Stream - scala

I am toying around trying to use a Source.queue from an actor, and I am stuck on pattern matching the result of an offer operation:
class MarcReaderActor(file: File, sourceQueue: SourceQueueWithComplete[Record]) extends Actor {
  val inStream = file.newInputStream
  val reader = new MarcStreamReader(inStream)

  override def receive: Receive = {
    case Process =>
      if (reader.hasNext()) {
        val record = reader.next()
        pipe(sourceQueue.offer(record)) to self
      }
    case f: Future[QueueOfferResult] => // ???
  }
}
I don't know how to check whether the result was Enqueued, Dropped, or a Failure.
If I write f: Future[QueueOfferResult.Enqueued], the compiler complains.

Since you use pipeTo, you do not need to match on futures: the contents of the future will be sent to the actor when the future completes, not the future itself. Do this:
override def receive: Receive = {
case Process =>
if (reader.hasNext()) {
val record = reader.next()
pipe(sourceQueue.offer(record)) to self
}
case r: QueueOfferResult =>
r match {
case QueueOfferResult.Enqueued => // element has been consumed
case QueueOfferResult.Dropped => // element has been ignored because of backpressure
case QueueOfferResult.QueueClosed => // the queue upstream has terminated
case QueueOfferResult.Failure(e) => // the queue upstream has failed with an exception
}
case Status.Failure(e) => // future has failed, e.g. because of invalid usage of `offer()`
}
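For completeness, here is a minimal sketch of how such a queue could be materialized and handed to the actor. The buffer size, overflow strategy, and downstream sink are assumptions for illustration, not part of the question:
import akka.actor.{ActorSystem, Props}
import akka.stream.{ActorMaterializer, OverflowStrategy}
import akka.stream.scaladsl.{Keep, Sink, Source, SourceQueueWithComplete}

implicit val system = ActorSystem("marc")
implicit val materializer = ActorMaterializer()

// materialize a queue of Records; offer() on the materialized value
// returns a Future[QueueOfferResult] that can be piped back to the actor
val queue: SourceQueueWithComplete[Record] =
  Source.queue[Record](bufferSize = 100, OverflowStrategy.dropNew)
    .toMat(Sink.foreach(record => println(s"processing $record")))(Keep.left)
    .run()

val readerActor = system.actorOf(Props(new MarcReaderActor(file, queue)))
readerActor ! Process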

Related

akka- how to ensure all responses of dynamic number of actors are returned to parent actor?

I need to create a variable number of actors each time my program starts, and then I must ensure that all responses are returned after a period of time. This
link gives a good idea for a fixed number of actors, but what about a dynamic number?
This is my code that creates the actors and passes messages to them:
ruleList = ...
val childActorList: Iterable[ActorRef] = ruleList.map(ruleItem =>
context.actorOf(DbActor.props(ruleItem.parameter1, ruleItem.parameter2)))
implicit val timeout = Timeout(10.second)
childActorList.foreach(childActor =>
childActor ? (tempTableName, lastDate)
)
Updated-1
Following @Raman Mishra's guidance, I updated my code as below. This is the code in the parent actor:
override val supervisorStrategy: SupervisorStrategy = {
OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 10 seconds) {
case exp: SQLException => //Resume;
throw exp
case exp:AskTimeoutException => throw exp
case other: Exception => throw other
}
}
override def receive: Receive = {
  case Start(tempTableName, lastDate) =>
    implicit val timeout = Timeout(10.second)
    ruleList.foreach { ruleItem =>
      val childActor = context.actorOf(DbActor.props(ruleItem._1, query = ruleItem._2))
      ask(childActor, (tempTableName, lastDate)).mapTo[Seq[Int]] onComplete {
        case util.Success(res) => println("done" + res + ruleItem._2)
        case util.Failure(exp: AskTimeoutException) => println("Failed query:" + ruleItem._2); throw exp
        case other => println(other)
      }
    }
}
And in the child actor:
case (brokerTableName, lastDate) => {
  Logger("Started query by actor " + self.path.name)
  val repo = new Db()
  val res = repo.getAggResult(query = (brokerTableName, lastDate))
  val resWrapper = res match {
    case elem: Future[Any] => elem
    case elem: Any => Future(elem)
  }
  resWrapper pipeTo self
}
case res: List[Map[Any, Any]] => {
  // here the final result is sent to the parent actor
  repo.insertAggresults(res, aggTableName) pipeTo context.parent
}
Now, whenever I run the main app, the parent actor starts first, creates the child actors, and sends messages to them using the ask method. The child actors do their work, but the problem is that their responses never make it back to the parent actor, and on every run of the app an AskTimeoutException occurs. I am not sure whether my use of the onComplete method is correct. Any help will be appreciated.
Updated-2
I found out the problem was using context.parent instead of sender(). Also, when I pipe the first part of my result to the sender, and the sender then asks for the second part, the problem is resolved. But I don't understand what is happening here: why can't I pipe to self and return the final result to the parent?
This is the latest code.
In the parent actor:
override def receive: Receive = {
  case Start(tempTableName, lastDate) =>
    println("started: called by remote actor")
    implicit val timeout = Timeout(5.second)
    ruleList.foreach { ruleItem =>
      val childActor = context.actorOf(DbActor.props(ruleItem._1, query = ruleItem._2))
      ask(childActor, Broker(tempTableName, lastDate)) onComplete {
        // (childActor ? Broker(tempTableName, lastDate)).mapTo[Seq[Int]] onComplete {
        case util.Success(res: List[Map[Any, Any]]) => (childActor ? res) onComplete {
          case util.Success(res: Seq[Any]) => println("Successful - number of documents: " + res.length + " " + ruleItem._2)
          case util.Failure(exp: AskTimeoutException) => println("Failed for writing - query: " + ruleItem._2); throw exp
        }
        case util.Failure(exp: AskTimeoutException) => println("Failed for reading - query: " + ruleItem._2); throw exp
        case other => println(other)
      }
    }
}
In the child actor:
case (brokerTableName, lastDate) => {
  Logger("Started query by actor " + self.path.name)
  val repo = new Db()
  val res = repo.getAggResult(query = (brokerTableName, lastDate))
  val resWrapper = res match {
    case elem: Future[Any] => elem
    case elem: Any => Future(elem)
  }
  resWrapper pipeTo sender()
}
case res: List[Map[Any, Any]] => {
  // here the final result is sent to the parent actor
  repo.insertAggresults(res, aggTableName) pipeTo sender()
}
The reason that replying to sender() works where replying to context.parent does not is that ask creates a temporary actor to handle the response. You need to reply to this temporary actor: the sender, which is different from the parent.
Also, it's not clear whether the getAggResult method is blocking. If it is, this will not help (see here).
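As for the original goal of making sure the responses of a dynamic number of children all come back, one common approach is to collect the ask futures and sequence them, so the parent reacts once to the complete set. A rough sketch only, assuming (as in the question) that ruleList holds tuples, that each child answers with a Seq[Int], and that this runs inside the parent actor's Start handler:
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._
import context.dispatcher

implicit val timeout: Timeout = Timeout(10.seconds)

val answers: Seq[Future[Seq[Int]]] = ruleList.map { ruleItem =>
  val childActor = context.actorOf(DbActor.props(ruleItem._1, query = ruleItem._2))
  ask(childActor, (tempTableName, lastDate)).mapTo[Seq[Int]]
}.toSeq

// completes when every child has answered, or fails with the first
// AskTimeoutException (or other error) produced by any of them
Future.sequence(answers).onComplete {
  case scala.util.Success(allResults) => println("all " + allResults.size + " children answered")
  case scala.util.Failure(exp)        => println("at least one child failed: " + exp)
}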

Update state in actor from within a future

Consider the following code sample:
class MyActor(httpClient: HttpClient) extends Actor {
  var canSendMore = true

  override def receive: Receive = {
    case PayloadA(name: String) => send(urlA)
    case PayloadB(name: String) => send(urlB)
  }

  def send(url: String): Unit = {
    if (canSendMore)
      httpClient.post(url).map(response => canSendMore = response.canSendMore)
    else {
      Thread.sleep(5000) // this will be done in a more elegant way, it's just for the example
      httpClient.post(url).map(response => canSendMore = response.canSendMore)
    }
  }
}
Handling each message results in an async HTTP request (post returns a Future[Response]).
My problem is that I want to update the canSendMore flag safely; at the moment there is a race condition.
Also, I must somehow update this state on the same thread, or at least before any other message is processed by this actor.
Is this possible?
You can use a become + stash combination to keep stashing messages while the HTTP request future is in flight.
case object FreeToProcess
case class PayloadA(name: String)

class MyActor(httpClient: HttpClient) extends Actor with Stash {
  import context.dispatcher // ExecutionContext for onComplete

  def receive: Receive = canProcessReceive

  def canProcessReceive: Receive = {
    case PayloadA(name: String) =>
      // become an actor which just stashes messages
      context.become(canNotProcessReceive, discardOld = false)
      httpClient.post(urlA).onComplete {
        case Success(x) =>
          // Use your result
          self ! FreeToProcess
        case Failure(e) =>
          // Use your failure
          self ! FreeToProcess
      }
  }

  def canNotProcessReceive: Receive = {
    case FreeToProcess =>
      // replay stash to mailbox
      unstashAll()
      // start processing messages
      context.unbecome()
    case msg =>
      stash()
  }
}
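An alternative, if you do not need to hold back other messages while the request is running, is to never touch actor state from the future's callback at all: pipe the response back to self and mutate canSendMore inside receive, where message processing is single-threaded. A rough sketch only, reusing the question's HttpClient, Response, urlA and PayloadA; note that here a PayloadA arriving while canSendMore is false is simply ignored, which the original code did not do:
import akka.actor.Actor
import akka.pattern.pipe

class MyPipingActor(httpClient: HttpClient) extends Actor {
  import context.dispatcher

  var canSendMore = true

  override def receive: Receive = {
    case PayloadA(name) if canSendMore =>
      // the future completes on some other thread, but its Response is
      // delivered back to this actor as an ordinary message
      httpClient.post(urlA).pipeTo(self)
    case response: Response =>
      // state is only ever mutated here, inside receive
      canSendMore = response.canSendMore
  }
}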

Resolving Akka futures from ask in the event of a failure

I am calling an Actor using the ask pattern within a Spray application, and returning the result as the HTTP response. I map failures from the actor to a custom error code.
val authActor = context.actorOf(Props[AuthenticationActor])
callService((authActor ? TokenAuthenticationRequest(token)).mapTo[LoggedInUser]) { user =>
complete(StatusCodes.OK, user)
}
def callService[T](f: => Future[T])(cb: T => RequestContext => Unit) = {
onComplete(f) {
case Success(value: T) => cb(value)
case Failure(ex: ServiceException) => complete(ex.statusCode, ex.errorMessage)
case e => complete(StatusCodes.InternalServerError, "Unable to complete the request. Please try again later.")
//In reality this returns a custom error object.
}
}
This works correctly when the authActor sends a failure, but if the authActor throws an exception, nothing happens until the ask timeout completes. For example:
override def receive: Receive = {
case _ => throw new ServiceException(ErrorCodes.AuthenticationFailed, "No valid session was found for that token")
}
I know that the Akka docs say that
To complete the future with an exception you need to send a Failure message to the sender. This is not done automatically when an actor throws an exception while processing a message.
But given that I use asks for a lot of the interface between the Spray routing actors and the service actors, I would rather not wrap the receive part of every child actor with a try/catch. Is there a better way to achieve automatic handling of exceptions in child actors, and immediately resolve the future in the event of an exception?
Edit: this is my current solution. However, it's quite messy to do this for every child actor.
override def receive: Receive = {
case default =>
try {
default match {
case _ => throw new ServiceException("")//Actual code would go here
}
}
catch {
case se: ServiceException =>
logger.error("Service error raised:", se)
sender ! Failure(se)
case ex: Exception =>
sender ! Failure(ex)
throw ex
}
}
That way if it's an expected error (i.e. ServiceException), it's handled by creating a failure. If it's unexpected, it returns a failure immediately so the future is resolved, but then throws the exception so it can still be handled by the SupervisorStrategy.
If you want a way to provide automatic sending of a response back to the sender in case of an unexpected exception, then something like this could work for you:
trait FailurePropatingActor extends Actor{
override def preRestart(reason:Throwable, message:Option[Any]){
super.preRestart(reason, message)
sender() ! Status.Failure(reason)
}
}
We override preRestart and propagate the failure back to the sender as a Status.Failure which will cause an upstream Future to be failed. Also, it's important to call super.preRestart here as that's where child stopping happens. Using this in an actor looks something like this:
case class GetElement(list:List[Int], index:Int)
class MySimpleActor extends FailurePropatingActor {
def receive = {
case GetElement(list, i) =>
val result = list(i)
sender() ! result
}
}
If I was to call an instance of this actor like so:
import akka.pattern.ask
import concurrent.duration._
val system = ActorSystem("test")
import system.dispatcher
implicit val timeout = Timeout(2 seconds)
val ref = system.actorOf(Props[MySimpleActor])
val fut = ref ? GetElement(List(1,2,3), 6)
fut onComplete{
case util.Success(result) =>
println(s"success: $result")
case util.Failure(ex) =>
println(s"FAIL: ${ex.getMessage}")
ex.printStackTrace()
}
Then it would properly hit my Failure block. Now, the code in that base trait works well when Futures are not involved in the actor extending it, like the simple actor here. But if you use Futures, you need to be careful: exceptions that happen inside the Future do not cause restarts in the actor, and, in preRestart, the call to sender() will not return the correct ref because the actor has already moved on to the next message. An actor like this shows the issue:
class MyBadFutureUsingActor extends FailurePropatingActor{
import context.dispatcher
def receive = {
case GetElement(list, i) =>
val orig = sender()
val fut = Future{
val result = list(i)
orig ! result
}
}
}
If we were to use this actor in the previous test code, we would always get a timeout in the failure situation. To mitigate that, you need to pipe the results of futures back to the sender like so:
class MyGoodFutureUsingActor extends FailurePropatingActor{
import context.dispatcher
import akka.pattern.pipe
def receive = {
case GetElement(list, i) =>
val fut = Future{
list(i)
}
fut pipeTo sender()
}
}
In this particular case, the actor itself is not restarted because it did not encounter an uncaught exception. Now, if your actor needed to do some additional processing after the future, you can pipe back to self and explicitly fail when you get a Status.Failure:
class MyGoodFutureUsingActor extends FailurePropatingActor{
import context.dispatcher
import akka.pattern.pipe
def receive = {
case GetElement(list, i) =>
val fut = Future{
list(i)
}
fut.to(self, sender())
case i: Int =>
sender() ! i * 2
case Status.Failure(ex) =>
throw ex
}
}
If that behavior becomes common, you can make it available to whatever actors need it like so:
trait StatusFailureHandling{ me:Actor =>
def failureHandling:Receive = {
case Status.Failure(ex) =>
throw ex
}
}
class MyGoodFutureUsingActor extends FailurePropatingActor with StatusFailureHandling{
import context.dispatcher
import akka.pattern.pipe
def receive = myReceive orElse failureHandling
def myReceive:Receive = {
case GetElement(list, i) =>
val fut = Future{
list(i)
}
fut.to(self, sender())
case i: Int =>
sender() ! i * 2
}
}

How to log internal actor state in receive?

For Actors that can be expressed fairly concisely, it's frustrating to have to add in blocks ({...}) just so I can add a log command. I would like to log my internal state before the message is handled and then after the message is handled - is this possible?
def receive = {
// I want to log here instead and remove the non-critical logs from below
// e.g. log.debug(s"Received $message")
// log.debug(s"Internal state is $subscriptions")
case RegisterChannel(name, owner) => {
getChannel(name) match {
case Some(deadChannel: DeadChannel) => {
subscriptions += (RealChannel(name, Server(owner)) -> subscriptions(deadChannel))
subscriptions -= deadChannel
context.watch(owner)
log.debug(s"Replaced $deadChannel with a real channel $channels")
}
case Some(realChannel: RealChannel) =>
log.error(s"Refusing to use RegisterChannel($name, $owner) due to $realChannel")
case None => {
subscriptions += (RealChannel(name, Server(owner)) -> Vector())
context.watch(owner)
log.debug(s"Registered a new channel $channels")
}
}
}
case Terminated(dead) => {
getRole(dead) match {
case Some(client: Client) => // Remove subscriptions
log.debug(s"Received Client Terminated($dead) $client")
subscriptionsFor(client).foreach { subscription =>
subscriptions += (subscription._1 -> subscription._2.filterNot(c => c == client))
}
case Some(server: Server) => { // Remove any channels
log.debug(s"Received Server Terminated($dead) $server")
channelsBy(server).foreach { realChannel =>
subscriptions += (DeadChannel(realChannel.name) -> subscriptions(realChannel))
subscriptions -= realChannel
}
}
case None =>
log.debug(s"Received Terminated($dead) but no channel is registered")
}
}
// I want to log here as well, to see what effect the message had
// e.g. log.debug(s"Finished $message")
// log.debug(s"Internal state is now $subscriptions")
}
I'm not sure if this is an Akka-specific or a Scala pattern-matching question, so I tagged both.
EDIT: After trying @aepurniet's answer, I have no idea how to solve the compiler error. receive needs to return a PartialFunction[Any, Unit], but when the match is not the only expression in the msg => {...} block, the result seems to be typed as Any => AnyRef.
// Compiler error because msg=>{...} is not proper type
def receive = msg => {
log.info(s"some log")
msg match {
case RegisterChannel(name, owner) => {
getChannel(name) match {
<snip>
receive = { case ... } is actually shorthand for receive = msg => msg match { case ... }. You can rewrite that as receive = msg => { log.info(...); msg match { case ... } }. You may have to additionally specify types.
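For what it's worth, one way to keep the result a PartialFunction (which is what the compiler error in the edit is about) is to wrap the logging and the inner match in a single total case msg => clause. A sketch based on the question's code, with the existing handling abbreviated:
def receive: Receive = {
  case msg =>
    log.debug(s"Received $msg")
    log.debug(s"Internal state is $subscriptions")
    msg match {
      case RegisterChannel(name, owner) =>
        // ... existing RegisterChannel handling ...
      case Terminated(dead) =>
        // ... existing Terminated handling ...
      case other =>
        unhandled(other)
    }
    log.debug(s"Finished $msg")
    log.debug(s"Internal state is now $subscriptions")
}
Note that this makes receive defined for every message, so the explicit unhandled(other) call is there to preserve the default behaviour for messages the actor does not actually handle.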
There is akka.event.LoggingReceive that you can use like this:
def receive = LoggingReceive {
case ...
}
Then you set akka.actor.debug.receive to on, and this will log (at DEBUG level) all messages that were received and whether they were handled or not.
See Tracing Actor Invocations section in the official documentation of Akka.
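The corresponding configuration would look something like this (a sketch of the relevant application.conf settings; the receive flag only has an effect when the log level actually allows DEBUG output):
akka {
  loglevel = "DEBUG"
  actor {
    debug {
      # log all messages processed by handlers wrapped in LoggingReceive
      receive = on
    }
  }
}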
For the additional state logging, you can do something similar to LoggingReceive
def withStateLogging(handler: Receive): Receive = {
case msg if handler.isDefinedAt(msg) ⇒
log.debug("before actual receive")
log.debug(s"received: $msg")
log.debug(s"state: $state")
handler(msg)
log.debug("after actual receive")
log.debug(s"finished: $msg")
log.debug(s"state: $state")
}
def receive = withStateLogging {
case ...
}
The compiler complains because Actor#receive's return type is Receive, which is actually defined as
type Receive = PartialFunction[Any, Unit]
Here is a nice example of how your problem can be solved with stackable traits: how to add logging function in sending and receiving action in akka
It is a little tricky and overrides the default behavior of PartialFunction.
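For reference, a rough sketch of what a stackable-trait approach can look like. This is an illustration of the general pattern rather than the linked answer's exact code; ReceiveStateLogging, stateForLogging and SubscriptionActor are made-up names, with SubscriptionActor standing in for the question's actor and its subscriptions map:
import akka.actor.{Actor, ActorLogging}

trait ReceiveStateLogging extends Actor with ActorLogging {
  // whatever internal state you want to see around each message
  def stateForLogging: Any

  abstract override def receive: Receive = {
    case msg if super.receive.isDefinedAt(msg) =>
      log.debug(s"Received $msg, state: $stateForLogging")
      super.receive(msg)
      log.debug(s"Finished $msg, state: $stateForLogging")
  }
}

// mix the trait in *after* the class that defines the concrete receive,
// so that super.receive resolves to that implementation
class LoggedSubscriptionActor extends SubscriptionActor with ReceiveStateLogging {
  def stateForLogging: Any = subscriptions
}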

Akka persistentChannel does not delete message from Journal upon confirm

I am writing a piece of code that uses PersistentChannel to send a message to an actor that does some IO. Upon completion it confirms the ConfirmablePersistent message.
The documentation says that upon confirmation the message should be deleted from the PersistentChannel. But in my case the message stays in the journal without getting deleted.
My requirement is that as soon as I get a successful result for the IO, or the deadline has been exceeded, the persisted message should be deleted from the journal.
class IOWorker(config: Config, ref: ActorRef) extends Actor with ActorLogging {
  import IOWorker._

  val channel = context.actorOf(PersistentChannel.props(
    PersistentChannelSettings(redeliverInterval = 1.minute,
      pendingConfirmationsMax = 1, pendingConfirmationsMin = 0)))

  val doIOActor = context.actorOf(DOIOActor(config))

  def receive = {
    case payload @ (msg, deadline) =>
      channel ! Deliver(Persistent(payload), doIOActor.path)
  }
}

object DOIOActor {
  def apply(config: Config) = Props(classOf[DOIOActor], config)
}

class DOIOActor(config: Config) extends Actor with ActorLogging {
  def receive = {
    case p @ ConfirmablePersistent(payload, sequenceNr, redeliveries) =>
      payload match {
        case (msg, deadline: Deadline) =>
          deadline.hasTimeLeft match {
            case false => p.confirm()
            case true =>
              sender ! SAVED(msg)
              Try { DOIO } match {
                case Success(v) =>
                  sender ! SUCCESS(msg)
                  p.confirm()
                case Failure(doioException) =>
                  log.warning(s"Could not complete DOIO. $doioException")
                  throw doioException
              }
          }
      }
  }

  def DOIO(ftpClient: FTPClient, destination: String, file: AISData) = {
    SOMEIOTASK match {
      case true => log.info(s"Storing file to $destination.")
      case false =>
        throw new Exception(s"Could not DOIO to destination $destination")
    }
  }
}
Deletions are performed asynchronously by most journal implementations, as discussed on the mailing list.