ActorPublisher as Comet Event Source in Play Application

I'm trying to write an Actor which connects to an Amazon Kinesis stream and then relays any messages received via Comet to a web UI. I'm using Source.actorPublisher for this, together with the json method of Comet in Play described here. I got the events working just fine using Source.tick(), but when I tried using an ActorPublisher, the Actor never seems to be sent any Request messages as expected. How are requests for data usually sent down an Akka flow? I'm using v2.5 of the Play Framework.
My controller code:
def subDeviceSeen(id: Id): Action[AnyContent] = Action {
  val deviceSeenSource: Source[DeviceSeenMessage, ActorRef] = Source.actorPublisher(DeviceSeenEventActor.props)
  Ok.chunked(deviceSeenSource
    .filter(m => m.id == id)
    .map(Json.toJson(_))
    .via(Comet.json("parent.deviceSeen"))).as(ContentTypes.JSON)
}
Am I doing anything obviously wrong in the above? Here is my Actor code:
object DeviceSeenEventActor {
  def props: Props = Props[DeviceSeenEventActor]
}
class DeviceSeenEventActor extends ActorPublisher[DeviceSeenMessage] {
  implicit val mat = ActorMaterializer()(context)
  val log = Logging(context.system, this)

  def receive: Receive = {
    case Request =>
      log.debug("Received request message")
      initKinesis()
      context.become(run)
    case Cancel => context.stop(self)
  }

  def run: Receive = {
    case vsm: DeviceSeenMessage =>
      onNext(vsm)
      log.debug("Received request message")
      onCompleteThenStop() // we are currently only interested in one message
    case _: Any => log.warning("Unknown message received by event Actor")
  }

  private def initKinesis() = {
    // init Kinesis; a worker is created and given a reference to this Actor.
    // The reference is used to send messages to the Actor.
  }
}
The 'Received request message' log line is never displayed. Am I missing some implicit? There are no warnings or anything else obvious displayed in the play console.

The issue was that I was pattern matching on case Request => ... instead of case Request() => .... Since I didn't have a default case in my receive() method, the message was simply dropped by the Actor.
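For reference, here is a minimal sketch of the corrected receive. Note that in akka.stream.actor, ActorPublisherMessage.Request is a case class carrying the requested demand, so it can equally be matched as case Request(n):

import akka.stream.actor.ActorPublisherMessage.{Cancel, Request}

def receive: Receive = {
  case Request(n) => // n = number of elements requested by downstream
    log.debug(s"Received request for $n elements")
    initKinesis()
    context.become(run)
  case Cancel =>
    context.stop(self)
}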

Related

Persistent Redis Pub/Sub Actors

I have a backend app written in Scala Play. I have a realtime feature implemented with Akka Actors, with data stored in a Redis server, and I want each backend instance (deployed on CentOS servers) to be both a Publisher and a Subscriber to the Redis service. Why? Because a 3rd-party app sends requests to my backend to update the data in Redis, and I want the actors on all instances to push data to the clients (frontend), regardless of which backend instance the request is routed to (a load balancer is used there).
So, when instance1 publishes to Redis, I want all subscribers (instance2, instance3, and even instance1, because, as I said, each instance must be pub/sub) to push data to their clients.
I created an object with a Publisher and a Subscriber client and expected them to behave as singletons. But, for an unknown reason, overnight my instances become unsubscribed from the Redis server without any message. I conclude this because the next day my Redis service has 0 subscribers. I don't know whether I have a bad implementation or Redis just kills the connections after some time.
RedisPubSubServer.scala (in fact, these are just two Akka Actors which take a RedisClient as a parameter):
class Subscriber(client: RedisClient) extends Actor {
  var callback: PubSubMessage => Any = { m => }
  implicit val timeout = Timeout(2 seconds)

  override def receive: Receive = {
    case Subscribe(channel) => client.subscribe(channel)(callback)
    case Register(cb) => callback = cb; self ? true
    case Unsubscribe(channel) => client.unsubscribe(channel); self ? true
  }
}

class Publisher(client: RedisClient) extends Actor {
  implicit val timeout = Timeout(2 seconds)

  override def receive: Receive = {
    case Publish(channel, msg) => client.publish(channel, msg); self ? true
  }
}
RedisPubSubClient.scala (here I create the Publisher and Subscriber as singletons):
object Pub {
  println("starting publishing service...")
  val config = ConfigFactory.load.getObject("redis").toConfig
  val client = new RedisClient(config.getString("master"), config.getInt("port"))
  val system = ActorSystem("RedisPublisher")
  val publisher = system.actorOf(Props(new Publisher(client)))

  def publish(channel: String, message: String) =
    publisher ! Publish(channel, message)
}

object Sub {
  val client = new RedisClient(config.getString("master"), config.getInt("port"))
  val system = ActorSystem("RedisSubscriber")
  val subscriber = system.actorOf(Props(new Subscriber(client)))

  println("SUB Registering...")
  subscriber ! Register(callback)

  def sub(channel: String) = subscriber ! Subscribe(channel)
  def unsub(channel: String) = subscriber ! Unsubscribe(channel)

  def callback(msg: PubSubMessage) = {
    msg match {
      case S(channel, no) => println(s"subscribed to $channel and count $no")
      case U(channel, no) => println(s"unsubscribed from $channel and count $no")
      case M(channel, msg) => msg match {
        case "exit" => client.unsubscribe()
        case jsonString => // do the job
      }
      case E(e) => println(s"ERR = ${e.getMessage}")
    }
  }
}
and the RedisService
object RedisService {
  val system = ActorSystem("RedisServiceSubscriber")
  val subscriber = system.actorOf(Props(new Subscriber(client)))

  subscriber ! Register(callback)
  subscriber ! Subscribe("channelName")
  // So, here I'm expecting the subscriber to share the life-cycle of the backend instance
}
From an API endpoint, I push data by calling Pub's publish method:
def reloadData(request: AnyType) {
  Pub.publish("channelName", requestAsString)
}
Is it possible for the Publisher/Subscriber Actors to be killed after a while, and could that cause errors for the Redis clients' pub/sub? For the Publisher, I'm considering creating the client each time the API call is made, but for the Subscriber I see no alternative to a singleton object that listens to Redis for the entire life of the backend.
Thanks.
edit: used library:
"net.debasishg" %% "redisclient" % "3.41"
After some research, I found another Scala Redis library which seems to do exactly what I need in an easier manner:
"com.github.etaty" %% "rediscala" % "1.9.0"

Sending to Akka's Dead Letter Channel from many other actors in the fallback case

New to both Scala + Akka here. I have a small network of actors that all send different messages to each other. Many of them need access to my DeadLetterChannel actor in case they receive a message that they are not sure how to handle:
class DeadLetterChannel(val queue: mutable.Queue[Any]) extends Actor with Logging {
  override def receive: Receive = {
    case any: Any =>
      log.info(s"${this.getClass} just received a message of type ${any.getClass} from ${sender().getClass}.")
      queue.enqueue(any)
    case _ =>
      log.warn(s"${this.getClass} just received a non-Any message.")
  }
}
Then from inside many other actors:
class DeviceManager(val dlc: DeadLetterChannel) extends ActorRef {
  override def receive: Receive = {
    case Heartbeat =>
      // Handle Heartbeat here...
    case Connect =>
      // Handle Connect here...
    case Disconnect =>
      // Handle Disconnect here...
    case _ =>
      dlc ! ???
  }
}
I have two problems here:
I'm getting a compiler error on the sending of the message (specifically the ! overload: "Cannot resolve symbol !"); and
I have no idea what to send to dlc in the _ case; any ideas? Obviously I'm just trying to send on the message that DeviceManager received that is neither a Heartbeat, a Connect, nor a Disconnect.
You may have over-simplified the code for your example, but you can't just send a message to an Actor class instance. You need to create a named actor using actorOf on an ActorSystem, and then use actorSelection on that system to get a handle on the actor.
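A minimal sketch of that wiring, assuming the DeadLetterChannel class above (the queue argument is just whatever its constructor expects):

import akka.actor.{ActorSystem, Props}
import scala.collection.mutable

val system = ActorSystem("my-system")

// Create a named actor; actorOf returns an ActorRef you can send messages to.
val dlc = system.actorOf(Props(new DeadLetterChannel(mutable.Queue.empty[Any])), name = "dlc")
dlc ! "delivered via the ActorRef"

// Elsewhere, without the ActorRef in scope, look the actor up by path.
system.actorSelection("/user/dlc") ! "delivered via actorSelection"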
For the second part of your question, just put a match value in the case and send the result:
case msg =>
  dlc ! msg
Also, you might want to use a different name for your class, because a dead letter is a message that can't be delivered, not a message that can't be handled by the recipient.
Actor classes should extend Actor instead of ActorRef. Also, use ActorRef to reference an actor. DeviceManager, therefore, should be defined as follows:
class DeviceManager(dlc: ActorRef) extends Actor {
  // ...
}
The idea here is that you would use the ActorRef of the DeadLetterChannel actor when creating the device actors. For example:
val dlc: ActorRef = context.actorOf(/* .. */)
val device1: ActorRef = context.actorOf(Props(classOf[DeviceManager], dlc))
val device2: ActorRef = context.actorOf(Props(classOf[DeviceManager], dlc))
Then you can do as Tim suggested in his answer and capture the default case in the receive block and send this message to dlc:
class DeviceManager(dlc: ActorRef) extends Actor {
  def receive = {
    /* other case clauses */
    case msg =>
      dlc ! msg
  }
}
Also, instead of passing a mutable queue to DeadLetterChannel as a constructor argument, encapsulate this state inside the actor as an immutable queue and expose it only via message passing:
class DeadLetterChannel extends Actor with akka.actor.ActorLogging {
  var queue = collection.immutable.Queue.empty[Any]

  def receive = {
    case GetMessages => // GetMessages is a custom case object
      sender() ! queue
    case msg =>
      val s = sender()
      log.info(s"DLC just received the following message from [$s]: $msg")
      queue = queue.enqueue(msg)
  }
}
Another approach is to not define a default case (i.e., leave out the case msg => clause) and use Akka's mechanisms for dealing with unhandled messages. Below are a couple of options:
Enable the built-in logging for unhandled messages, which logs these messages at the DEBUG level:
akka {
  actor {
    debug {
      # enable DEBUG logging of unhandled messages
      unhandled = on
    }
  }
}
This approach is perhaps the simplest if you just want to log unhandled messages.
As the documentation states:
If the current actor behavior does not match a received message...by default publishes an akka.actor.UnhandledMessage(message, sender, recipient) on the actor system’s event stream.
In light of this, you could create an actor that subscribes to the event stream and handles akka.actor.UnhandledMessage messages. Such an actor could look like the following:
import akka.actor.{ Actor, UnhandledMessage, Props }

class UnhandledMessageListener extends Actor {
  def receive = {
    case UnhandledMessage(msg, sender, recipient) =>
      // do something with the message, such as log it and/or something else
  }
}

val listener = system.actorOf(Props[UnhandledMessageListener])
system.eventStream.subscribe(listener, classOf[UnhandledMessage])
More information about the event stream and creating a subscriber to that stream can be found here.

Websocket - Sink.actorRefWithAck and Source.queue - only one request TO server gets processed?

Consider this
def handle = WebSocket.accept[Array[Byte], Array[Byte]] { request =>
  log.info("Handling byte-message")
  ActorFlow.actorRef { out =>
    MyActor.props(out)
  }
}
Whenever a byte message is sent to the websocket, it gets delegated to the actor, and beforehand I get a log entry.
Works fine.
Now the same logic, with a Flow instead
def handle = WebSocket.accept[Array[Byte], Array[Byte]] { request =>
  log.info("Handling byte-message")
  Flow.fromSinkAndSource(sink, source).log("flow")
}
I'll add the rest of the code:
val tickingSource: Source[Array[Byte], Cancellable] =
  Source.tick(initialDelay = 1 second, interval = 10 seconds, tick = NotUsed)
    .map(_ => Wrapper().withKeepAlive(KeepAlive()).toByteArray)

val myActor = system.actorOf(Props { new MyActor(null) }, "myActor")

val serverMessageSource = Source
  .queue[Array[Byte]](10, OverflowStrategy.backpressure)
  .mapMaterializedValue { queue => myActor ! InitTunnel(queue) }

val sink = Sink.actorRefWithAck(myActor, InternalMessages.Init(), InternalMessages.Acknowledged(), InternalMessages.Completed())
val source = tickingSource.merge(serverMessageSource)
It merges a keepAlive source with an actual source for when the server wants to push something.
The sink is again the actor.
Now the problem is that in this scenario I get EXACTLY one message from the client TO the server; even if it sends more, they do not get passed to myActor.
At first I thought this might be due to the null reference passed to MyActor here, but then the first message could not have been processed either. I am out of ideas as to what is causing this. The flow itself works: I get the keepAlive messages just fine, and if I refresh the client (Scala.js), the first request is again sent to the server, the server responds, and all is well.
edit to clarify:
I am NOT talking about the log entry here - I am sorry, I had another log entry in myActor and got myself confused.
If the client sends more than one message the server does not handle it. It never reaches the actor, although the client definitely sends it :(
What I would expect:
1) At first message from client to server, the websocket gets created
2) The websocket is kept alive by the server, via the tickingSource (that actually works!)
3) If the client sends another request, it gets handled by myActor and that also responds to the client over the websocket
So, 3) does not work. In fact, the client sends a message, but that never reaches myActor after the initial one :(
edit:
This is my actor logic for initializing the websocket/stream in myActor:
var tunnel: Option[SourceQueueWithComplete[Array[Byte]]] = None

override def receive: Receive = {
  case i: InternalMessages.InitTunnel =>
    log.info("Initializing tunnel")
    tunnel = Some(i.sourceQueue)
  case _: InternalMessages.Init =>
    sender() ! InternalMessages.Acknowledged()
    log.info("websocket stream initialized")
  case _: InternalMessages.Completed =>
    log.info("websocket stream completed")
  case q: Question =>
    tunnel match {
      case Some(t) => t offer Answer()...
      case None => log.error("No tunnel available")
    }
}
object InternalMessages {
  case class Acknowledged()
  case class Init()
  case class Completed()
  case class InitTunnel(sourceQueue: SourceQueueWithComplete[Array[Byte]])
}
I have the feeling that you don't send acks after receiving the Question message, but you should, as the Akka docs say (http://doc.akka.io/docs/akka/current/scala/stream/stream-integrations.html#sink-actorrefwithack): "It also requires the given acknowledgement message after each stream element to make back-pressure work."
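In other words, the Question case would also need to reply with the ack message; a sketch, reusing the message types from the question:

case q: Question =>
  tunnel match {
    case Some(t) => t offer Answer() // ... as before
    case None => log.error("No tunnel available")
  }
  // Ack each element, otherwise back-pressure stops delivery after the first one.
  sender() ! InternalMessages.Acknowledged()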
I had almost the same problem in Java, but messages were not sent to the actorRefWithAck sink at all (just the onInitMessage was received). The actor was remote and was sending an "Acknowledged" message that was not the same instance as the one passed to Sink.actorRefWithAck(). Adding an equals method to the message resolved the issue.
@Override
public boolean equals(Object obj) {
    return obj.getClass().equals(getClass());
}

Akka-Http Websockets: How to Send consumers the same stream of data

I have a WebSocket that clients can connect to, and I also have a stream of data built with akka-streams. How can I make it so that all clients get the same data? At the moment they seem to be racing for the data.
Thanks
One way you could do this is to have an actor that extends ActorPublisher and have it subscribe to some message.
class MyPublisher extends ActorPublisher[MyData] {
  override def preStart = {
    context.system.eventStream.subscribe(self, classOf[MyData])
  }

  override def receive: Receive = {
    case msg: MyData =>
      if (isActive && totalDemand > 0) {
        // Pushes the message onto the stream
        onNext(msg)
      }
  }
}

object MyPublisher {
  def props(implicit ctx: ExecutionContext): Props = Props(new MyPublisher())
}

case class MyData(data: String)
You can then use that actor as the source for the stream:
val dataSource = Source.actorPublisher[MyData](MyPublisher.props(someExcutionContext))
You can then create a flow from that datasource and apply a transform to convert the data into a websocket message
val myFlow = Flow.fromSinkAndSource(Sink.ignore, dataSource map {d => TextMessage.Strict(d.data)})
Then you can use that flow in your route handling.
path("readings") {
handleWebsocketMessages(myFlow)
}
From the processing of the original stream, you can then publish the data to the event stream, and any instance of that actor will pick it up and put it onto the stream that its websocket is being served from.
val actorSystem = ActorSystem("foo")
val otherSource = Source.fromIterator(() => List(MyData("a"), MyData("b")).iterator)
otherSource.runForeach { msg => actorSystem.eventStream.publish(msg) }
Each socket will then have its own instance of the actor to provide it with data all coming from a single source.

MDC (Mapped Diagnostic Context) Logging in AKKA

I want to implement logback MDC logging in my Akka application to organize the logs and make them more informative; however, I have also read that MDC might not work well with Akka, because Akka has an asynchronous logging system (the MDC might be stored on a different thread). I used the custom dispatcher for MDC logging defined here, hoping to solve my problem, but I can't make it work in my application. My application is not a Play Framework app, though.
I have a RequestHandler Actor that receives different types of requests and delegates each to a RequestSpecificHandler Actor, which processes it.
class RequestHandler() extends Actor with akka.actor.ActorLogging {
  def receive: Receive = {
    // Requests
    case req: RequestA =>
      org.slf4j.MDC.put("messageId", req.msgId)
      org.slf4j.MDC.put("requestType", req.requestType)
      log.debug("FIRST LOG Received a RequestA")
      val actorA = context.actorOf(ActorA.props)
      actorA ! req.msg
    case req: RequestB => //...
    // other requests...

    // Responses
    case res: ResponseA =>
      log.debug("Received responseA")
      org.slf4j.MDC.remove("messageId")
      org.slf4j.MDC.remove("requestType")
    // other responses
  }
}
In my RequestSpecificHandler Actors, I also create new HelperActors or refer to existing ones:
class ActorA() extends Actor with akka.actor.ActorLogging {
  val helperA = context.actorSelection("/user/helperA")
  val helperB = context.actorOf(HelperB.props)

  def receive: Receive = {
    case msg: MessageTypeA =>
      // do some stuff
      log.debug("received MessageTypeA")
      helperA ! taskForA
    case doneByA =>
      // do some stuff
      log.debug("received doneByA")
      helperB ! taskForB
    case doneByB =>
      log.debug("send responseA")
      sender ! ResponseA
  }
}
The logging differs every time I send a request: sometimes it logs with the correct MDC messageId and requestType, sometimes the values are missing. Even the "FIRST LOG Received a RequestA" line behaves this way, although I assume it should always have the correct log stamp, as it is in the same class where I call MDC.put.
Here is my application.conf:
akka {
  log-dead-letters = 10
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = DEBUG
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"

  actor {
    default-dispatcher {
      type = "some.package.monitoring.MDCPropagatingDispatcherConfigurator"
    }
    ...
How can I do MDC logging so that all the logs (including dependency library logs) produced while handling a certain request carry the same messageId and requestType? Are there other ways to do this aside from a custom dispatcher for Akka? Also, what is a more organized way to declare the MDC.put and MDC.remove calls? Right now I have them in each case of receive.
Thanks
akka.actor.DiagnosticActorLogging should solve your problem.
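A minimal sketch of what that could look like, reusing the RequestA type from the question. DiagnosticActorLogging calls mdc() for every incoming message and attaches the returned map to each log statement emitted while that message is handled, so the per-case put/remove bookkeeping disappears:

import akka.actor.{Actor, DiagnosticActorLogging}
import akka.event.Logging.MDC

class RequestHandler extends Actor with DiagnosticActorLogging {
  // Called before receive for every message; the returned map is the MDC
  // for all log statements made while handling that message.
  override def mdc(currentMessage: Any): MDC = currentMessage match {
    case req: RequestA => Map("messageId" -> req.msgId, "requestType" -> req.requestType)
    case _ => Map.empty
  }

  def receive: Receive = {
    case req: RequestA =>
      log.debug("Received a RequestA") // carries messageId and requestType
  }
}

Note that this covers Akka's own logging adapter; logs emitted by third-party libraries directly through slf4j still rely on the MDC-propagating dispatcher approach.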