How to fix NullPointerException error in Scala - scala

When I receive a message from RabbitMQ I want to send it to an actor, but when I try to match the message I get a NullPointerException. It appears the trouble occurs when I match "msg" and try to send it to the actor. Without this part " msg match {
case _ => drawer ! Drawer.Data(msg)
}" everything works. How can I fix it?
import java.util.UUID
import Client.{GameBullet, GameTank}
import akka.actor.{Actor, ActorLogging, ActorRef}
import com.rabbitmq.client.AMQP.BasicProperties
import com.rabbitmq.client.{AMQP, ConnectionFactory, DefaultConsumer, Envelope}
import org.json4s._
import org.json4s.jackson.JsonMethods._
import org.json4s.jackson.Serialization
import org.json4s.native.Serialization.{read, write}
object MessageSender {
case object Left
case object Right
case object Up
case object Down
case object StartGame
case object MakeShot
case object Start
}
case class Message(id:String,content:String)
class MessageSender(drawer:ActorRef) extends Actor with ActorLogging{
import MessageSender._
val factory = new ConnectionFactory()
factory.setHost("localhost")
val connection = factory.newConnection()
val channel = connection.createChannel()
val replyQueueName: String = channel.queueDeclare().getQueue
val corrId = UUID.randomUUID().toString
channel.queueBind(replyQueueName, "myDirect", replyQueueName)
val props = new BasicProperties.Builder().correlationId(corrId).replyTo(replyQueueName).build()
var currentId = 0
var message = ""
implicit val formats = Serialization.formats(ShortTypeHints(List(classOf[Message])))
override def receive: Receive = {
case StartGame =>
message="startGame"
var response: String = null
var msg:String =""
val code = pretty(render(Extraction.decompose(Message(currentId.toString,message))))
println(code)
channel.basicPublish("myDirect", "service", props, code.getBytes("UTF-8"))
println(replyQueueName)
println(corrId)
while(response == null){
val consumer = new DefaultConsumer(channel) {
override def handleDelivery(consumerTag: String,
envelope: Envelope,
properties: AMQP.BasicProperties,
body: Array[Byte]) {
msg = new String(body, "UTF-8")
println(properties.getCorrelationId)
println(s"message is $msg")
if(properties.getCorrelationId == corrId){
response = new String(body, "UTF-8")
}
}
}
channel.basicConsume(replyQueueName, true, consumer)
}
currentId=response.toInt
log.info(s"Session started [$currentId]")
msg match {
case _ => drawer ! Drawer.Data(msg)
}
self ! Start
}
}
Error:
This is what I get when I try to match msg

According to the attached screenshot, the exception occurs on line 83 of this class.
The problem is therefore that the drawer member is null, so drawer ! Drawer.Data(msg) throws the NPE.
I would guess that the actor is created (using Props) with a variable that is defined later in the same scope, resulting in a "forward reference".
See my example in your other related question (Scala Akka NullPointerException error).
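A minimal sketch of that failure mode (the wiring below is an assumption about how the actors are created, not code from the question):
import akka.actor.{Actor, ActorRef, ActorSystem, Props}
class Drawer extends Actor {
  def receive = { case msg => println(s"drawing: $msg") }
}
class MessageSender(drawer: ActorRef) extends Actor {
  def receive = { case msg => drawer ! msg }
}
object ForwardReferenceDemo extends App {
  val system = ActorSystem("demo")
  // Broken ordering: `drawer` would be a forward reference here, so MessageSender
  // captures null and every `drawer ! ...` later throws a NullPointerException.
  // val messageSender = system.actorOf(Props(new MessageSender(drawer)), "messageSender")
  // val drawer = system.actorOf(Props[Drawer], "drawer")
  // Working ordering: initialize the dependency before the actor that needs it.
  val drawer = system.actorOf(Props[Drawer], "drawer")
  val messageSender = system.actorOf(Props(new MessageSender(drawer)), "messageSender")
  messageSender ! "hello"
}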

Related

How to emit messages from a Sink and pass them to another function?

I am currently building a client-side WebSockets consumer using Akka-HTTP. Instead of trying to do the parsing in the Sink, I wanted to wrap the code in a function which emits the outcome from the Sink, and then use this output (from the function) later for further processing (more parsing...etc.).
I am currently able to print every message from the Sink; however, the return type of the function remains Unit. My objective is to emit a String from the function for each item that lands in the Sink, and then use the returned string to do further parsing. Below is the code I have so far (note: it's mostly boilerplate).
import java.util.concurrent.atomic.AtomicInteger
import akka.Done
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.model.ws.{Message, TextMessage, WebSocketRequest, WebSocketUpgradeResponse}
import akka.http.scaladsl.settings.ClientConnectionSettings
import akka.stream.Materializer
import akka.stream.scaladsl.{Flow, Keep, Sink, Source}
import akka.util.ByteString
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.util.{Failure, Success, Try}
object client extends App {
def parseData(uri: String)(implicit system: ActorSystem, materializer: Materializer): Unit = {
val defaultSettings = ClientConnectionSettings(system)
val pingCounter = new AtomicInteger()
val customWebsocketSettings = defaultSettings.websocketSettings.withPeriodicKeepAliveData(
() => ByteString(s"debug-${pingCounter.incrementAndGet()}")
)
val customSettings = defaultSettings.withWebsocketSettings(customWebsocketSettings)
val outgoing = Source.maybe[Message]
val sink: Sink[Message, Future[Done]] = Sink.foreach[Message] {
case message: TextMessage.Strict => message.text // I Want to emit/stream this message as a String from the function (or get a handle on it from the outside)
case _ => println("Other")
}
val webSocketFlow: Flow[Message, Message, Future[WebSocketUpgradeResponse]] =
Http().webSocketClientFlow(WebSocketRequest(uri), settings = customSettings)
val (upgradeResponse, closed) =
outgoing
.viaMat(webSocketFlow)(Keep.right)
.toMat(sink)(Keep.both)
.run()
val connected = upgradeResponse.flatMap { upgrade =>
if (upgrade.response.status == StatusCodes.SwitchingProtocols) {
Future.successful(Done)
} else {
throw new RuntimeException(
s"Connection failed: ${upgrade.response.status}"
)
}
}
connected.onComplete {
case Success(value) => value
case Failure(exception) => throw exception
}
closed.onComplete { _ =>
println("Retrying...")
parseData(uri)
}
upgradeResponse.onComplete {
case Success(value) => println(value)
case Failure(exception) => throw exception
}
}
}
And in a separate object, I would like to do the parsing, so something like:
import akka.actor.ActorSystem
import akka.stream.Materializer
import api.client.parseData
object Application extends App {
implicit val system: ActorSystem = ActorSystem()
implicit val materializer: Materializer = Materializer(system)
val uri = "ws://localhost:8080/foobar"
val res = parseData(uri) // I want to handle the function output here
// parse(res)
println(res)
}
Is there a way I can get a handle on the Sink from outside the function, or do I need to do the parsing in the Sink? I am mainly trying not to overcomplicate the Sink.
Update: I am also considering if adding another Flow element to the stream (which handles the parsing) is a better practice than getting values outside of the stream.
Adding a flow element seems to solve your problem while being totally idiomatic.
What you have to keep in mind is that a sink's semantics describe how to "terminate" the stream, so while it can describe very complex computations, it will always return a single value, and that value is only returned once the stream ends.
Said differently, a sink does not return a value per stream element; it returns one value per whole stream.
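A minimal sketch of that idea, assuming the types from the question (parseLine below is a stand-in for the real parsing, not something in the original code): the per-element work moves into a Flow stage, and the Sink only decides how the stream terminates.
import akka.NotUsed
import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.stream.scaladsl.{Flow, Sink}
import scala.concurrent.Future
object ParsingStages {
  // Flow stage that turns incoming WebSocket messages into Strings
  val toText: Flow[Message, String, NotUsed] =
    Flow[Message].collect { case TextMessage.Strict(text) => text }
  // stand-in for the "further parsing" done per element
  def parseLine(line: String): String = line.trim
  val parsed: Flow[Message, String, NotUsed] = toText.map(parseLine)
  // the Sink still yields one value for the whole stream, e.g. all parsed elements
  val collectAll: Sink[String, Future[Seq[String]]] = Sink.seq[String]
}
Wired into the graph from the question, this would look roughly like outgoing.viaMat(webSocketFlow)(Keep.right).via(ParsingStages.parsed).toMat(ParsingStages.collectAll)(Keep.both).run().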

How to ensure a router actor in "group mode" has already associated its remote routees?

For a router actor in "group mode", we create a series of actors in advance, and then use actorOf to associate one router with the remote routees.
Internally this uses actorSelection; my question is: how can we make sure the association has already finished?
For actorSelection, we can use resolveOne to make sure the selection succeeded before sending a message. But what about the router actor? In the following code, how can I make sure line 2 is associated with the remote routees before sending a message to it, just like line 1 does?
package local
import akka.actor._
import akka.routing.RoundRobinGroup
import akka.util.Timeout
import scala.concurrent.duration._
import com.typesafe.config.ConfigFactory
import scala.util.{Failure, Success}
object Local extends App {
val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=2251")
val system = ActorSystem("LocalSystem", config.withFallback(ConfigFactory.load()))
system.actorOf(Props[LocalActor], "LocalActor")
val config2 = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=2252")
val system2 = ActorSystem("LocalSystem2", config2.withFallback(ConfigFactory.load()))
system2.actorOf(Props[LocalActor2], "LocalActor2")
}
class LocalActor extends Actor {
def receive = {
case _ => println("hi")
}
}
class LocalActor2 extends Actor {
import scala.concurrent.ExecutionContext.Implicits.global
implicit val timeout = Timeout(5 seconds)
val a = context.actorSelection("akka.tcp://LocalSystem#127.0.0.1:2251/user/LocalActor")
a.resolveOne().onComplete { // line 1
case Success(actor) => println("ready")
a ! 1
case Failure(ex) => println("not ready")
}
val paths = List("akka.tcp://LocalSystem#127.0.0.1:2251/user/LocalActor")
val b = context.actorOf(RoundRobinGroup(paths).props(), "LocalActorRouter") // line 2
b ! 1
def receive = {
case _ =>
}
}
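Not a verified answer, only a sketch that reuses the resolveOne idea from line 1: resolve every routee path first, and only create the group router (and send to it) once they have all resolved, messaging self so that context is never touched from a Future callback. It is meant to replace the body of LocalActor2 above and relies on its existing imports and implicit timeout:
import scala.concurrent.Future
case object RouteesReady
val routeePaths = List("akka.tcp://LocalSystem#127.0.0.1:2251/user/LocalActor")
Future.sequence(routeePaths.map(p => context.actorSelection(p).resolveOne()))
  .foreach(_ => self ! RouteesReady)
def receive = {
  case RouteesReady =>
    // the routees are reachable now, so the group router can be created and used
    val router = context.actorOf(RoundRobinGroup(routeePaths).props(), "LocalActorRouter")
    router ! 1
  case _ =>
}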

Akka Testkit does not work when using a state variable in a stub in any test case other than the last

I have a problem when using the following stub with Akka Testkit:
import akka.actor.{Props, Status}
import akka.camel.{CamelExtension, CamelMessage, Consumer}
import akka.testkit.{ImplicitSender, TestProbe}
import org.scalatest._
[import own packages]
import scala.concurrent.duration.DurationInt
import scala.language.postfixOps
class ActorSpec extends AkkaTestBase
with ImplicitSender with WordSpecLike
with Matchers with BeforeAndAfterAll with IOSugars {
var stubResponse: String = ""
var doNotRespond = false
val stubEndpoint = "direct:test"
val reportProbe = TestProbe()
val actorUnderTest= TestProbe()
// create stub consumer
val stubConsumer = system.actorOf(Props(new Consumer {
override val endpointUri = stubEndpoint
override def replyTimeout = 1 second
override def receive: Receive = {
case msg: CamelMessage if doNotRespond => ()
case msg: CamelMessage => sender() ! stubResponse;
case _ => sender() ! "ack"
}
}), "stubActor")
[...]
}
When I run the following test cases:
"Respond with failure when camel produces a failure" in {
doNotRespond = true
actorUnderTest ! // Send relevant message
// (which returns the () above because of doNotRespond set to true)
doNotRespond = false // Return to original state
testProbe.expectMsg // Expect relevant message
}
"Respond with a failure when 'ERROR' status code is returned" in {
stubResponse = readStringFromResource("/failure.xml")
actorUnderTest ! // send relevant message
expectMsgClass(classOf[Status.Failure])
}
Unfortunately this does not work. We get a timeout:
ERROR o.a.c.processor.DefaultErrorHandler - Failed delivery for (MessageId: xxx on ExchangeId: xxx). Exhausted after delivery attempt: 1 caught: java.util.concurrent.TimeoutException: Failed to get response from the actor [ActorEndpointPath(akka://xxx/stubActor] within timeout [1 second].
When the test case that changes the doNotRespond state variable is put last, everything does work.
Why is this, and how can it be fixed?

Akka-http process requests with Stream

I am trying to write a simple akka-http and akka-streams based application that handles HTTP requests, always with one precompiled stream, because I plan to use long-running processing with back-pressure in my requestProcessor stream.
My application code:
import akka.actor.{ActorSystem, Props}
import akka.http.scaladsl._
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server._
import akka.stream.ActorFlowMaterializer
import akka.stream.actor.ActorPublisher
import akka.stream.scaladsl.{Sink, Source}
import scala.annotation.tailrec
import scala.concurrent.Future
object UserRegisterSource {
def props: Props = Props[UserRegisterSource]
final case class RegisterUser(username: String)
}
class UserRegisterSource extends ActorPublisher[UserRegisterSource.RegisterUser] {
import UserRegisterSource._
import akka.stream.actor.ActorPublisherMessage._
val MaxBufferSize = 100
var buf = Vector.empty[RegisterUser]
override def receive: Receive = {
case request: RegisterUser =>
if (buf.isEmpty && totalDemand > 0)
onNext(request)
else {
buf :+= request
deliverBuf()
}
case Request(_) =>
deliverBuf()
case Cancel =>
context.stop(self)
}
@tailrec final def deliverBuf(): Unit =
if (totalDemand > 0) {
if (totalDemand <= Int.MaxValue) {
val (use, keep) = buf.splitAt(totalDemand.toInt)
buf = keep
use foreach onNext
} else {
val (use, keep) = buf.splitAt(Int.MaxValue)
buf = keep
use foreach onNext
deliverBuf()
}
}
}
object Main extends App {
val host = "127.0.0.1"
val port = 8094
implicit val system = ActorSystem("my-testing-system")
implicit val fm = ActorFlowMaterializer()
implicit val executionContext = system.dispatcher
val serverSource: Source[Http.IncomingConnection, Future[Http.ServerBinding]] = Http(system).bind(interface = host, port = port)
val mySource = Source.actorPublisher[UserRegisterSource.RegisterUser](UserRegisterSource.props)
val requestProcessor = mySource
.mapAsync(1)(fakeSaveUserAndReturnCreatedUserId)
.to(Sink.head[Int])
.run()
val route: Route =
get {
path("test") {
parameter('test) { case t: String =>
requestProcessor ! UserRegisterSource.RegisterUser(t)
???
}
}
}
def fakeSaveUserAndReturnCreatedUserId(param: UserRegisterSource.RegisterUser): Future[Int] =
Future.successful {
1
}
serverSource.to(Sink.foreach {
connection =>
connection handleWith Route.handlerFlow(route)
}).run()
}
I found a solution for how to create a Source that can dynamically accept new items to process, but I cannot find any solution for how to then obtain the result of the stream execution in my route.
The direct answer to your question is to materialize a new Stream for each HttpRequest and use Sink.head to get the value you're looking for. Modifying your code:
val requestStream =
mySource.map(fakeSaveUserAndReturnCreatedUserId)
.to(Sink.head[Int])
//.run() - don't materialize here
val route: Route =
get {
path("test") {
parameter('test) { case t: String =>
//materialize a new Stream here
val userIdFut : Future[Int] = requestStream.run()
requestProcessor ! UserRegisterSource.RegisterUser(t)
//get the result of the Stream
userIdFut onSuccess { case userId : Int => ...}
}
}
}
However, I think your question is ill-posed. In your code example the only thing you're using an akka Stream for is to create a new UserId. Futures readily solve this problem without the need for a materialized Stream (and all the accompanying overhead):
val route: Route =
get {
path("test") {
parameter('test) { case t: String =>
val user = RegisterUser(t)
fakeSaveUserAndReturnCreatedUserId(user) onSuccess { case userId : Int =>
...
}
}
}
}
If you want to limit the number of concurrent calls to fakeSaveUserAndReturnCreatedUserId then you can create an ExecutionContext with a fixed thread-pool size, as explained in the answer to this question, and use that ExecutionContext to create the Futures:
val ThreadCount = 10 //concurrent queries
val limitedExecutionContext =
ExecutionContext.fromExecutor(Executors.newFixedThreadPool(ThreadCount))
def fakeSaveUserAndReturnCreatedUserId(param: UserRegisterSource.RegisterUser): Future[Int] =
Future { 1 }(limitedExecutionContext)
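As a follow-up sketch that is not part of the original answer: one way to turn that Future[Int] into an HTTP response inside the route is akka-http's onComplete directive (the response strings below are made up for illustration):
import akka.http.scaladsl.model.StatusCodes
import scala.util.{Failure, Success}
val route: Route =
  get {
    path("test") {
      parameter('test) { t =>
        onComplete(fakeSaveUserAndReturnCreatedUserId(UserRegisterSource.RegisterUser(t))) {
          case Success(userId) => complete(s"created user $userId")
          case Failure(ex)     => complete(StatusCodes.InternalServerError -> ex.getMessage)
        }
      }
    }
  }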

Improve this actor calling futures

I have an actor (Worker) which basically asks 3 other actors (Filter1, Filter2 and Filter3) for a result. If any of them returns false, it's unnecessary to wait for the others, like an "and" operation over the results. When a false response is received, a cancel message is sent to the actors so that the queued work is cancelled, which makes the execution more efficient.
The filters aren't children of Worker; they are a common pool of actors used by all Worker actors. I use an Agent to maintain the collection of cancelled Works. Then, before a particular work is processed, I check in the cancel agent whether that work was cancelled, and if so skip its execution. Cancel has a higher priority than Work, so it is always processed first.
The code is something like this:
Proxy, which creates the actor tree:
import scala.collection.mutable.HashSet
import scala.concurrent.ExecutionContext.Implicits.global
import com.typesafe.config.Config
import akka.actor.Actor
import akka.actor.ActorLogging
import akka.actor.ActorSystem
import akka.actor.PoisonPill
import akka.actor.Props
import akka.agent.Agent
import akka.routing.RoundRobinRouter
class Proxy extends Actor with ActorLogging {
val agent1 = Agent(new HashSet[Work])
val agent2 = Agent(new HashSet[Work])
val agent3 = Agent(new HashSet[Work])
val filter1 = context.actorOf(Props(Filter1(agent1)).withDispatcher("priorityMailBox-dispatcher")
.withRouter(RoundRobinRouter(24)), "filter1")
val filter2 = context.actorOf(Props(Filter2(agent2)).withDispatcher("priorityMailBox-dispatcher")
.withRouter(RoundRobinRouter(24)), "filter2")
val filter3 = context.actorOf(Props(Filter3(agent3)).withDispatcher("priorityMailBox-dispatcher")
.withRouter(RoundRobinRouter(24)), "filter3")
//val workerRouter = context.actorOf(Props[SerialWorker].withRouter(RoundRobinRouter(24)), name = "workerRouter")
val workerRouter = context.actorOf(Props(new Worker(filter1, filter2, filter3)).withRouter(RoundRobinRouter(24)), name = "workerRouter")
def receive = {
case w: Work =>
workerRouter forward w
}
}
Worker:
import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.concurrent.duration.DurationInt
import akka.actor.Actor
import akka.actor.ActorLogging
import akka.actor.Props
import akka.actor.actorRef2Scala
import akka.pattern.ask
import akka.pattern.pipe
import akka.util.Timeout
import akka.actor.ActorRef
import akka.routing.RoundRobinRouter
import akka.agent.Agent
import scala.collection.mutable.HashSet
class Worker(filter1: ActorRef, filter2: ActorRef, filter3: ActorRef) extends Actor with ActorLogging {
implicit val timeout = Timeout(30.seconds)
def receive = {
case w:Work =>
val start = System.currentTimeMillis();
val futureF3 = (filter3 ? w).mapTo[Response]
val futureF2 = (filter2 ? w).mapTo[Response]
val futureF1 = (filter1 ? w).mapTo[Response]
val aggResult = Future.find(List(futureF3, futureF2, futureF1)) { res => !res.reponse }
Await.result(aggResult, timeout.duration) match {
case None =>
Nqueen.fact(10500000L)
log.info(s"[${w.message}] Processed message TRUE in ${System.currentTimeMillis() - start} ms");
sender ! WorkResponse(w, true)
case _ =>
filter1 ! Cancel(w)
filter2 ! Cancel(w)
filter3 ! Cancel(w)
log.info(s"[${w.message}] Processed message FALSE in ${System.currentTimeMillis() - start} ms");
sender ! WorkResponse(w, false)
}
}
}
and Filters:
import scala.collection.mutable.HashSet
import scala.util.Random
import akka.actor.Actor
import akka.actor.ActorLogging
import akka.actor.actorRef2Scala
import akka.agent.Agent
trait CancellableFilter { this: Actor with ActorLogging =>
//val canceledJobs = new HashSet[Int]
val agent: Agent[HashSet[Work]]
def cancelReceive: Receive = {
case Cancel(w) =>
agent.send(_ += w)
//log.info(s"[$t] The work will be cancelled (if it ever arrives...)")
}
def cancelled(w: Work): Boolean =
if (agent.get.contains(w)) {
agent.send(_ -= w)
true
} else {
false
}
}
abstract class Filter extends Actor with ActorLogging { this: CancellableFilter =>
val random = new Random(System.currentTimeMillis())
def response: Boolean
val timeToWait: Int
val timeToExecutor: Long
def receive = cancelReceive orElse {
case w:Work if !cancelled(w) =>
//log.info(s"[$t] Work arrived")
Thread.sleep(timeToWait)
Nqueen.fact(timeToExecutor)
val r = Response(response)
//log.info(s"[$t] Responded ${r.reponse}")
sender ! r
}
}
object Filter1 {
def apply(agente: Agent[HashSet[Work]]) = new Filter with CancellableFilter {
val timeToWait = 74
val timeToExecutor = 42000000L
val agent = agente
def response = true //random.nextBoolean
}
}
object Filter2 {
def apply(agente: Agent[HashSet[Work]]) = new Filter with CancellableFilter {
val timeToWait = 47
val timeToExecutor = 21000000L
val agent = agente
def response = true //random.nextBoolean
}
}
object Filter3 {
def apply(agente: Agent[HashSet[Work]]) = new Filter with CancellableFilter {
val timeToWait = 47
val timeToExecutor = 21000000L
val agent = agente
def response = true //random.nextBoolean
}
}
Basically, I think the Worker code is ugly and I want to make it better. Could you help me improve it?
The other point I want to improve is the cancel message. As I don't know which of the filters are already done, I need to send Cancel to all of them, so at least one cancel is redundant (since that work is already completed).
It is minor, but why don't you store the filters as a sequence? filters.foreach(_ ! Cancel(w)) is nicer than
filter1 ! Cancel(w)
filter2 ! Cancel(w)
filter3 ! Cancel(w)
Same for other cases:
class Worker(filter1: ActorRef, filter2: ActorRef, filter3: ActorRef) extends Actor with ActorLogging {
private val filters = Seq(filter1, filter2, filter3)
implicit val timeout = Timeout(30.seconds)
def receive = {
case w:Work =>
val start = System.currentTimeMillis();
val futures = filters.map { f =>
(f ? w).mapTo[Response]
}
val aggResult = Future.find(futures) { res => !res.reponse }
Await.result(aggResult, timeout.duration) match {
case None =>
Nqueen.fact(10500000L)
log.info(s"[${w.message}] Processed message TRUE in ${System.currentTimeMillis() - start} ms");
sender ! WorkResponse(w, true)
case _ =>
filters.foreach(_ ! Cancel(w))
log.info(s"[${w.message}] Processed message FALSE in ${System.currentTimeMillis() - start} ms");
sender ! WorkResponse(w, false)
}
}
You may also consider writing the constructor as Worker(filters: ActorRef*) if you do not want to enforce exactly three filters. I think it is okay to send off one redundant cancel (the alternatives I see are overly complicated). I'm not sure, but if the filters are created very quickly, they may get Randoms initialized with the same seed value.
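A sketch of that varargs variant, reusing the imports and the Work/Response/WorkResponse/Cancel types already shown in the question's Worker, and otherwise the same logic as the code above:
class Worker(filters: ActorRef*) extends Actor with ActorLogging {
  implicit val timeout = Timeout(30.seconds)
  def receive = {
    case w: Work =>
      val futures = filters.toList.map(f => (f ? w).mapTo[Response])
      val aggResult = Future.find(futures) { res => !res.reponse }
      Await.result(aggResult, timeout.duration) match {
        case None =>
          sender ! WorkResponse(w, true)
        case _ =>
          filters.foreach(_ ! Cancel(w))
          sender ! WorkResponse(w, false)
      }
  }
}
// created with, for example:
// context.actorOf(Props(new Worker(filter1, filter2, filter3)).withRouter(RoundRobinRouter(24)), name = "workerRouter")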