In the test below, I try to simulate a timeout and then send a normal request. However, I get spray.can.Http$ConnectionException: Premature connection close (the server doesn't appear to support request pipelining).
class SprayCanTest extends ModuleTestKit("/SprayCanTest.conf") with FlatSpecLike with Matchers {
import system.dispatcher
var app = Actor.noSender
protected override def beforeAll(): Unit = {
super.beforeAll()
app = system.actorOf(Props(new MockServer))
}
override protected def afterAll(): Unit = {
system.stop(app)
super.afterAll()
}
"response time out" should "work" in {
val setup = Http.HostConnectorSetup("localhost", 9101, false)
connect(setup).onComplete {
case Success(conn) => {
conn ! HttpRequest(HttpMethods.GET, "/timeout")
}
}
expectMsgPF() {
case Status.Failure(t) =>
t shouldBe a[RequestTimeoutException]
}
}
"normal http response" should "work" in {
//Thread.sleep(5000)
val setup = Http.HostConnectorSetup("localhost", 9101, false)
connect(setup).onComplete {
case Success(conn) => {
conn ! HttpRequest(HttpMethods.GET, "/hello")
}
}
expectMsgPF() {
case HttpResponse(status, entity, _, _) =>
status should be(StatusCodes.OK)
entity should be(HttpEntity("Helloworld"))
}
}
def connect(setup: HostConnectorSetup)(implicit system: ActorSystem) = {
// for the actor 'asks'
import system.dispatcher
implicit val timeout: Timeout = Timeout(1 second)
(IO(Http) ? setup) map {
case Http.HostConnectorInfo(connector, _) => connector
}
}
class MockServer extends Actor {
//implicit val timeout: Timeout = 1.second
implicit val system = context.system
// Register connection service
IO(Http) ! Http.Bind(self, interface = "localhost", port = 9101)
def receive: Actor.Receive = {
case _: Http.Connected => sender ! Http.Register(self)
case HttpRequest(GET, Uri.Path("/timeout"), _, _, _) => {
Thread.sleep(3000)
sender ! HttpResponse(entity = HttpEntity("ok"))
}
case HttpRequest(GET, Uri.Path("/hello"), _, _, _) => {
sender ! HttpResponse(entity = HttpEntity("Helloworld"))
}
}
}
}
And my config for the test:
spray {
can {
client {
response-chunk-aggregation-limit = 0
connecting-timeout = 1s
request-timeout = 1s
}
host-connector {
max-retries = 0
}
}
}
I found that in both cases the "conn" object is the same.
So I guess that when the RequestTimeoutException happens, spray puts the conn back into the pool (of, by default, 4 connections?) and the next case uses the same conn, but at that point this connection is kept alive, so the server treats the request as a chunked request.
If I put a short sleep before the second case, it just passes.
So I guess I must close the conn when the RequestTimeoutException occurs and make sure the second case uses a fresh connection, right?
How should I do that? Are there any configuration settings for this?
Thanks
Leon
You should not block inside an Actor (your MockServer): while it is blocked, it is unable to respond to any messages. You can wrap the Thread.sleep and the response inside a Future, or even better, use the Akka scheduler. Be sure to assign sender() to a val, because it may have changed by the time you respond to the request asynchronously. This should do the trick:
import context.dispatcher // implicit ExecutionContext needed by scheduleOnce
val savedSender = sender()
context.system.scheduler.scheduleOnce(3 seconds){
savedSender ! HttpResponse(entity = HttpEntity("ok"))
}
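The Future variant mentioned above would look roughly like the sketch below (the same import context.dispatcher supplies the implicit ExecutionContext); the blocking sleep then runs on the dispatcher's thread pool instead of stalling the actor's message processing:
import scala.concurrent.Future

val savedSender = sender()
Future {
  Thread.sleep(3000) // the blocking call no longer stalls the MockServer's mailbox
  savedSender ! HttpResponse(entity = HttpEntity("ok"))
}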
I'm starting to explore the new Akka Typed API. I'm trying to run an updated version of the random router from this blog post.
My router is largely the same:
import java.util.concurrent.ThreadLocalRandom
import akka.actor.Address
import akka.actor.typed.{ActorRef, Behavior}
import akka.actor.typed.receptionist.{Receptionist, ServiceKey}
import akka.actor.typed.scaladsl.Behaviors
import akka.cluster.ClusterEvent.{ReachabilityEvent, ReachableMember, UnreachableMember}
import akka.cluster.typed.{Cluster, Subscribe}
object RandomRouter {
private final case class WrappedReachabilityEvent(event: ReachabilityEvent)
// subscribes to cluster reachability events and
// avoids routees that are unreachable
def clusterRouter[T](serviceKey: ServiceKey[T]): Behavior[T] =
Behaviors.setup[Any] { ctx ⇒
ctx.system.receptionist ! Receptionist.Subscribe(serviceKey, ctx.self)
val cluster = Cluster(ctx.system)
// typically you have to map such external messages into this
// actor's protocol with a message adapter
val reachabilityAdapter: ActorRef[ReachabilityEvent] = ctx.messageAdapter(WrappedReachabilityEvent.apply)
cluster.subscriptions ! Subscribe(reachabilityAdapter, classOf[ReachabilityEvent])
def routingBehavior(routees: Vector[ActorRef[T]], unreachable: Set[Address]): Behavior[Any] =
Behaviors.receive { (ctx, msg) ⇒
msg match {
case serviceKey.Listing(services) ⇒
if (services.isEmpty) {
ctx.log.info("Found no services")
} else {
ctx.log.info(s"Found services: ${services.map(_.path.name).mkString(", ")}")
}
routingBehavior(services.toVector, unreachable)
case WrappedReachabilityEvent(event) => event match {
case UnreachableMember(m) =>
ctx.log.warning(s"Member ${m.address} has become unreachable")
routingBehavior(routees, unreachable + m.address)
case ReachableMember(m) =>
ctx.log.info(s"Member ${m.address} has become reachable again")
routingBehavior(routees, unreachable - m.address)
}
case other: T @unchecked ⇒
if (routees.isEmpty)
Behaviors.unhandled
else {
val reachableRoutes =
if (unreachable.isEmpty) routees
else routees.filterNot { r => unreachable(r.path.address) }
val i = ThreadLocalRandom.current.nextInt(reachableRoutes.size)
reachableRoutes(i) ! other
Behaviors.same
}
}
}
routingBehavior(Vector.empty, Set.empty)
}.narrow[T]
}
And my cluster spins off dummy actors:
object DummyActor {
def behavior[T](serviceKey: ServiceKey[T]): Behavior[Any] = Behaviors.setup { ctx =>
ctx.log.info("Woohoo, I'm alive!")
Behaviors.empty
}
}
with the following:
object MyCluster {
val serviceKey: ServiceKey[String] = ServiceKey[String]("cluster")
val behavior: Behavior[String] = Behaviors.setup { ctx =>
(1 to 5).foreach { i =>
ctx.log.info("I'm so sleepy...")
Thread.sleep(500)
ctx.log.info(s"Spawning actor #$i")
ctx.spawnAnonymous(DummyActor.behavior(serviceKey))
ctx.log.info("I'm tired again...")
Thread.sleep(500)
}
val router = ctx.spawn(RandomRouter.clusterRouter(serviceKey), "router")
Behaviors.stopped
}
}
When I run the following main, I always see "Found no services" in my log, which I take to mean that none of my dummy actors registered with the cluster receptionist.
import akka.actor.typed.ActorSystem
object Main extends App {
val system = ActorSystem(MyCluster.behavior, "cluster-system")
}
What am I missing? I'm using Akka 2.5.12.
The dummy actors need to register! It doesn't happen automatically. This was solved by adding the following line in the setup block:
ctx.system.receptionist ! Receptionist.Register(serviceKey, ctx.self)
object DummyActor {
def behavior[T](serviceKey: ServiceKey[T]): Behavior[Any] = Behaviors.setup { ctx =>
ctx.system.receptionist ! Receptionist.Register(serviceKey, ctx.self)
ctx.log.info("Woohoo, I'm alive!")
Behaviors.empty
}
}
I have implemented a receiver that is supposed to connect to a WebSocket stream and pass the messages on for processing. Here is the implementation I have so far:
class WebSocketReader (wsConfig: WebSocketConfig, stringMessageHandler: String => Option[String],
storageLevel: StorageLevel) extends Receiver[String] (storageLevel) {
// TODO: avoid using a var
private var wsClient: WebSocketClient = _
def sendRequest(isRequest: Boolean, msgCount: Int) = {
while (isRequest) {
wsClient.send(msgCount.toString)
Thread.sleep(1000)
}
}
// TODO: avoid using Synchronization...
private def connect(): Unit = {
Try {
wsClient = createWsClient
} match {
case Success(_) =>
wsClient.connect().map {
case result if result.isSuccess =>
sendRequest(true, 10)
case _ =>
connect()
}
case Failure(ex) =>
// TODO: how to signal a failure so that it is tried the next time....
ex.printStackTrace()
}
}
def onStart(): Unit = {
new Thread(getClass.getSimpleName) {
override def run() { connect() }
}.start()
}
override def onStop(): Unit =
if (wsClient != null) wsClient.disconnect()
private def createWsClient = {
new DefaultHookupClient(new HookupClientConfig(new URI(wsConfig.wsUrl))) {
override def receive: Receive = {
case Disconnected(_) =>
// TODO: use Logging framework, try reconnecting....
println(s"the web socket is disconnected")
case TextMessage(message) =>
stringMessageHandler(message).foreach(store)
case JsonMessage(jsValue) =>
stringMessageHandler(jsValue.toString).foreach(store)
}
}
}
}
How is this Receiver run? Does it run on the worker nodes or on the driver node? And is sleeping a thread like this the correct approach?
The reason I want to do this is that the server exposing the WebSocket endpoint needs a count of the messages I want to receive: if I ask the server for 100 messages, it gives me 100 messages, and so on. So I need a way to schedule this request to the server periodically. Currently I'm using the Thread.sleep mechanism. Is this advisable? What would be the alternative?
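One possible alternative (not from the original post, just a sketch) is to replace the while/Thread.sleep loop in sendRequest with a scheduled task; the hypothetical scheduleRequests below assumes wsClient is already connected and that onStop() also shuts the scheduler down:
import java.util.concurrent.{Executors, TimeUnit}

// hypothetical replacement for the while/Thread.sleep loop in sendRequest
private val requestScheduler = Executors.newSingleThreadScheduledExecutor()

private def scheduleRequests(msgCount: Int): Unit =
  requestScheduler.scheduleAtFixedRate(new Runnable {
    override def run(): Unit = wsClient.send(msgCount.toString)
  }, 0, 1, TimeUnit.SECONDS)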
I have the following flow:
val actorSource = Source.actorRef(10000, OverflowStrategy.dropHead)
val targetSink = Flow[ByteString]
.map(_.utf8String)
.via(new JsonStage())
.map { json =>
MqttMessages.jsonToObject(json)
}
.to(Sink.actorRef(self, "Done"))
sourceRef = Some(Flow[ByteString]
.via(conn.flow)
.to(targetSink)
.runWith(actorSource))
within an Actor (the one registered via Sink.actorRef). conn.flow is the flow of an incoming TCP connection obtained from Tcp().bind(address, port).
Currently the Sink.actorRef actor keeps running when the TCP connection is closed from the client side. Is there a way to be notified of the client-side termination of the TCP connection so that the actor can be shut down?
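For context, the actor presumably gets the IncomingConnection handed to it roughly as in the sketch below (this wiring is not shown in the question; handlerActor stands for the actor whose code follows, and an implicit ActorSystem and materializer are assumed):
import akka.stream.scaladsl.Tcp

// hypothetical wiring: forward every accepted connection to the handling actor
Tcp().bind(address, port)
  .runForeach { conn => handlerActor ! conn }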
Edit:
I tried handling both cases as suggested:
case "Done" =>
context.stop(self)
case akka.actor.Status.Failure =>
context.stop(self)
But when I test with a socket client and then kill it, the actor is not shut down. So neither the "Done" message nor the Failure seems to arrive when the TCP connection is terminated.
Here is the whole code:
private var connection: Option[Tcp.IncomingConnection] = None
private var mqttpubsub: Option[ActorRef] = None
private var sourceRef: Option[ActorRef] = None
private val sdcTopic = "out"
private val actorSource = Source.actorRef(10000, OverflowStrategy.dropHead)
implicit private val system = context.system
implicit private val mat = ActorMaterializer.create(context.system)
override def receive: Receive = {
case conn: Tcp.IncomingConnection =>
connection = Some(conn)
mqttpubsub = Some(context.actorOf(Props(classOf[MqttPubSub], PSConfig(
brokerUrl = "tcp://127.0.0.1:1883", //all params is optional except brokerUrl
userName = null,
password = null,
//messages received when disconnected will be stash. Messages isOverdue after stashTimeToLive will be discard
stashTimeToLive = 1.minute,
stashCapacity = 100000, //stash messages will be drop first haft elems when reach this size
reconnectDelayMin = 10.millis, //for fine tuning re-connection logic
reconnectDelayMax = 30.seconds
))))
val targetSink = Flow[ByteString]
.alsoTo(Sink.foreach(println))
.map(_.utf8String)
.via(new JsonStage())
.map { json =>
MqttMessages.jsonToObject(json)
}
.to(Sink.actorRef(self, "Done"))
sourceRef = Some(Flow[ByteString]
.via(conn.flow)
.to(targetSink)
.runWith(actorSource))
case msg: MqttMessages.MqttMessage =>
processMessage(msg)
case msg: Message =>
val jsonMsg = JsonParser(msg.payload).asJsObject
val mqttMsg = MqttMessages.jsonToObject(jsonMsg)
try {
sourceRef.foreach(_ ! ByteString(msg.payload))
} catch {
case e: Throwable => e.printStackTrace()
}
case SubscribeAck(Subscribe(topic, self, qos), fail) =>
case "Done" =>
context.stop(self)
case akka.actor.Status.Failure =>
context.stop(self)
}
the Actor keeps running
Which actor do you mean, the one you've registered with Sink.actorRef? If so, then to shut it down when the stream shuts down, you need to handle the "Done" and akka.actor.Status.Failure messages in it and invoke context.stop(self) explicitly. The "Done" message will be sent when the stream closes successfully, while Status.Failure will be sent if there is an error.
For more information see Sink.actorRef API docs, they explain the termination semantics.
I ended up creating another stage, which passes elements through unchanged and emits one additional message downstream when the upstream closes:
class TcpStage extends GraphStage[FlowShape[ByteString, ByteString]] {
val in = Inlet[ByteString]("TCPStage.in")
val out = Outlet[ByteString]("TCPStage.out")
override val shape = FlowShape.of(in, out)
override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = new GraphStageLogic(shape) {
setHandler(out, new OutHandler {
override def onPull(): Unit = {
if (isClosed(in)) emitDone()
else pull(in)
}
})
setHandler(in, new InHandler {
override def onPush(): Unit = {
push(out, grab(in))
}
override def onUpstreamFinish(): Unit = {
emitDone()
completeStage()
}
})
private def emitDone(): Unit = {
push(out, ByteString("{ }".getBytes("utf-8")))
}
}
}
Which I then use in my flow:
val targetSink = Flow[ByteString]
.via(new TcpStage())
.map(_.utf8String)
.via(new JsonStage())
.map { json =>
MqttMessages.jsonToObject(json)
}
.to(Sink.actorRef(self, MqttDone))
sourceRef = Some(Flow[ByteString]
.via(conn.flow)
.to(targetSink)
.runWith(actorSource))
Following the Akka Cluster documentation, I have the Worker Dial-in example running.
http://doc.akka.io/docs/akka/snapshot/java/cluster-usage.html
So I've been trying to integrate that with spray routing.
My idea is to have a cluster behind the scenes and call that service through an HTTP REST interface.
So I have the following code.
object Boot extends App {
val port = if (args.isEmpty) "0" else args(0)
val config =
ConfigFactory
.parseString(s"akka.remote.netty.tcp.port=$port")
.withFallback(ConfigFactory.parseString("akka.cluster.roles = [frontend]"))
.withFallback(ConfigFactory.load())
val system = ActorSystem("ClusterSystem", config)
val frontend = system.actorOf(Props[TransformationFrontend], name = "frontend")
implicit val actSystem = ActorSystem()
IO(Http) ! Http.Bind(frontend, interface = config.getString("http.interface"), port = config.getInt("http.port"))
}
class TransformationFrontend extends Actor {
var backends = IndexedSeq.empty[ActorRef]
var jobCounter = 0
implicit val timeout = Timeout(5 seconds)
override def receive: Receive = {
case _: Http.Connected => sender ! Http.Register(self)
case HttpRequest(GET, Uri.Path("/job"), _, _, _) =>
jobCounter += 1
val backend = backends(jobCounter % backends.size)
val originalSender = sender()
val future : Future[TransformationResult] = (backend ? new TransformationJob(jobCounter + "-job")).mapTo[TransformationResult]
future onComplete {
case Success(s) =>
println("received from backend: " + s.text)
originalSender ! s.text
case Failure(f) => println("error found: " + f.getMessage)
}
case job: TransformationJob if backends.isEmpty =>
sender() ! JobFailed("Service unavailable, try again later", job)
case job: TransformationJob =>
jobCounter += 1
backends(jobCounter % backends.size) forward job
case BackendRegistration if !backends.contains(sender()) =>
println("backend registered")
context watch sender()
backends = backends :+ sender()
case Terminated(a) =>
backends = backends.filterNot(_ == a)
}
}
But what I really want to do is combine the spray routing DSL with that pattern matching.
Instead of writing my GET handler like the above, I would like to write it like this:
path("job") {
get {
respondWithMediaType(`application/json`) {
complete {
(backend ? new TransformationJob(jobCounter + "-job")).mapTo[TransformationResult]
}
}
}
}
But when my actor extends the spray routing trait to get this DSL, I have to define the following:
def receive = runRoute(defaultRoute)
How can I combine this approach with the pattern matching in my TransformationFrontend actor (BackendRegistration, Terminated, TransformationJob)?
You can compose PartialFunctions like Receive with PartialFunction.orElse:
class TransformationFrontend extends Actor {
// ...
def myReceive: Receive = {
case job: TransformationJob => // ...
// ...
}
def defaultRoute: Route =
get {
// ...
}
override def receive: Receive = runRoute(defaultRoute) orElse myReceive
}
That said, it often makes sense to split up functionality into several actors (as suggested in the comment above) if possible.
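As a rough sketch of that suggestion (RoutingActor and the "rest-job" id are made-up names; spray-routing's HttpServiceActor base class is used, and TransformationJob/TransformationResult are the sample's shared messages assumed to be in scope), the route could live in its own actor that simply asks the existing TransformationFrontend:
import akka.actor.ActorRef
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import spray.routing.HttpServiceActor

class RoutingActor(frontend: ActorRef) extends HttpServiceActor {
  import context.dispatcher                 // ExecutionContext for mapping the ask result
  implicit val timeout = Timeout(5.seconds)

  def receive = runRoute {
    path("job") {
      get {
        complete {
          // delegate to the cluster-facing frontend and complete with its text
          (frontend ? new TransformationJob("rest-job"))
            .mapTo[TransformationResult]
            .map(_.text)
        }
      }
    }
  }
}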
I'm writing an Interactive Brokers API using Scala and Akka actors.
I have a Client actor that connects to the server and communicates with the IO manager to send requests to and receive responses from TWS. The connection works fine and I'm able to send a request and get the response.
However, after about a minute I automatically receive a PeerClosed message from the IO manager. I would like the connection to stay open unless I explicitly close it. I tried setting keepOpenOnPeerClosed = true, but it changes nothing.
Here is the Actor:
class Client(remote: InetSocketAddress, clientId: Int, extraAuth: Boolean, onConnected: Session => Unit, listener: EWrapper) extends Actor {
final val ClientVersion: Int = 63
final val ServerVersion: Int = 38
final val MinServerVerLinking: Int = 70
import Tcp._
import context.system
IO(Tcp) ! Connect(remote)
def receive = {
case CommandFailed(_: Connect) =>
print("connect failed")
context stop self
case c @ Connected(remote, local) => {
val connection = sender()
connection ! Register(self, keepOpenOnPeerClosed = true)
context become connected(connection,1)
val clientVersionBytes = ByteString.fromArray(String.valueOf(ClientVersion).getBytes() ++ Array[Byte](0.toByte))
println("Sending Client Version " + clientVersionBytes)
sender() ! Write(clientVersionBytes)
}
}
def connected(connection: ActorRef, serverVersion: Int): Receive = {
case request: Request =>
print("Send request " + request)
connection ! Write(ByteString(request.toBytes(serverVersion)))
case CommandFailed(w: Write) =>
connection ! Close
print("write failed")
case Received(data) => {
println(data)
implicit val is = new DataInputStream(new ByteArrayInputStream(data.toArray))
EventDispatcher.consumers.get(readInt()) match {
case Some(consumer) => {
consumer.consume(listener, serverVersion)
}
case None => {
listener.error(EClientErrors.NoValidId, EClientErrors.UnknownId.code, EClientErrors.UnknownId.msg)
}
}
}
case _ : ConnectionClosed => context stop self
}
I don't see the same behaviour if I connect using the IBJts API (which uses a standard Java Socket).
Have you tried it with the keep-alive socket option? Tcp.SO.KeepAlive is a SocketOption rather than a command, so it has to be passed along with the Connect message instead of being sent to the connection actor:
IO(Tcp) ! Connect(remote, options = List(Tcp.SO.KeepAlive(on = true)))