Akka Tcp: create a peer-to-peer architecture instead of client-server - Scala

In the current Akka documentation there is a nice example of creating a client-server architecture. I'm creating an Akka actor that can send and receive messages on the bitcoin protocol. So far I've been able to send messages and receive replies to the messages I sent, but I haven't been able to receive unsolicited messages as required by the peer-to-peer protocol.
I've tried to use Tcp.Bind and Tcp.Connect to listen for unsolicited messages on port 18333 while also being able to send messages to a peer on the network. However, I run into an issue where either the port is reported as already bound (by the Tcp.Connect command) or messages can't be sent from that port (due to the Tcp.Bind command).
How can I send messages and receive unsolicited messages on the same port? Am I missing something here?
sealed trait Client extends Actor with BitcoinSLogger {
/**
* The address of the peer we are attempting to connect to
* on the p2p network
* @return
*/
def remote: InetSocketAddress
/**
* The actor that is listening to all communications between the
* client and its peer on the network
* @return
*/
def listener : ActorRef
def actorSystem : ActorSystem
/**
* The manager is an actor that handles the underlying low level I/O resources (selectors, channels)
* and instantiates workers for specific tasks, such as listening to incoming connections.
*/
def manager : ActorRef = IO(Tcp)(actorSystem)
/**
* This actor signifies the node we are connected to on the p2p network
* This is set when we receive a [[Tcp.Connected]] message
*/
private var peer : Option[ActorRef] = None
def receive = {
case message : Tcp.Message => message match {
case event : Tcp.Event =>
logger.debug("Event: " + event)
handleEvent(event)
case command : Tcp.Command =>
logger.debug("Command: " + command)
handleCommand(command)
}
case unknownMessage => throw new IllegalArgumentException("Unknown message for client: " + unknownMessage)
}
/**
* This function is responsible for handling a [[Tcp.Event]] algebraic data type
* @param event
*/
private def handleEvent(event : Tcp.Event) = event match {
case Tcp.Bound(localAddress) =>
logger.debug("Actor is now bound to the local address: " + localAddress)
case Tcp.CommandFailed(w: Tcp.Write) =>
logger.debug("Client write command failed: " + Tcp.CommandFailed(w))
logger.debug("O/S buffer was full")
// O/S buffer was full
//listener ! "write failed"
case Tcp.CommandFailed(command) =>
logger.debug("Client Command failed:" + command)
case Tcp.Received(data) =>
logger.debug("Received data from our peer on the network: " + BitcoinSUtil.encodeHex(data.toArray))
//listener ! data
case Tcp.Connected(remote, local) =>
logger.debug("Tcp connection to: " + remote)
logger.debug("Local: " + local)
peer = Some(sender)
peer.get ! Tcp.Register(listener)
listener ! Tcp.Connected(remote,local)
case Tcp.ConfirmedClosed =>
logger.debug("Client received confirmed closed msg: " + Tcp.ConfirmedClosed)
peer = None
context stop self
}
/**
* This function is responsible for handling a [[Tcp.Command]] algebraic data type
* @param command
*/
private def handleCommand(command : Tcp.Command) = command match {
case Tcp.ConfirmedClose =>
logger.debug("Client received connection closed msg: " + Tcp.ConfirmedClose)
listener ! Tcp.ConfirmedClose
peer.get ! Tcp.ConfirmedClose
}
}
case class ClientImpl(remote: InetSocketAddress, network : NetworkParameters,
listener: ActorRef, actorSystem : ActorSystem) extends Client {
manager ! Tcp.Bind(listener, new InetSocketAddress(network.port))
//this eagerly connects the client with our peer on the network as soon
//as the case class is instantiated
manager ! Tcp.Connect(remote)
}
object Client {
def props(remote : InetSocketAddress, network : NetworkParameters, listener : ActorRef, actorSystem : ActorSystem) : Props = {
Props(classOf[ClientImpl], remote, network, listener, actorSystem)
}
def apply(remote : InetSocketAddress, network : NetworkParameters, listener : ActorRef, actorSystem : ActorSystem) : ActorRef = {
actorSystem.actorOf(props(remote, network, listener, actorSystem))
}
def apply(network : NetworkParameters, listener : ActorRef, actorSystem : ActorSystem) : ActorRef = {
//val randomSeed = ((Math.random() * 10) % network.dnsSeeds.size).toInt
val remote = new InetSocketAddress(network.dnsSeeds(0), network.port)
Client(remote, network, listener, actorSystem)
}
}
EDIT: Adding a test case that uses my actor
"Client" must "connect to a node on the bitcoin network, " +
"send a version message to a peer on the network and receive a version message back, then close that connection" in {
val probe = TestProbe()
val client = Client(TestNet3, probe.ref, system)
val conn : Tcp.Connected = probe.expectMsgType[Tcp.Connected]
val versionMessage = VersionMessage(TestNet3, conn.localAddress.getAddress,conn.remoteAddress.getAddress)
val networkMessage = NetworkMessage(TestNet3, versionMessage)
client ! networkMessage
val receivedMsg = probe.expectMsgType[Tcp.Received](5.seconds)
//~~~~~~~~THIS IS WHERE THE TEST IS FAILING~~~~~~~~~~~~~~~~~~
//the bitcoin protocol states that after exchanging version messages a verack message is sent if the version message is accepted
//this is appearing on wireshark, but not being found by my actor
val verackMessage = probe.expectMsgType[Tcp.Received](2.seconds)
}
EDIT2:
Wireshark output showing that I am receiving these messages, but Akka is not registering them

The core abstraction of Akka is Actors, so peers in Tcp are just Actors that you can receive messages from AND send messages to.
In this case you can get the ActorRef of your peer by calling sender() once you've received a Tcp.Connected message. In your code you are already saving that ref in peer. It should be as simple as peer.get ! Write(data) to send arbitrary data back to that peer.
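For example, a minimal sketch of a helper you could add to your Client trait (this is not part of any library API; it just reuses the peer and logger members defined above):

import akka.io.Tcp
import akka.util.ByteString

// Sketch: push raw bytes to the connected peer once Tcp.Connected has been handled.
def sendToPeer(data: ByteString): Unit = peer match {
  case Some(p) => p ! Tcp.Write(data)
  case None    => logger.debug("Not connected to a peer yet, dropping " + data.length + " bytes")
}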
Since the connection could break at any point, the docs appear to be using actor supervision to handle this:
class SimpleClient(connection: ActorRef, remote: InetSocketAddress)
extends Actor with ActorLogging {
import Tcp._
// sign death pact: this actor terminates when connection breaks
context watch connection
...
}
Update
(This took me way too long to realize.) The issue you are having is that you are not explicitly handling message framing: i.e., the mechanics of buffer accumulation and message reconstruction. Akka TCP only hands you raw buffers. These buffers do NOT necessarily break on message boundaries, nor do they know anything about the messages of higher-level protocols, like Bitcoin, that ride on top of TCP.
If you run the test case, the listener receives a Tcp.Received message containing 1244 bytes of data. From this the unit test extracts a NetworkHeader and a VersionMessage, but it's entirely possible there are more messages in this buffer to be extracted and processed, depending on the specifics of the bitcoin protocol; that case is not handled. Instead the buffer is discarded and the test case waits for a second buffer (which may or may not ever arrive) and constructs another message from it, with the hidden expectation that it just happens to be perfectly byte-aligned.
Architecturally I would recommend creating a new actor specifically to handle message framing. This actor would receive the raw bytes and reconstruct completed messages to send on down to the listener.
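To illustrate, here is a rough, untested sketch of such a framing actor. It assumes the bitcoin wire format's fixed 24-byte header with a little-endian payload-length field at byte offset 16; the MessageFramer name is made up, and turning a completed frame into your NetworkMessage type is left to your existing parsing code.

import akka.actor.{Actor, ActorRef, Props}
import akka.io.Tcp
import akka.util.ByteString

class MessageFramer(listener: ActorRef) extends Actor {
  private val HeaderSize = 24
  private var buffer = ByteString.empty

  def receive = {
    case Tcp.Received(data) =>
      buffer = buffer ++ data
      emitCompleteFrames()
  }

  private def emitCompleteFrames(): Unit = {
    var done = false
    while (!done && buffer.length >= HeaderSize) {
      // the payload length is a little-endian uint32 at offset 16 of the header
      val payloadLength = buffer.slice(16, 20).zipWithIndex
        .map { case (b, i) => (b.toLong & 0xffL) << (8 * i) }.sum.toInt
      val frameLength = HeaderSize + payloadLength
      if (buffer.length < frameLength) done = true // wait for more bytes
      else {
        val (frame, rest) = buffer.splitAt(frameLength)
        listener ! frame // parse into a NetworkMessage here before forwarding, if you prefer
        buffer = rest
      }
    }
  }
}

object MessageFramer {
  def props(listener: ActorRef): Props = Props(new MessageFramer(listener))
}

You would then register this actor with the connection instead of the listener directly, e.g. peer.get ! Tcp.Register(context.actorOf(MessageFramer.props(listener))).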

TCP sockets have a property, SO_REUSEADDR, which I believe you can enable here using either
.reuseAddress(true)
on your socket object,
or
in Akka's reference configuration, where I see a socket-options section that includes this property:
socket-options {
so-receive-buffer-size = undefined
so-send-buffer-size = undefined
so-reuse-address = undefined
so-traffic-class = undefined
tcp-keep-alive = undefined
tcp-oob-inline = undefined
tcp-no-delay = undefined
}
I think this is what you were looking for, but I may have misunderstood the question.
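For what it's worth, I believe Akka I/O also lets you pass this option programmatically on the individual commands, roughly like this (a sketch only, reusing the manager, listener and remote values from the question):

import java.net.InetSocketAddress
import akka.io.Tcp

// Sketch: ask for SO_REUSEADDR on both the bound socket and the outgoing connection.
val local = new InetSocketAddress(18333)
manager ! Tcp.Bind(listener, local, options = List(Tcp.SO.ReuseAddress(true)))
manager ! Tcp.Connect(remote, localAddress = Some(local), options = List(Tcp.SO.ReuseAddress(true)))

Whether a Bind and a Connect can really share the same local port this way is the part I am least sure about.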

Related

Stop Akka stream Source when web socket connection is closed by the client

I have an akka-http web socket Route with code similar to:
private val wsReader: Route =
path("v1" / "data" / "ws") {
log.info("Opening websocket connecting ...")
val testSource = Source
.repeat("Hello")
.throttle(1, 1.seconds)
.map(x => {
println(x)
x
})
.map(TextMessage.Strict)
.limit(1000)
extractUpgradeToWebSocket { upgrade ⇒
complete(upgrade.handleMessagesWithSinkSource(Sink.ignore, testSource))
}
}
Everything works fine (the client receives one test message every second). The only problem is that I don't understand how to stop/close the Source (testSource) if the client closes the web socket connection.
You can see that the source continues to produce elements (see the println) even when the web socket is down.
How can I detect a client disconnection?
handleMessagesWithSinkSource is implemented as:
/**
* The high-level interface to create a WebSocket server based on "messages".
*
* Returns a response to return in a request handler that will signal the
* low-level HTTP implementation to upgrade the connection to WebSocket and
* use the supplied inSink to consume messages received from the client and
* the supplied outSource to produce message to sent to the client.
*
* Optionally, a subprotocol out of the ones requested by the client can be chosen.
*/
def handleMessagesWithSinkSource(
inSink: Graph[SinkShape[Message], Any],
outSource: Graph[SourceShape[Message], Any],
subprotocol: Option[String] = None): HttpResponse =
handleMessages(Flow.fromSinkAndSource(inSink, outSource), subprotocol)
This means the sink and the source are independent, and indeed the source should keep producing elements even when the client closes the incoming side of the connection. It should stop when the client resets the connection completely, though.
To stop producing outgoing data as soon as the incoming connection is closed, you may use Flow.fromSinkAndSourceCoupled, so:
val socket = upgrade.handleMessages(
  Flow.fromSinkAndSourceCoupled(inSink, outSource),
  subprotocol = None
)
One way is to use KillSwitches to handle testSource shutdown.
private val wsReader: Route =
path("v1" / "data" / "ws") {
logger.info("Opening websocket connecting ...")
val sharedKillSwitch = KillSwitches.shared("my-kill-switch")
val testSource =
Source
.repeat("Hello")
.throttle(1, 1.seconds)
.map(x => {
println(x)
x
})
.map(TextMessage.Strict)
.limit(1000)
.via(sharedKillSwitch.flow)
extractUpgradeToWebSocket { upgrade ⇒
val inSink = Sink.onComplete(_ => sharedKillSwitch.shutdown())
val outSource = testSource
val socket = upgrade.handleMessagesWithSinkSource(inSink, outSource)
complete(socket)
}
}

Terminate Akka-Http Web Socket connection asynchronously

Web Socket connections in Akka Http are treated as an Akka Streams Flow. This seems like it works great for basic request-reply, but it gets more complex when messages should also be pushed out over the websocket. The core of my server looks kind of like:
lazy val authSuccessMessage = Source.fromFuture(someApiCall)
lazy val messageFlow = requestResponseFlow
.merge(updateBroadcastEventSource)
lazy val handler = codec
.atop(authGate(authSuccessMessage))
.join(messageFlow)
handleWebSocketMessages {
handler
}
Here, codec is a (de)serialization BidiFlow and authGate is a BidiFlow that processes an authorization message and prevents outflow of any messages until authorization succeeds. Upon success, it sends authSuccessMessage as a reply. requestResponseFlow is the standard request-reply pattern, and updateBroadcastEventSource mixes in async push messages.
I want to be able to send an error message and terminate the connection gracefully in certain situations, such as bad authorization, someApiCall failing, or a bad request processed by requestResponseFlow. So basically, it seems like I want to be able to asynchronously complete messageFlow with one final message, even though its other constituent flows are still alive.
Figured out how to do this using a KillSwitch.
Updated version
The old version had the problem that it didn't seem to work when triggered by a BidiFlow stage higher up in the stack (such as my authGate). I'm not sure exactly why, but modeling the shutoff as a BidiFlow itself, placed further up the stack, resolved the issue.
val shutoffPromise = Promise[Option[OutgoingWebsocketEvent]]()
/**
* Shutoff valve for the connection. It is triggered when `shutoffPromise`
* completes, and sends a final optional termination message if that
* promise resolves with one.
*/
val shutoffBidi = {
val terminationMessageSource = Source
.maybe[OutgoingWebsocketEvent]
.mapMaterializedValue(_.completeWith(shutoffPromise.future))
val terminationMessageBidi = BidiFlow.fromFlows(
Flow[IncomingWebsocketEventOrAuthorize],
Flow[OutgoingWebsocketEvent].merge(terminationMessageSource)
)
val terminator = BidiFlow
.fromGraph(KillSwitches.singleBidi[IncomingWebsocketEventOrAuthorize, OutgoingWebsocketEvent])
.mapMaterializedValue { killSwitch =>
shutoffPromise.future.foreach { _ => println("Shutting down connection"); killSwitch.shutdown() }
}
terminationMessageBidi.atop(terminator)
}
Then I apply it just inside the codec:
val handler = codec
.atop(shutoffBidi)
.atop(authGate(authSuccessMessage))
.join(messageFlow)
Old version
val shutoffPromise = Promise[Option[OutgoingWebsocketEvent]]()
/**
* Shutoff valve for the flow of outgoing messages. It is triggered when
* `shutoffPromise` completes, and sends a final optional termination
* message if that promise resolves with one.
*/
val shutoffFlow = {
val terminationMessageSource = Source
.maybe[OutgoingWebsocketEvent]
.mapMaterializedValue(_.completeWith(shutoffPromise.future))
Flow
.fromGraph(KillSwitches.single[OutgoingWebsocketEvent])
.mapMaterializedValue { killSwitch =>
shutoffPromise.future.foreach(_ => killSwitch.shutdown())
}
.merge(terminationMessageSource)
}
Then handler looks like:
val handler = codec
.atop(authGate(authSuccessMessage))
.join(messageFlow via shutoffFlow)

unicast in Play framework and SSE (scala): how do i know which stream to send to?

My app lists hosts, and the list is dynamic and changing. It is based on Akka actors and Server-Sent Events.
When a new client connects, it needs to get the current list to display. But I don't want to push the list to all clients every time a new one connects. So, following the realtime Elasticsearch example, I emulated unicast by creating an (Enumerator, Channel) pair per Connect() and giving it a UUID. When I need to broadcast I map over all of them and update them, with the intent of also being able to unicast to individual clients (and there should be very few of those).
My problem is: how do I get the new client its UUID so it can use it? The flow I am looking for is:
- Client asks for the EventStream
- Server creates a new (Enumerator, Channel) pair with a UUID, and returns the Enumerator and the UUID to the client
- Client asks for the table using the UUID
- Server pushes the table only on the channel corresponding to that UUID
So, how would the client know about its UUID? Had it been a web socket, sending the request would have had the desired result, as it would have reached its own channel. But in SSE the client -> server communication is done on a different channel. Any solutions to that?
Code snippets:
case class Connected(uuid: UUID, enumerator: Enumerator[ JsValue ] )
trait MyActor extends Actor{
var channelMap = new HashMap[UUID,(Enumerator[JsValue], Channel[JsValue])]
def connect() = {
val con = Concurrent.broadcast[JsValue]
val uuid = UUID.randomUUID()
channelMap += (uuid -> con)
Connected(uuid, con._1)
}
...
}
object HostsActor extends MyActor {
...
override def receive = {
case Connect => {
sender ! connect
}
...
}
object Actors {
def hostsStream = {
getStream(getActor("hosts", Props (HostsActor)))
}
def getActor(actorPath: String, actorProps : Props): Future[ActorRef] = {
/* some regular code to create a new actor if the path does not exist, or return the existing one else */
}
def getStream(far: Future[ActorRef]) = {
far flatMap {ar =>
(ar ? Connect).mapTo[Connected].map { stream =>
stream
}
}
}
...
}
object AppController extends Controller {
def getHostsStream = Action.async {
Actors.hostsStream map { ac =>
************************************
** how do i use the UUID here?? **
************************************
Ok.feed(ac.enumerator &> EventSource()).as("text/event-stream")
}
}
I managed to solve it by asynchronously pushing the uuid after returning the channel, with some time in between:
override def receive = {
case Connect => {
val con = connect()
sender ! con
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._ // needed for the 0.1 seconds syntax below
context.system.scheduler.scheduleOnce(0.1 seconds){
unicast(
con.uuid,
JsObject (
Seq (
"uuid" -> JsString(con.uuid.toString)
)
)
)
}
}
}
This achieved its goal: the client got the UUID and was able to cache it and use it to push a getHostsList request to the server:
#stream = new EventSource("/streams/hosts")
#stream.addEventListener "message", (event) =>
data = JSON.parse(event.data)
if data.uuid
#uuid = data.uuid
$.ajax
type: 'POST',
url: "/streams/hosts/" + #uuid + "/sendlist"
success: (data) ->
console.log("sent hosts request to server successfully")
error: () ->
console.log("failed sending hosts request to server")
else
****************************
* *
* handle parsing hosts *
* *
* *
****************************
#view.render()
While this works, I must say I don't like it. Introducing an artificial delay so the client can get the channel and start listening (I tried with no delay, and the client didn't get the UUID) is dangerous, as the message might still be missed if the system gets busier, while making the delay too long hurts reactivity.
If anyone has a solution in which this can be done synchronously, with the UUID returned as part of the original EventSource request, I would be more than happy to demote my solution.
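For what it's worth, one hedged sketch of that synchronous variant: prepend the UUID event to the per-client Enumerator inside connect(), so it is delivered as the first SSE message without any scheduling (same Connected case class and channelMap as above; untested):

import java.util.UUID
import play.api.libs.iteratee.{Concurrent, Enumerator}
import play.api.libs.json.{JsValue, Json}

def connect() = {
  val (enumerator, channel) = Concurrent.broadcast[JsValue]
  val uuid = UUID.randomUUID()
  channelMap += (uuid -> (enumerator, channel))
  // the UUID arrives as the very first event on the stream, before any broadcasts
  val uuidEvent = Enumerator[JsValue](Json.obj("uuid" -> uuid.toString))
  Connected(uuid, uuidEvent andThen enumerator)
}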

Scala Remote Actors stop client from terminating

I am writing a simple chat server, and I want to keep it as simple as possible. My server, listed below, only receives connections and stores them in the clients set. Incoming messages are then broadcast to all clients on that server. The server works with no problem, but on the client side the RemoteActor stops my program from terminating. Is there a way to remove the actor on my client without terminating the actor on the server?
I don't want to use a "one actor per client" model yet.
import actors.{Actor,OutputChannel}
import actors.remote.RemoteActor
object Server extends Actor{
val clients = new collection.mutable.HashSet[OutputChannel[Any]]
def act{
loop{
react{
case 'Connect =>
clients += sender
case 'Disconnect =>
clients -= sender
case message:String =>
for(client <- clients)
client ! message
}
}
}
def main(args:Array[String]){
start
RemoteActor.alive(9999)
RemoteActor.register('server,this)
}
}
my client would then look like this
val server = RemoteActor.select(Node("localhost",9999),'server)
server.send('Connect,messageHandler) //answers will be redirected to the messageHandler
/*do something until quit*/
server ! 'Disconnect
I would suggest placing the client-side code into an actor itself, i.e. not calling alive/register in the main thread
(implied by http://www.scala-lang.org/api/current/scala/actors/remote/RemoteActor$.html)
Something like:
//body of your main:
val client = actor {
alive(..)
register(...)
loop {
receive {
case 'QUIT => exit()
}
}
}
client.start
//then to quit:
client ! 'QUIT
Or similar (sorry, I am not using 2.8 so I might have messed something up - feel free to edit if you make it actually work for you!).

How should I handle blocking operations when using scala actors?

I started learning the scala actors framework about two days ago. To make the ideas concrete in my mind, I decided to implement a TCP based echo server that could handle multiple simultaneous connections.
Here is the code for the echo server (error handling not included):
class EchoServer extends Actor {
private var connections = 0
def act() {
val serverSocket = new ServerSocket(6789)
val echoServer = self
actor { while (true) echoServer ! ("Connected", serverSocket.accept) }
while (true) {
receive {
case ("Connected", connectionSocket: Socket) =>
connections += 1
(new ConnectionHandler(this, connectionSocket)).start
case "Disconnected" =>
connections -= 1
}
}
}
}
Basically, the server is an Actor that handles the "Connected" and "Disconnected" messages. It delegates the connection listening to an anonymous actor that invokes the accept() method (a blocking operation) on the serverSocket. When a connection arrives it informs the server via the "Connected" message and passes it the socket to use for communication with the newly connected client. An instance of the ConnectionHandler class handles the actual communication with the client.
Here is the code for the connection handler (some error handling included):
class ConnectionHandler(server: EchoServer, connectionSocket: Socket)
extends Actor {
def act() {
for (input <- getInputStream; output <- getOutputStream) {
val handler = self
actor {
var continue = true
while (continue) {
try {
val req = input.readLine
if (req != null) handler ! ("Request", req)
else continue = false
} catch {
case e: IOException => continue = false
}
}
handler ! "Disconnected"
}
var connected = true
while (connected) {
receive {
case ("Request", req: String) =>
try {
output.writeBytes(req + "\n")
} catch {
case e: IOException => connected = false
}
case "Disconnected" =>
connected = false
}
}
}
close()
server ! "Disconnected"
}
// code for getInputStream(), getOutputStream() and close() methods
}
The connection handler uses an anonymous actor that waits for requests sent over the socket by calling the readLine() method (a blocking operation) on the input stream of the socket. When a request is received, a "Request" message is sent to the handler, which then simply echoes the request back to the client. If the handler or the anonymous actor experiences problems with the underlying socket, then the socket is closed and a "Disconnected" message is sent to the echo server indicating that the client has been disconnected from the server.
So, I can fire up the echo server and let it wait for connections. Then I can open a new terminal and connect to the server via telnet. I can send it requests and it responds correctly. Now, if I open another terminal and connect to the server, the server registers the connection but fails to start the connection handler for this new connection. When I send messages via any of the existing connections I get no immediate response. Here's the interesting part: when I terminate all but one of the existing client connections and leave client X open, all the responses to the requests I sent via client X are returned. I've done some tests and concluded that the act() method is not being called on subsequent client connections even though I call the start() method when creating the connection handler.
I suppose I'm handling the blocking operations incorrectly in my connection handler. Since a previous connection is handled by a connection handler that has an anonymous actor blocked waiting for a request, I'm thinking that this blocked actor is preventing the other actors (connection handlers) from starting up.
How should I handle blocking operations when using scala actors?
Any help would be greatly appreciated.
From the scaladoc for scala.actors.Actor:
Note: care must be taken when invoking thread-blocking methods other than those provided by the Actor trait or its companion object (such as receive). Blocking the underlying thread inside an actor may lead to starvation of other actors. This also applies to actors hogging their thread for a long time between invoking receive/react.
If actors use blocking operations (for example, methods for blocking I/O), there are several options:
The run-time system can be configured to use a larger thread pool size (for example, by setting the actors.corePoolSize JVM property).
The scheduler method of the Actor trait can be overridden to return a ResizableThreadPoolScheduler, which resizes its thread pool to avoid starvation caused by actors that invoke arbitrary blocking methods.
The actors.enableForkJoin JVM property can be set to false, in which case a ResizableThreadPoolScheduler is used by default to execute actors.
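As a concrete, hedged example of the first and third options, the properties can be set on the command line (-Dactors.corePoolSize=32 -Dactors.enableForkJoin=false) or programmatically before the first actor is started; the pool size below is arbitrary:

object EchoServerMain {
  def main(args: Array[String]): Unit = {
    // must run before any actor is started; the values are illustrative only
    System.setProperty("actors.corePoolSize", "32")
    System.setProperty("actors.enableForkJoin", "false")
    new EchoServer().start()
  }
}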