I am writing a simple chat server, and I want to keep it as simple as possible. My server, listed below, only receives connections and stores them in the clients set. Incoming messages are then broadcast to all clients known to that server. The server works with no problem, but on the client side the RemoteActor keeps my program from terminating. Is there a way to remove the actor on my client without terminating the actor on the server?
I don't want to use a "one actor per client" model yet.
import actors.{Actor, OutputChannel}
import actors.remote.RemoteActor

object Server extends Actor {
  val clients = new collection.mutable.HashSet[OutputChannel[Any]]

  def act {
    loop {
      react {
        case 'Connect =>
          clients += sender
        case 'Disconnect =>
          clients -= sender
        case message: String =>
          for (client <- clients)
            client ! message
      }
    }
  }

  def main(args: Array[String]) {
    start
    RemoteActor.alive(9999)
    RemoteActor.register('server, this)
  }
}
My client would then look like this:
val server = RemoteActor.select(Node("localhost",9999),'server)
server.send('Connect,messageHandler) //answers will be redirected to the messageHandler
/*do something until quit*/
server ! 'Disconnect
I would suggest placing the client-side code into an actor itself, i.e. not calling alive/register in the main thread (implied by http://www.scala-lang.org/api/current/scala/actors/remote/RemoteActor$.html). Something like this:
//body of your main:
val client = actor {
  alive(..)
  register(...)
  loop {
    receive {
      case 'QUIT => exit()
    }
  }
}
client.start

//then to quit:
client ! 'QUIT
Or similar (sorry, I am not using 2.8 so I might have messed something up; feel free to edit if you make it actually work for you!).
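For completeness, here is a rough sketch of how the question's client code might be folded into such an actor. This is an untested assumption along the lines of the answer above: the local port 9998 and the 'chatClient name are arbitrary, and messageHandler stands for the handler actor from the question.

import actors.Actor._
import actors.remote.Node
import actors.remote.RemoteActor._

// body of the client's main:
val client = actor {
  alive(9998)                  // assumed local port so the client can receive replies
  register('chatClient, self)  // assumed name for the client-side actor
  val server = select(Node("localhost", 9999), 'server)
  server.send('Connect, messageHandler) // messageHandler as in the question
  loop {
    receive {
      case 'QUIT =>
        server ! 'Disconnect   // tell the server we are leaving
        exit()                 // terminate only the client-side actor
    }
  }
}

// later, to shut down the client without touching the server:
client ! 'QUIT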
Related
I have an Akka Actor that I want to send "control" messages to.
This Actor's core mission is to listen on a Kafka queue, which is a polling process inside a loop.
I've found that the following simply locks up the Actor and it won't receive the "stop" (or any other) message:
class Worker() extends Actor {
  private var done = false

  def receive = {
    case "stop" =>
      done = true
      kafkaConsumer.close()
    // other messages here
  }

  // Start digesting messages!
  while (!done) {
    kafkaConsumer.poll(100).iterator.map { cr: ConsumerRecord[Array[Byte], String] =>
      // process the record
    }
  }
}
I could wrap the loop in a Thread started by the Actor, but is it ok/safe to start a Thread from inside an Actor? Is there a better way?
Basically you can, but keep in mind that this actor will be blocking, and a rule of thumb is to never block inside actors. If you still want to do this, make sure this actor runs in a separate thread pool from the default one so you don't affect the actor system's performance. Another way to do it would be to have the actor send messages to itself to poll new messages:
1) Receive an order to poll a message from Kafka.
2) Hand over the message to the relevant actor.
3) Send a message to itself to order a new poll.
4) Hand it over...
Code-wise:
case object PollMessage

class Worker() extends Actor {
  private var done = false

  def receive = {
    case PollMessage =>
      poll()
      self ! PollMessage
    case "stop" =>
      done = true
      kafkaConsumer.close()
    // other messages here
  }

  // Start digesting messages!
  def poll() = {
    kafkaConsumer.poll(100).iterator.map { cr: ConsumerRecord[Array[Byte], String] =>
      // process the record
    }
  }
}
I am not sure though that you will ever receive the stop message if you continuously block on the actor.
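Regarding the separate thread pool mentioned above, here is a minimal sketch of what that could look like. The dispatcher name blocking-kafka-dispatcher and the pool size are assumptions, not anything from the question.

# application.conf: a dedicated thread pool for blocking consumers
blocking-kafka-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 4
  }
  throughput = 1
}

import akka.actor.Props

// assign the worker to the dedicated dispatcher when creating it
val worker = system.actorOf(
  Props(new Worker()).withDispatcher("blocking-kafka-dispatcher"),
  "kafka-worker")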
Adding to @Louis F.'s answer: depending on the configuration of your actors, they will either drop the messages they receive while they are busy, or put them in a mailbox (a queue) and process them later (usually in FIFO order). However, in this particular case you are flooding the actor with PollMessage and you have no guarantee that your message will not be dropped, which appears to be happening in your case.
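If flooding is a concern, one possible variation (my own sketch, not from either answer) is to schedule the next PollMessage instead of sending it immediately, so that control messages such as "stop" always get a chance to be processed between polls. The 50 millisecond delay is an arbitrary assumption.

import akka.actor.Actor
import org.apache.kafka.clients.consumer.KafkaConsumer
import scala.concurrent.duration._

// Hypothetical variant of the Worker above, reusing the PollMessage case object;
// how the consumer is constructed is out of scope here.
class ThrottledWorker(kafkaConsumer: KafkaConsumer[Array[Byte], String]) extends Actor {
  import context.dispatcher // execution context used by the scheduler

  override def preStart(): Unit = self ! PollMessage // kick off the first poll

  def receive = {
    case PollMessage =>
      poll()
      // Schedule the next poll instead of an immediate self-send, leaving room
      // for other messages (e.g. "stop") to be processed in between.
      context.system.scheduler.scheduleOnce(50.millis, self, PollMessage)
    case "stop" =>
      kafkaConsumer.close()
      context.stop(self)
  }

  private def poll(): Unit = {
    // same idea as poll() above: kafkaConsumer.poll(100) and process the records
  }
}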
In the current Akka documentation there is a nice example of creating a client-server architecture. I'm creating an Akka actor that can send and receive messages using the bitcoin protocol. So far I've been able to send messages and receive replies to the messages I sent, but I haven't been able to receive unsolicited messages, which the peer-to-peer protocol requires.
I've tried to use Tcp.Bind and Tcp.Connect to listen for unsolicited messages on port 18333 while also being able to send messages to a peer on the network. However, I run into the issue where it either says that the port is already bound (by the Tcp.Connect event) or it can't send messages from that port (due to the Tcp.Bind event).
How can I send messages and receive unsolicited messages on the same port? Am I missing something here?
sealed trait Client extends Actor with BitcoinSLogger {

  /**
   * The address of the peer we are attempting to connect to
   * on the p2p network
   * @return
   */
  def remote: InetSocketAddress

  /**
   * The actor that is listening to all communications between the
   * client and its peer on the network
   * @return
   */
  def listener: ActorRef

  def actorSystem: ActorSystem

  /**
   * The manager is an actor that handles the underlying low level I/O resources (selectors, channels)
   * and instantiates workers for specific tasks, such as listening to incoming connections.
   */
  def manager: ActorRef = IO(Tcp)(actorSystem)

  /**
   * This actor signifies the node we are connected to on the p2p network
   * This is set when we received a [[Tcp.Connected]] message
   */
  private var peer: Option[ActorRef] = None

  def receive = {
    case message: Tcp.Message => message match {
      case event: Tcp.Event =>
        logger.debug("Event: " + event)
        handleEvent(event)
      case command: Tcp.Command =>
        logger.debug("Command: " + command)
        handleCommand(command)
    }
    case unknownMessage => throw new IllegalArgumentException("Unknown message for client: " + unknownMessage)
  }

  /**
   * This function is responsible for handling a [[Tcp.Event]] algebraic data type
   * @param event
   */
  private def handleEvent(event: Tcp.Event) = event match {
    case Tcp.Bound(localAddress) =>
      logger.debug("Actor is now bound to the local address: " + localAddress)
    case Tcp.CommandFailed(w: Tcp.Write) =>
      logger.debug("Client write command failed: " + Tcp.CommandFailed(w))
      logger.debug("O/S buffer was full")
      // O/S buffer was full
      //listener ! "write failed"
    case Tcp.CommandFailed(command) =>
      logger.debug("Client Command failed:" + command)
    case Tcp.Received(data) =>
      logger.debug("Received data from our peer on the network: " + BitcoinSUtil.encodeHex(data.toArray))
      //listener ! data
    case Tcp.Connected(remote, local) =>
      logger.debug("Tcp connection to: " + remote)
      logger.debug("Local: " + local)
      peer = Some(sender)
      peer.get ! Tcp.Register(listener)
      listener ! Tcp.Connected(remote, local)
    case Tcp.ConfirmedClosed =>
      logger.debug("Client received confirmed closed msg: " + Tcp.ConfirmedClosed)
      peer = None
      context stop self
  }

  /**
   * This function is responsible for handling a [[Tcp.Command]] algebraic data type
   * @param command
   */
  private def handleCommand(command: Tcp.Command) = command match {
    case Tcp.ConfirmedClose =>
      logger.debug("Client received connection closed msg: " + Tcp.ConfirmedClose)
      listener ! Tcp.ConfirmedClose
      peer.get ! Tcp.ConfirmedClose
  }
}
case class ClientImpl(remote: InetSocketAddress, network: NetworkParameters,
                      listener: ActorRef, actorSystem: ActorSystem) extends Client {
  manager ! Tcp.Bind(listener, new InetSocketAddress(network.port))
  //this eagerly connects the client with our peer on the network as soon
  //as the case class is instantiated
  manager ! Tcp.Connect(remote)
}

object Client {
  def props(remote: InetSocketAddress, network: NetworkParameters, listener: ActorRef, actorSystem: ActorSystem): Props = {
    Props(classOf[ClientImpl], remote, network, listener, actorSystem)
  }

  def apply(remote: InetSocketAddress, network: NetworkParameters, listener: ActorRef, actorSystem: ActorSystem): ActorRef = {
    actorSystem.actorOf(props(remote, network, listener, actorSystem))
  }

  def apply(network: NetworkParameters, listener: ActorRef, actorSystem: ActorSystem): ActorRef = {
    //val randomSeed = ((Math.random() * 10) % network.dnsSeeds.size).toInt
    val remote = new InetSocketAddress(network.dnsSeeds(0), network.port)
    Client(remote, network, listener, actorSystem)
  }
}
EDIT: Adding test case that is using my actor
"Client" must "connect to a node on the bitcoin network, " +
"send a version message to a peer on the network and receive a version message back, then close that connection" in {
val probe = TestProbe()
val client = Client(TestNet3, probe.ref, system)
val conn : Tcp.Connected = probe.expectMsgType[Tcp.Connected]
val versionMessage = VersionMessage(TestNet3, conn.localAddress.getAddress,conn.remoteAddress.getAddress)
val networkMessage = NetworkMessage(TestNet3, versionMessage)
client ! networkMessage
val receivedMsg = probe.expectMsgType[Tcp.Received](5.seconds)
//~~~~~~~~THIS IS WHERE THE TEST IS FAILING~~~~~~~~~~~~~~~~~~
//the bitcoin protocol states that after exchanging version messages a verack message is sent if the version message is accepted
//this is appearing on wireshark, but not being found by my actor
val verackMessage = probe.expectMsgType[Tcp.Received](2.seconds)
}
EDIT2:
Wireshark output shows that I am receiving these messages, but Akka is not registering them.
The core abstraction of Akka is Actors, so peers in Tcp are just Actors that you can receive messages from AND send messages to.
In this case you can get the ActorRef of your peer by calling sender() once you've received a Tcp.Connected message. In your code you are already saving that ref in peer. It should be as simple as peer.get ! Write(data) to send arbitrary data back to that peer.
Since the connection could break at any point, the docs appear to be using actor supervision to handle this:
class SimpleClient(connection: ActorRef, remote: InetSocketAddress)
  extends Actor with ActorLogging {

  import Tcp._

  // sign death pact: this actor terminates when connection breaks
  context watch connection

  ...
}
Update
(This took me way too long to realize.) The issue you are having is that you are not explicitly handling message framing, i.e. the mechanics of buffer accumulation and message reconstruction. Akka TCP only hands you raw buffers. These buffers do NOT necessarily break on message boundaries, nor do they know anything about the messages of the higher-level protocols, like Bitcoin, that ride on top of TCP.
If you run the test case, the listener receives a Tcp.Received message containing 1244 bytes of data. From this the unit test extracts a NetworkHeader and a VersionMessage, but it's entirely possible there are more messages in this buffer to be extracted and processed, depending on the specifics of the bitcoin protocol; that is not handled. Instead the buffer is discarded, and the test case waits for a second buffer (which may or may not ever arrive) and constructs another message from it, with the hidden expectation that it just happens to be perfectly byte-aligned.
Architecturally I would recommend creating a new actor specifically to handle message framing. This actor would receive the raw bits and reconstruct completed messages to send on down to the listener.
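As a rough sketch of such a framing actor (the 24-byte header size and the extractLength helper are hypothetical stand-ins for the real bitcoin wire format, not code from the question):

import akka.actor.{ Actor, ActorRef }
import akka.io.Tcp
import akka.util.ByteString

// Sits between the TCP connection and the listener and re-emits complete messages only.
class MessageFramer(listener: ActorRef) extends Actor {
  private var buffer = ByteString.empty

  // Hypothetical: assume a fixed-size header that carries the payload length;
  // extractLength must be implemented against the actual wire format.
  private val HeaderSize = 24
  private def extractLength(header: ByteString): Int = ???

  def receive = {
    case Tcp.Received(data) =>
      buffer ++= data                  // accumulate raw bytes
      emitCompleteMessages()
    case other =>
      listener ! other                 // pass everything else through unchanged
  }

  private def emitCompleteMessages(): Unit = {
    while (buffer.length >= HeaderSize &&
           buffer.length >= HeaderSize + extractLength(buffer.take(HeaderSize))) {
      val totalSize = HeaderSize + extractLength(buffer.take(HeaderSize))
      val (message, rest) = buffer.splitAt(totalSize)
      listener ! Tcp.Received(message) // hand exactly one complete message downstream
      buffer = rest
    }
  }
}

The connection's Tcp.Register would then point at an instance of this framer instead of at the listener directly.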
TCP sockets have a property SO_REUSEADDR, which I believe you can enable here using either
.reuseAddress(true)
on your socket object
or
here I see a socket-options array that includes this property:
socket-options {
  so-receive-buffer-size = undefined
  so-send-buffer-size = undefined
  so-reuse-address = undefined
  so-traffic-class = undefined
  tcp-keep-alive = undefined
  tcp-oob-inline = undefined
  tcp-no-delay = undefined
}
I think this is what you were looking for, but I may have misunderstood the question.
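In Akka IO the option can also be passed programmatically when issuing the Bind and Connect commands. A sketch, using the port from the question (whether SO_REUSEADDR alone is enough to share one port between the listening socket and the outgoing connection is not something the question confirms):

import java.net.InetSocketAddress
import akka.io.Tcp

val local = new InetSocketAddress(18333)
val opts  = List(Tcp.SO.ReuseAddress(true)) // enable SO_REUSEADDR on the socket

// listen for unsolicited connections
manager ! Tcp.Bind(listener, local, options = opts)

// connect out from the same local address
manager ! Tcp.Connect(remote, localAddress = Some(local), options = opts)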
I want to implement a client app that first sends a request to a server and then waits for its reply (similar to HTTP).
My client process may look like this:
val topic = async.topic[ByteVector]
val client = topic.subscribe
Here is the API:
trait Client {
  val incoming = tcp.connect(...)(client)
  val reqBus = topic.publish
  def ask(req: ByteVector): Task[Throwable \/ ByteVector] = {
    (tcp.writes(req).flatMap(_ => tcp.reads(1024))).to(reqBus)
    ???
  }
}
Then, how do I implement the remaining part of ask?
Usually, the implementation is done by publishing the message via a sink and then awaiting some sort of reply on some source, like your topic.
Actually we have a lot of idioms for this in our code:
def reqRply[I, O, O2](src: Process[Task, I], sink: Sink[Task, I], reply: Process[Task, O])(pf: PartialFunction[O, O2]): Process[Task, O2] = {
  merge.mergeN(Process(reply, (src to sink).drain)).collectFirst(pf)
}
Essentially this first hooks into the reply stream to await any resulting O confirming that our request was sent. Then we publish the message I and consult pf for any incoming O to be eventually translated to O2, and then terminate.
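Purely as an illustration, ask could then be wired up roughly like this, under the assumption that replies arrive on the same topic the client subscribed to; the error handling here is one arbitrary choice among many:

import scalaz.{ -\/, \/-, \/ }
import scalaz.concurrent.Task
import scalaz.stream.Process
import scodec.bits.ByteVector

def ask(req: ByteVector): Task[Throwable \/ ByteVector] =
  reqRply(Process.emit(req), reqBus, client) { case resp => resp }
    .runLast   // Task[Option[ByteVector]]: the first matching reply, if any
    .attempt   // Task[Throwable \/ Option[ByteVector]]
    .map {
      case -\/(err)      => -\/(err)                                    // stream failed
      case \/-(None)     => -\/(new NoSuchElementException("no reply")) // stream ended without a reply
      case \/-(Some(bv)) => \/-(bv)                                     // got a reply
    }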
I have an Actor that is similar to the following Actor in function.
case class SupervisingActor() extends Actor {
  protected val processRouter = //round robin router to remote workers

  override def receive = {
    case StartProcessing => { //sent from main or someplace else
      for (some specified number of process actions) {
        processRouter ! WorkInstructions
      }
    }
    case ProcessResults(resultDetails) => { //sent from the remote workers when they complete their work
      //do something with the results
      if (all of the results have been received) {
        //*********************
        self ! EndProcess //This is the line in question
        //*********************
      }
    }
    case EndProcess => {
      //do some reporting
      //shutdown the ActorSystem
    }
  }
}
How can I verify the EndProcess message is sent to self in tests?
I'm using ScalaTest 2.0.M4, Akka 2.0.3 and Scala 2.9.2.
An actor sending to itself is very much an intimate detail of how that actor performs a certain function, hence I would rather test the effect of that message than whether or not that message has been delivered. I'd argue that sending to self is the same as having a private helper method on an object in classical OOP: you also do not test whether that one is invoked, you test whether the right thing happened in the end.
As a side note: you could implement your own message queue type (see https://doc.akka.io/docs/akka/snapshot/mailboxes.html#creating-your-own-mailbox-type) and have that allow the inspection or tracing of message sends. The beauty of this approach is that it can be inserted purely by configuration into the actor under test.
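A sketch of what such a mailbox might look like against a recent Akka version; the MailboxType API has changed between releases, so treat the names below as assumptions, and TracingMailbox plus its recorded queue are made up for illustration:

import java.util.concurrent.ConcurrentLinkedQueue
import akka.actor.{ ActorRef, ActorSystem }
import akka.dispatch.{ Envelope, MailboxType, MessageQueue, UnboundedMailbox }
import com.typesafe.config.Config

object TracingMailbox {
  // Test code can inspect this to see every message that was enqueued.
  val recorded = new ConcurrentLinkedQueue[Any]()
}

class TracingMailbox(settings: ActorSystem.Settings, config: Config) extends MailboxType {
  override def create(owner: Option[ActorRef], system: Option[ActorSystem]): MessageQueue =
    new UnboundedMailbox.MessageQueue {
      override def enqueue(receiver: ActorRef, handle: Envelope): Unit = {
        TracingMailbox.recorded.add(handle.message) // record the message for the test
        super.enqueue(receiver, handle)             // then deliver it as usual
      }
    }
}

It would then be wired in purely through configuration, e.g. a mailbox entry whose mailbox-type is the fully qualified class name, selected for the actor under test via Props(...).withMailbox(...).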
In the past, I have overridden the implementation of ! so that I could add debugging/logging. Just call super.! when you're done, and be extra careful not to do anything that would throw an exception.
I had the same issue with an FSM actor. I tried setting up a custom mailbox as per the accepted answer, but after a few minutes I didn't get it working. I also attempted to override the tell operator as per another answer, but that was not possible because self is a final val. Eventually I just replaced:
self ! whatever
with:
sendToSelf(whatever)
and added that method into the actor as:
// test can override this
protected def sendToSelf(msg: Any) {
  self ! msg
}
Then in the test I overrode the method to capture the self-sent message and sent it back into the FSM to complete the work:
@transient var sent: Seq[Any] = Seq.empty

val fsm = TestFSMRef(new MyActor(x, yz) {
  override def sendToSelf(msg: Any) {
    sent = sent :+ msg
  }
})

// yes this is clunky but it works
var wait = 100
while (sent.isEmpty && wait > 0) {
  Thread.sleep(10)
  wait = wait - 10
}
fsm ! sent.head
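As a small aside, if the test extends akka.testkit.TestKit, the sleep loop above could probably be replaced with the built-in awaitCond helper; a sketch:

import scala.concurrent.duration._

// poll until the overridden sendToSelf has captured something, or give up after ~1 second
awaitCond(sent.nonEmpty, max = 1.second, interval = 10.millis)
fsm ! sent.head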
I started learning the scala actors framework about two days ago. To make the ideas concrete in my mind, I decided to implement a TCP based echo server that could handle multiple simultaneous connections.
Here is the code for the echo server (error handling not included):
class EchoServer extends Actor {
  private var connections = 0

  def act() {
    val serverSocket = new ServerSocket(6789)
    val echoServer = self
    actor { while (true) echoServer ! ("Connected", serverSocket.accept) }
    while (true) {
      receive {
        case ("Connected", connectionSocket: Socket) =>
          connections += 1
          (new ConnectionHandler(this, connectionSocket)).start
        case "Disconnected" =>
          connections -= 1
      }
    }
  }
}
Basically, the server is an Actor that handles the "Connected" and "Disconnected" messages. It delegates the connection listening to an anonymous actor that invokes the accept() method (a blocking operation) on the serverSocket. When a connection arrives it informs the server via the "Connected" message and passes it the socket to use for communication with the newly connected client. An instance of the ConnectionHandler class handles the actual communication with the client.
Here is the code for the connection handler (some error handling included):
class ConnectionHandler(server: EchoServer, connectionSocket: Socket)
  extends Actor {

  def act() {
    for (input <- getInputStream; output <- getOutputStream) {
      val handler = self
      actor {
        var continue = true
        while (continue) {
          try {
            val req = input.readLine
            if (req != null) handler ! ("Request", req)
            else continue = false
          } catch {
            case e: IOException => continue = false
          }
        }
        handler ! "Disconnected"
      }
      var connected = true
      while (connected) {
        receive {
          case ("Request", req: String) =>
            try {
              output.writeBytes(req + "\n")
            } catch {
              case e: IOException => connected = false
            }
          case "Disconnected" =>
            connected = false
        }
      }
    }
    close()
    server ! "Disconnected"
  }

  // code for getInputStream(), getOutputStream() and close() methods
}
The connection handler uses an anonymous actor that waits for requests sent over the socket by calling the readLine() method (a blocking operation) on the input stream of the socket. When a request is received, a "Request" message is sent to the handler, which then simply echoes the request back to the client. If the handler or the anonymous actor experiences problems with the underlying socket, then the socket is closed and a "Disconnected" message is sent to the echo server, indicating that the client has been disconnected from the server.
So, I can fire up the echo server and let it wait for connections. Then I can open a new terminal and connect to the server via telnet. I can send it requests and it responds correctly. Now, if I open another terminal and connect to the server the server registers the connection but fails to start the connection handler for this new connection. When I send it messages via any of the existing connections I get no immediate response. Here's the interesting part. When I terminate all but one of the existing client connections and leave client X open, then all the responses to the request I sent via client X are returned. I've done some tests and concluded that the act() method is not being called on subsequent client connections even though I call the start() method on creating the connection handler.
I suppose I'm handling the blocking operations incorrectly in my connection handler. Since a previous connection is handled by a connection handler that has an anonymous actor blocked waiting for a request I'm thinking that this blocked actor is preventing the other actors (connection handlers) from starting up.
How should I handle blocking operations when using scala actors?
Any help would be greatly appreciated.
From the scaladoc for scala.actors.Actor:
Note: care must be taken when invoking thread-blocking methods other than those provided by the Actor trait or its companion object (such as receive). Blocking the underlying thread inside an actor may lead to starvation of other actors. This also applies to actors hogging their thread for a long time between invoking receive/react.
If actors use blocking operations (for example, methods for blocking I/O), there are several options:
The run-time system can be configured to use a larger thread pool size (for example, by setting the actors.corePoolSize JVM property).
The scheduler method of the Actor trait can be overridden to return a ResizableThreadPoolScheduler, which resizes its thread pool to avoid starvation caused by actors that invoke arbitrary blocking methods.
The actors.enableForkJoin JVM property can be set to false, in which case a ResizableThreadPoolScheduler is used by default to execute actors.
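For what it's worth, the first and third options come down to JVM system properties, which can be passed on the command line or set programmatically before any actor is started. A small sketch reusing the EchoServer from the question (the pool size of 32 is an arbitrary assumption):

object EchoServerMain {
  def main(args: Array[String]): Unit = {
    // Equivalent to passing -Dactors.corePoolSize=32 -Dactors.enableForkJoin=false to the JVM.
    // These must be set before the first actor starts so the scheduler picks them up.
    System.setProperty("actors.corePoolSize", "32")
    System.setProperty("actors.enableForkJoin", "false")

    (new EchoServer).start()
  }
}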