How to create multiple chatrooms using WebSockets in Scala?

I'm trying to learn how to use WebSockets and Akka using the Chat example in the Play for Scala book.
In the book, there is one "ChatRoom" being created, and that's instantiated in the Chat controller with something as simple as this:
val room = Akka.system.actorOf(Props[ChatRoom])
I want to expand this example and have multiple chat rooms available instead of just one. A user can provide a string as a chatroom "name", and that would create a new chatroom. Everyone who joins a given chatroom would share a broadcast with each other, but not with people in other chatrooms. Very similar to IRC.
My questions are the following:
1: How do I create a ChatRoom with a unique name if one does not already exist?
2: How can I check whether a ChatRoom with a given name already exists and get a reference to it?
The chatroom name will come via either the URL or a query parameter, so that part is trivial. I'm just not entirely sure how to uniquely identify the Akka ChatRoom and later retrieve that actor by name.

You can name actors in Akka, so instead of having:
Akka.system.actorOf(Props[ChatRoom])
You would have:
Akka.system.actorOf(Props[ChatRoom],"room1")
Then, depending on the Akka version you're using, use either Akka.system.actorFor or Akka.system.actorSelection to get a reference to the desired chat room. Note that named top-level actors live under the /user guardian, so the selection path is "/user/room1".
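Putting both of your questions together, here is a minimal get-or-create sketch (assuming Akka 2.3+ for actorSelection/resolveOne; note that two concurrent creates for the same name can still race, in which case actorOf throws InvalidActorNameException):
import akka.actor.{ActorNotFound, ActorRef, ActorSystem, Props}
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._

def getOrCreateRoom(system: ActorSystem, name: String): Future[ActorRef] = {
  import system.dispatcher
  implicit val timeout: Timeout = Timeout(2.seconds)
  // Named top-level actors live under the /user guardian
  system.actorSelection(s"/user/$name").resolveOne().recover {
    case _: ActorNotFound => system.actorOf(Props[ChatRoom], name)
  }
}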

Use the Akka EventBus trait. You can combine EventBus with LookupClassification to implement topic-based publish-subscribe, where the "topic" is the room ID and the subscribers are either the actors for each room or the WebSocket actors for each client.
import akka.actor.ActorRef
import akka.event.EventBus
import akka.event.LookupClassification

final case class MsgEnvelope(topic: String, payload: Any)

/**
 * Publishes the payload of the MsgEnvelope when the topic of the
 * MsgEnvelope equals the String specified when subscribing.
 */
class LookupBusImpl extends EventBus with LookupClassification {
  type Event = MsgEnvelope
  type Classifier = String
  type Subscriber = ActorRef

  // used for extracting the classifier from the incoming events
  override protected def classify(event: Event): Classifier = event.topic

  // invoked for each event, for every subscriber registered for the
  // event's classifier
  override protected def publish(event: Event, subscriber: Subscriber): Unit = {
    subscriber ! event.payload
  }

  // must define a full order over the subscribers, expressed as expected from
  // `java.lang.Comparable.compare`
  override protected def compareSubscribers(a: Subscriber, b: Subscriber): Int =
    a.compareTo(b)

  // determines the initial size of the index data structure
  // used internally (i.e. the expected number of different classifiers)
  override protected def mapSize: Int = 128
}
Then register your actors. (In reality you would keep a count of how many users are in each room, subscribe an actor when a user enters the room, and unsubscribe and kill the actor when no one is left in it; see the sketch after the example below.)
val lookupBus = new LookupBusImpl
lookupBus.subscribe(room1Actor, "room1")
lookupBus.subscribe(room2Actor, "room2")
Messages will then be routed based on the room ID:
lookupBus.publish(MsgEnvelope("room1", "hello room1"))
lookupBus.publish(MsgEnvelope("room2", "hello room2"))
lookupBus.publish(MsgEnvelope("room3", "hello dead letter"))
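As for the cleanup mentioned above, a minimal sketch: when the last user leaves a room, drop its subscription and stop the room actor so empty rooms don't accumulate (this assumes room1Actor holds the room's ActorRef):
import akka.actor.PoisonPill

lookupBus.unsubscribe(room1Actor, "room1") // or lookupBus.unsubscribe(room1Actor) to drop all topics
room1Actor ! PoisonPill // stop the now-empty room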

Related

Can the state stores in Kafka be shared across streams?

We have a scenario where a state store populated by one KStream needs to be accessed from another KStream. Is there any way to achieve this?
They can be accessed with Interactive Queries.
Across applications, or across instances of the same application, you need to use RPC calls, such as by adding an HTTP or gRPC server.
https://docs.confluent.io/platform/current/streams/developer-guide/interactive-queries.html
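As a minimal sketch of an interactive query (in Scala; it assumes a running KafkaStreams instance called streams and the "myProcessorState" store built in the example below, and "someKey" is just a placeholder):
import org.apache.kafka.streams.StoreQueryParameters
import org.apache.kafka.streams.state.{QueryableStoreTypes, ReadOnlyKeyValueStore}

// look up the store by the name it was registered under
val store: ReadOnlyKeyValueStore[String, String] =
  streams.store(
    StoreQueryParameters.fromNameAndType(
      "myProcessorState",
      QueryableStoreTypes.keyValueStore[String, String]()))

// read-only access from anywhere in the same application instance
val value = store.get("someKey")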
You can attach the same state store to multiple processors if you use the Processor API, but also if you use the Processor API Integration in the DSL.
There are two ways to do that (see javadocs). You can either manually add the store to the processors, like:
// create store
StoreBuilder<KeyValueStore<String, String>> keyValueStoreBuilder =
    Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("myProcessorState"),
        Serdes.String(),
        Serdes.String());

// add store to the topology
builder.addStateStore(keyValueStoreBuilder);

// connect the store to the processor by passing its name to process()
KStream outputStream = inputStream.process(new ProcessorSupplier() {
    public Processor get() {
        return new MyProcessor();
    }
}, "myProcessorState");
or you can implement stores() on the passed-in ProcessorSupplier:
class MyProcessorSupplier implements ProcessorSupplier {

    // supply the processor
    public Processor get() {
        return new MyProcessor();
    }

    // provide the store(s) that will be added and connected to the associated
    // processor; the store name from the builder ("myProcessorState") is used
    // to access the store later via the ProcessorContext
    public Set<StoreBuilder> stores() {
        StoreBuilder<KeyValueStore<String, String>> keyValueStoreBuilder =
            Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("myProcessorState"),
                Serdes.String(),
                Serdes.String());
        return Collections.singleton(keyValueStoreBuilder);
    }
}
These are examples for KStream#process(), but it works similarly for the family of KStream#*transform*() methods.
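For completeness, here is a rough sketch of what MyProcessor itself might look like, written in Scala against the newer org.apache.kafka.streams.processor.api interfaces (the processing logic is an assumption; only the store name comes from the examples above):
import org.apache.kafka.streams.processor.api.{Processor, ProcessorContext, Record}
import org.apache.kafka.streams.state.KeyValueStore

class MyProcessor extends Processor[String, String, String, String] {
  private var context: ProcessorContext[String, String] = _
  private var store: KeyValueStore[String, String] = _

  override def init(context: ProcessorContext[String, String]): Unit = {
    this.context = context
    // retrieve the connected store by the name used in the StoreBuilder
    store = context.getStateStore("myProcessorState")
  }

  override def process(record: Record[String, String]): Unit = {
    store.put(record.key, record.value) // state shared with other processors attached to this store
    context.forward(record)             // pass the record downstream unchanged
  }
}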

Spark throws Not Serializable Exception inside a foreachRDD operation

I'm trying to implement an observer pattern using Scala and Spark Streaming. The idea is that whenever I receive a record from the stream (from Kafka), I notify the observer by calling the method "notifyObservers" inside the closure. Here's the code:
The stream is provided by the Kafka utils.
The method notifyObservers is defined in an abstract class, following the rules of the pattern.
The error, I think, is related to the fact that methods can't be serialized.
Am I thinking correctly? And if so, what kind of solution should I follow?
Thanks
def onMessageConsumed() = {
  stream.foreachRDD(rdd => {
    rdd.foreach(consumerRecord => {
      val record = new Record[T](consumerRecord.topic(),
        consumerRecord.value())
      // notify observers with the record to compute
      notifyObservers(record)
    })
  })
}
Yes, the classes used in code that is sent to the executors (executed in foreach, etc.) should implement the Serializable interface.
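For instance, a minimal sketch of a serializable observer hierarchy (the Observer/Observable names and shapes here are assumptions for illustration, not your actual classes):
trait Observer[T] extends Serializable {
  def update(record: Record[T]): Unit
}

abstract class Observable[T] extends Serializable {
  private var observers = List.empty[Observer[T]]

  def addObserver(o: Observer[T]): Unit = observers ::= o

  // called from inside the Spark closure, so this class and every
  // registered observer is shipped to the executors and must serialize
  def notifyObservers(record: Record[T]): Unit = observers.foreach(_.update(record))
}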
Also, if your notification code requires a connection to some external resource, you need to wrap the foreach into foreachPartition so the connection is created once per partition, something like this:
stream.foreachRDD(rdd => {
  rdd.foreachPartition(partition => {
    // set up the connection to the external component here,
    // once per partition rather than once per record
    partition.foreach(consumerRecord => {
      val record = new Record[T](consumerRecord.topic(),
        consumerRecord.value())
      notifyObservers(record)
    })
    // close the connection to the external component here
  })
})

How to handle failures when one of the router actors throws an exception

Say I have an email sender actor that is responsible for sending out emails.
I want to limit the # of actors created for sending emails, so I created a router.
class EmailSender extends Actor {
  // pool of four routees that do the actual sending
  // (RoundRobinPool in Akka 2.3+; older versions used RoundRobinRouter)
  val router = context.actorOf(RoundRobinPool(4).props(EmailSender.props))

  def receive = {
    case SendEmail(emailTo: String, ..) =>
      router ! SendEmailMessage(emailTo, ..)
    case ...
  }
}
I want to understand 2 things here:
If sending an email fails in one of the routee actors, how will EmailSender get notified, and will I get the exact email that failed?
If the email sending fails within the Routee actors, the default supervision strategy is to restart the actor.
So you should be able to hook into the preRestart method and see which message caused the EmailSender to fail.
class EmailSender extends Actor {
  def receive = {
    case SendEmail(emailTo: String, ..) =>
      router ! SendEmailMessage(emailTo, ..)
    case ...
  }

  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    // log the failed message, or send it to a failed-message processor
    if (message.isDefined) {
      failedEmailsProcessor ! message.get
    }
  }
}
Note: if the supervision strategy is AllForOneStrategy, then preRestart would be called for all the child actors. So it's better to have OneForOneStrategy here, so that preRestart is called only for the failed actor.
In the class that contains the router, put the emails into a pending list.
When an email has been sent out successfully, the routee sends back a success message and the router class removes the email from the pending list; on a failure message, the router can try once more, depending on your business logic. A sketch of this idea follows.
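A minimal sketch of that acknowledgement scheme (all class and message names here are invented for illustration):
import akka.actor.{Actor, ActorRef}

case class SendEmail(emailTo: String)
case class EmailSent(emailTo: String)
case class EmailFailed(emailTo: String)

class EmailSupervisor(router: ActorRef) extends Actor {
  // emails handed to the router but not yet acknowledged
  private var pending = Set.empty[String]

  def receive = {
    case SendEmail(to) =>
      pending += to
      router ! SendEmail(to)
    case EmailSent(to) =>
      pending -= to // acknowledged, drop it from the pending list
    case EmailFailed(to) if pending(to) =>
      router ! SendEmail(to) // retry once; the real retry policy is business logic
      pending -= to
  }
}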

How to implement 'Private Chat' module using akka, scala, websockets in play framework?

I have to implement a Chat module to enable private chatting between users. I have to do this in the Play Framework using Scala, Akka and java.net.*
I found several examples on the net demonstrating the use of WebSockets, but none that showed how to implement a Chat module with them. I have an idea of what I have to do, but I'm totally confused about what the structure of the objects and classes should be and how I should start.
Please, if anyone can help me with this or refer me to a good article or paper that can guide me all the way through the implementation. Thank you.
I did it in Java. This is what I modified from the example:
public class ChatRoom extends UntypedActor {

    // Added a hashmap to keep references to actors (rooms).
    // (might be put in another class)
    public static HashMap<String, ActorRef> openedChats = new HashMap<String, ActorRef>();

    // Added a unique identifier to know which room to join
    final String chatId;

    public ChatRoom(String chatId) {
        this.chatId = chatId;
    }

    public static void join(final User user, final String chatId, WebSocket.In<JsonNode> in, WebSocket.Out<JsonNode> out) throws Exception {
        final ActorRef chatRoom;
        // Find the right room to bind to in the hashmap
        if (openedChats.containsKey(chatId)) {
            chatRoom = openedChats.get(chatId);
        // Or create it and add it to the hashmap
        } else {
            chatRoom = Akka.system().actorOf(new Props().withCreator(new UntypedActorFactory() {
                public UntypedActor create() {
                    return new ChatRoom(chatId);
                }
            }));
            openedChats.put(chatId, chatRoom);
        }

        // Send the Join message to the room
        String result = (String) Await.result(ask(chatRoom, new Join(user.getId() + "", out), 10000), Duration.create(10, SECONDS));

        // ..... Nothing to do in the rest
These are only the main modifications; you also have to adapt the JavaScript and the routes file.
Feel free to ask questions.
Have a look at the official sample in the Play Framework repository:
https://github.com/playframework/playframework/tree/master/samples/scala/websocket-chat

Mapper's toForm method does nothing on submit?

I have a very simple snippet to add a new row to the books table in the database:
def add = Book.toForm(Full("Add"), { _.save })
Calling this snippet in my template generates a form just fine, and submitting the form gives me a POST request, but nothing happens: it never tries to talk to the database, and no errors or exceptions occur:
09:03:53.631 [865021464#qtp-2111575312-18] INFO net.liftweb.util.TimeHelpers - Service request (POST) /books/ returned 200, took 531 Milliseconds
I am not sure if my model's save method is just not being called, or if the save method is not working. Based on examples in the book "Lift in Action", I am under the impression that the default Mapper save method should just work, and that is what I am using right now. My model class is simply:
class Book extends LongKeyedMapper[Book] with IdPK {
  def getSingleton = Book

  object name extends MappedString(this, 100)
}

object Book extends Book with LongKeyedMetaMapper[Book] {
  override def dbTableName = "books"
}
Am I missing something in my model, or does this appear to be correct? If this should work, how do I debug it not working?
Forms don't work if you don't have a session (so you need cookies enabled). The session maps the form name to a function on the server. Unfortunately, Lift doesn't log an error when the form's handler function isn't found.