Is this the correct way to implement a GMail widget in Lift? - scala

I'm trying to implement a GMail widget similar to those on iGoogle or Netvibes, to practise using Comet in the Lift web framework.
Currently I have the following code; it's short and works amazingly well.
But I'm not sure it is the best way to implement this, because retrieving mails from GMail is a time-consuming job, and the code below has only one GMailListener, which blocks while it is getting mails from GMail.
I guess that means that if there are two users on my website, for example UserA and UserB,
then even though the code is thread safe, if they are both on the page that uses this Comet, UserB still has to wait until UserA's mail is processed before getting his own result, right?
What is the best way to avoid the blocking?
import net.liftweb.actor.LiftActor
import net.liftweb.util.Schedule
import net.liftweb.util.Helpers._
import net.liftweb.http.CometActor
import net.liftweb.http.js.JsCmds.SetHtml
import net.liftweb.http.js.jquery.JqJsCmds._

case class FetchGMail(userID: Int, sender: CometActor)
case class NewStuffs(mails: List[Stuff])

object GMailListener extends LiftActor
{
    def getMails(userID: Int) = {
        // Get Mails from GMail
    }

    def messageHandler = {
        case FetchGMail(userID, sender) =>
            println("Get FetchMail request")
            sender ! NewStuffs(getMails(userID))
            Schedule.schedule(this, FetchGMail(userID, sender), 5 minutes)
    }
}

class Inbox extends CometActor with JSImplicit
{
    def render = <div>Empty Inbox</div>

    GMailListener ! FetchGMail(1, this)

    override def lowPriority = {
        case NewStuffs(mails) =>
            println("get new mails")
            partialUpdate(AppendHtml("mails", <div>{mails}</div>))
    }
}

Just keep in mind that an actor can only process one message at a time and will only consume resources when it is processing messages. Your GmailListener is a singleton, so it could be a bottleneck right now, but there is no reason you can't create an instance of GmailListener for each user. Each instance will only wake up and utilize a thread to do Gmail lookups when your schedule call dictates. Just make sure that you shut the corresponding GmailListener down when the Inbox shuts down. Take a look at net.liftweb.http.CometListener which I think should help with that.
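For illustration, here is a rough, untested sketch of that per-user setup, reusing the FetchGMail/NewStuffs messages and the Stuff type from the question; localSetup/localShutdown are CometActor's lifecycle hooks and are used here to start the listener and stop its rescheduling when the comet goes away:

import net.liftweb.actor.LiftActor
import net.liftweb.util.Schedule
import net.liftweb.util.Helpers._
import net.liftweb.http.CometActor
import net.liftweb.http.js.jquery.JqJsCmds.AppendHtml

// One listener per Inbox comet: a slow GMail fetch for one user no longer
// delays the fetches of every other user.
class GMailListener(userID: Int, inbox: CometActor) extends LiftActor {
    @volatile private var active = true

    def shutdown(): Unit = active = false

    private def getMails(): List[Stuff] = {
        // Get mails from GMail for this user only (placeholder)
        Nil
    }

    def messageHandler = {
        case FetchGMail(_, _) if active =>
            inbox ! NewStuffs(getMails())
            Schedule.schedule(this, FetchGMail(userID, inbox), 5 minutes)
    }
}

class Inbox extends CometActor with JSImplicit {
    // hard-coded user id, as in the question
    private lazy val listener = new GMailListener(1, this)

    def render = <div id="mails">Empty Inbox</div>

    override protected def localSetup(): Unit = {
        super.localSetup()
        listener ! FetchGMail(1, this)   // kick off the polling loop for this user
    }

    override protected def localShutdown(): Unit = {
        listener.shutdown()              // stop rescheduling once the comet shuts down
        super.localShutdown()
    }

    override def lowPriority = {
        case NewStuffs(mails) =>
            partialUpdate(AppendHtml("mails", <div>{mails}</div>))
    }
}

Since each Inbox now owns its own LiftActor, a blocking fetch only delays that user's updates rather than everyone's.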

Related

How to retrieve a whole mailbox with Spring Integration Mail

I'm trying to create a piece of code that retrieves the whole content of an INBOX, using Spring Integration Mail.
By default, an ImapIdleChannelAdapter will only fetch recent, unseen, unanswered, etc. emails.
So when I created my adapter, I tried to use a special strategy that retrieves all emails that do not carry a given unique user flag (so basically, all the mails I haven't retrieved yet).
Here is my adapter:
val imapIdleAdapter = Mail.imapIdleAdapter(imapUrl(user, pw, provider))
    .javaMailProperties { p: PropertiesBuilder -> p.put("mail.debug", "false") }
    .userFlag(uniqueFlag)
    .shouldMarkMessagesAsRead(false)
    .searchTermStrategy(GetAllMailsStrategy(uniqueFlag))
    .shouldReconnectAutomatically(true)
    .autoCloseFolder(false)

val integrationFlowBuilder = IntegrationFlows.from(imapIdleAdapter)
    .handle { message -> onNewEmail(user, uniqueFlag, message as org.springframework.messaging.Message<Message>) }
val flow: IntegrationFlow = integrationFlowBuilder.get()
flowContext.registration(flow).register()
And here is my strategy:
inner class GetAllMailsStrategy(private val uniqueFlag: String) : SearchTermStrategy {
    override fun generateSearchTerm(supportedFlags: Flags?, folder: Folder?): SearchTerm {
        val userFlag = Flags()
        userFlag.add(uniqueFlag)
        return NotTerm(FlagTerm(userFlag, true))
    }
}
This piece of code does work on small inboxes. As soon as I try to retrieve mails from an inbox with thousands of mails, at some point it just stops (sometimes with a FolderClosedException, even though I set autoCloseFolder to false, and sometimes without any exception or log...), and then it won't retrieve the missing emails even if I start it all over with the same unique user flag. It is as if all the mails were flagged even though I never retrieved them. It does work on all new incoming emails though...
Any idea on what strategy I should use? Is there a way to get all the mails once, without flagging them?
Should I use Spring mail only to get new incoming mail, and something else for the task of retrieving all the mails once?
Thanks
I never did it, but I think you are right: you should use the ImapIdleChannelAdapter only for new (in IMAP terms, RECENT) messages, and a separate ImapMailReceiver instance for the initial call with your custom SearchTermStrategy.
The folder might be closed for some other reason, e.g. too many messages taking too long to process.
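For the initial pass, something along these lines might work. It is an untested sketch, written in Scala like the rest of this page, and it assumes ImapMailReceiver exposes setSearchTermStrategy, setShouldMarkMessagesAsRead, setJavaMailProperties and receive() as in Spring Integration 5.x; in a real application you would register the receiver as a Spring bean rather than calling afterPropertiesSet() by hand:

import java.util.Properties
import javax.mail.{Flags, Folder, Message}
import javax.mail.search.{FlagTerm, NotTerm, SearchTerm}
import org.springframework.integration.mail.{ImapMailReceiver, SearchTermStrategy}

object InitialInboxFetch {

    // Same idea as the Kotlin strategy above: everything not yet carrying our user flag.
    class GetAllMailsStrategy(uniqueFlag: String) extends SearchTermStrategy {
        override def generateSearchTerm(supportedFlags: Flags, folder: Folder): SearchTerm = {
            val userFlag = new Flags()
            userFlag.add(uniqueFlag)
            new NotTerm(new FlagTerm(userFlag, true))
        }
    }

    // One-off fetch of the existing mails; the ImapIdleChannelAdapter keeps handling new mail.
    def fetchBacklog(imapUrl: String, uniqueFlag: String)(onEmail: Message => Unit): Unit = {
        val receiver = new ImapMailReceiver(imapUrl)
        receiver.setSearchTermStrategy(new GetAllMailsStrategy(uniqueFlag))
        receiver.setShouldMarkMessagesAsRead(false)

        val props = new Properties()
        props.put("mail.debug", "false")
        receiver.setJavaMailProperties(props)
        receiver.afterPropertiesSet()

        // receive() opens the folder, applies the search term and returns the matching messages
        receiver.receive().foreach {
            case m: Message => onEmail(m)
            case _          => () // ignore anything the receiver wraps differently
        }
    }
}

The ImapIdleChannelAdapter flow from the question keeps handling new incoming mail; this receiver would only be invoked once at startup to drain the backlog.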

Playframework User Actor with User Session

I'm pretty new to Scala, the Play Framework and Akka. In the project I'm currently working on, the user of the web application should be able to ask the server to do several things (like starting a particular computation) asynchronously, and when the server is done it should notify the user, also asynchronously. I address this with a WebSocket connection, which is established when the user first connects to the application; the WebSocket is handled by a UserActor attached to the user's session:
def ws = WebSocket.tryAcceptWithActor[JsValue, JsValue] { implicit request =>
    Future.successful(request.session.get(UID) match {
        case None => Left(Forbidden)
        case Some(uid) =>
            Logger.info("WebSocket has accepted the request with uid " + uid)
            Right(UserActor.props(uid))
    })
}
Currently, the only thing the UserActor does is receiving messages from the WebSocket as JsValue. The UID of the session is generated when requesting index:
def index = Action { implicit request =>
    val uid = request.session.get(UID).getOrElse {
        counter += 1
        counter.toString
    }
    Ok(views.html.index(uid)).withSession {
        Logger.debug("create uid " + uid)
        request.session + (UID -> uid)
    }
}
The UserActor should represent the actual user on the Server and thus include the logic of all actions that the user can perform on the Server. This works fine as long as I send all user interaction over the WebSocket.
Now what about other user input, like form submission? The application includes a form whose data should not go over the WebSocket, but rather be submitted with a POST request (perhaps with AJAX) and bound to the model in a controller, as described in the documentation.
def saveContact = Action { implicit request =>
    contactForm.bindFromRequest.fold(
        formWithErrors => {
            BadRequest(views.html.contact.form(formWithErrors))
        },
        contact => {
            val contactId = Contact.save(contact)
            Redirect(routes.Application.showContact(contactId)).flashing("success" -> "Contact saved!")
        }
    )
}
This example is taken from the Playframework documentation.
Now, how do I link the form submission handler with the UserActor? Say I want to tell the UserActor that a form has been submitted. A trivial example would be that the UserActor sends one value of the form back over the WebSocket to the client as soon as it is received. So basically the problem reduces to this: I want to be able to send the UserActor messages from any controller.
I might come up with the idea of sending all form data over the WebSocket, but I also want to support uploads of large data in the future, which I want to tackle as described in this blog post. One scenario I could imagine is that the UserActor should be messaged for each chunk it receives.
I guess one problem is that the UserActor and the WebSocket actor are the same, and I should rather split their logic so that the UserActor is only associated with the session, but I have no idea how to accomplish this. Maybe I need another actor, say a UserManager, which keeps track of the existing UserActors and provides access to them?
Do you have any suggestions, recommendations or perhaps an example application which also deals with this case? Thank you very much in advance.
Best regards
Don't use the actor that you pass to tryAcceptWithActor as a representation of the User. It should represent a particular session with that user. Possibly, one of many concurrent sessions (multiple browsers, or tabs) a user could have open at a particular time.
Create a separate actor to represent the user and all of the actions it can perform. Now the session actors should forward their messages to the user actor. Traditional controller methods can also forward requests to the corresponding user actors.
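A rough sketch of that split is below. All of the names (UserManager, SessionActor, FromClient, FormSubmitted, and so on) are made up for illustration, and the wiring notes at the end assume the ws and saveContact actions from the question:

import akka.actor.{Actor, ActorRef, Props, Terminated}
import play.api.libs.json.{JsValue, Json}

// Illustrative message types
case class FromClient(js: JsValue)        // a frame that arrived over one WebSocket session
case class FormSubmitted(contactId: Long) // sent by an ordinary controller action
case class Attach(session: ActorRef)      // a new WebSocket session for this user
case class Push(js: JsValue)              // something the user actor wants a session to show

// One actor per user: owns the user's server-side logic and fans results
// back out to whichever sessions (tabs/browsers) are currently attached.
class UserActor(uid: String) extends Actor {
    private var sessions = Set.empty[ActorRef]

    def receive = {
        case Attach(s)         => context.watch(s); sessions += s
        case Terminated(s)     => sessions -= s
        case FromClient(js)    => () // user-level handling of WebSocket input goes here
        case FormSubmitted(id) =>
            sessions.foreach(_ ! Push(Json.obj("contactSaved" -> id)))
    }
}

// Looks up (or creates) the UserActor for a uid and forwards the message to it.
class UserManager extends Actor {
    def receive = {
        case (uid: String, msg: Any) =>
            context.child(uid)
                .getOrElse(context.actorOf(Props(new UserActor(uid)), uid))
                .forward(msg)
    }
}

// The actor handed to WebSocket.tryAcceptWithActor: one per open socket.
class SessionActor(uid: String, out: ActorRef, users: ActorRef) extends Actor {
    override def preStart(): Unit = users ! (uid -> Attach(self))

    def receive = {
        case js: JsValue => users ! (uid -> FromClient(js)) // from the browser
        case Push(js)    => out ! js                        // pushed by the UserActor
    }
}

In ws you would then return Right(out => Props(new SessionActor(uid, out, userManager))) instead of Right(UserActor.props(uid)), and saveContact can do userManager ! (uid -> FormSubmitted(contactId)) after reading the uid from the session. Here userManager is a single actor created once at startup, e.g. play.api.libs.concurrent.Akka.system.actorOf(Props[UserManager], "users") with an implicit Application in scope.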

Akka: send error from routee back to caller

In my project, I created a UserRepositoryActor which creates its own router with 10 UserRepositoryWorkerActor instances as routees.
As you can see, if any error occurs while fetching data from the database, it occurs at a worker.
When I want to fetch a user from the database, I send a message to UserRepositoryActor with this command:
val resultFuture = userRepository ? FindUserById(1)
and I set a 10-second timeout on the ask.
If the network connection has a problem, UserRepositoryWorkerActor immediately gets a ConnectionException from the underlying database driver, and then (as I understand it) the router restarts the current worker and sends the FindUserById(1) command to another available worker, while resultFuture gets an AskTimeoutException after the 10 seconds have passed. Some time later, once the connection is back to normal, UserRepositoryWorkerActor successfully fetches the data from the database and tries to send the result back to the caller, only to find that resultFuture has already timed out.
I want to propagate the error from UserRepositoryWorkerActor up to the caller immediately after the exception occurs; that will prevent resultFuture from waiting 10 seconds and stop UserRepositoryWorkerActor from trying to fetch the data again and again.
How can I do that?
By the way, if you have any suggestions about my current design, please let me know. I'm very new to Akka.
Your assumption about the router resending the message is wrong. The router has already passed the message on to a routee and doesn't have it any more.
As far as the ConnectionException is concerned, you could wrap the database call in a scala.util.Try and send the response to sender(). Something like:
Try(SomeDAO.getSomeObjectById(id)) match {
    case Success(s) => sender() ! s
    // wrap the exception in Status.Failure so the ask future on the caller's side fails immediately
    case Failure(e) => sender() ! akka.actor.Status.Failure(e)
}
Your design looks correct. Having a router allows you to distribute work and also to limit the number of concurrent workers accessing the database.
Option 1
You can make your router watch its children and act accordingly when they are terminated. For example (taken from here):
import akka.actor.{ Actor, Props, Terminated }
import akka.routing.{ ActorRefRoutee, RoundRobinRoutingLogic, Router }

class Master extends Actor {
    var router = {
        val routees = Vector.fill(5) {
            val r = context.actorOf(Props[Worker])
            context watch r
            ActorRefRoutee(r)
        }
        Router(RoundRobinRoutingLogic(), routees)
    }

    def receive = {
        case w: Work =>
            router.route(w, sender())
        case Terminated(a) =>
            router = router.removeRoutee(a)
            val r = context.actorOf(Props[Worker])
            context watch r
            router = router.addRoutee(r)
    }
}
In your case you can send some sort of failure message from the repository actor to the client. The repository actor can maintain a map from worker ref to request id, so that when a worker terminates it knows which request failed. It can also record the time between the start of the request and the worker's termination to decide whether it is worth retrying with another worker.
Option 2
Simply catch all non-fatal exceptions in your worker actor and reply with appropriate success/failure messages. This is much simpler, but you might still want to restart the worker to make sure it is in a good state.
P.S. The router will not restart failed workers, nor will it try to resend messages to them by default. Take a look at the supervisor strategy and Option 1 above for how to achieve that.
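A minimal sketch of Option 2, assuming the FindUserById and SomeDAO names from above and that requests are forwarded so that sender() inside the worker is still the asking actor; the key point is that akka.actor.Status.Failure fails the caller's ask future right away instead of letting it hit the 10-second timeout:

import akka.actor.{ Actor, Status }
import scala.util.control.NonFatal

class UserRepositoryWorkerActor extends Actor {
    def receive = {
        case FindUserById(id) =>
            try {
                sender() ! SomeDAO.getSomeObjectById(id)
            } catch {
                case NonFatal(e) =>
                    // fail the caller's future immediately instead of letting the ask time out
                    sender() ! Status.Failure(e)
                    // rethrow so the supervisor can still restart this worker if that is the strategy
                    throw e
            }
    }
}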

Play Framework - Store Information About Current Request

In my Play Framework 2 application I'd like to log a message with the request, the response, and some details about the response, such as the number of search results returned from an external web call.
What I have now is a filter like this:
object AccessLog extends Filter {
    import play.api.mvc._
    import play.api.libs.concurrent.Execution.Implicits._

    def apply(next: RequestHeader => Future[SimpleResult])(request: RequestHeader): Future[SimpleResult] = {
        val result = next(request)
        result map { r =>
            play.Logger.info(s"Request: ${request.uri} - Response: ${r.header.status}")
        }
        result
    }
}
At the point of logging, I've already converted my classes into JSON, so it seems wasteful to parse the JSON back into objects just so I can log information about them.
Is it possible to compute the number of search results earlier in the request pipeline, maybe store it in a dictionary, and pull it out when I log the message here?
I was looking at flash, but I don't want the values to be sent out in a cookie under any circumstances. Maybe I could clear the flash instead, but if there's a more suitable way I'd like to see it.
This is part of a read-only API that does not involve user accounts or sessions.
You could try using the play.api.cache.Cache object if you can come up with a reproducible unique request identifier. Once you have logged your request, you can remove it from the Cache.
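A sketch of that idea, keyed on the request's built-in id (Play 2.2/2.3-style API to match the filter above); the search action and its placeholder helpers are made up for illustration, and the cache entries are given a short expiration so they clean themselves up even if the filter never reads them:

import play.api.Play.current            // supplies the implicit Application that Cache needs
import play.api.cache.Cache
import play.api.libs.concurrent.Execution.Implicits._
import play.api.libs.json.Json
import play.api.mvc._
import scala.concurrent.Future

object Search extends Controller {
    // stand-in for the real external search call
    private def doSearch(q: String): Seq[String] = Seq.empty

    def search(q: String) = Action { implicit request =>
        val results = doSearch(q)
        // stash the count under this request's unique id before serialising to JSON
        Cache.set(s"resultCount:${request.id}", results.size, expiration = 60)
        Ok(Json.toJson(results))
    }
}

object AccessLog extends Filter {
    def apply(next: RequestHeader => Future[SimpleResult])(request: RequestHeader): Future[SimpleResult] = {
        val result = next(request)
        result map { r =>
            val count = Cache.getAs[Int](s"resultCount:${request.id}")
            play.Logger.info(
                s"Request: ${request.uri} - Response: ${r.header.status} - Results: ${count.getOrElse(0)}")
        }
        result
    }
}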

DistributedPubSubMediator Subscription via Proxy Actor not working

My colleague and I have been puzzled by the different behaviour the DistributedPubSubMediator shows when subscribing/unsubscribing directly versus via a proxy actor. We put together a test that shows the different results below.
From our understanding, ActorRef.forward should pass on the original sender, hence whether the message is sent directly to the mediator or via a proxy actor should not matter. See http://www.scala-lang.org/api/current/index.html#scala.actors.ActorRef.
To work around this, we had to extend the DistributedPubSubMediator class and include the logic the DistributedPubSubMediator object already provides. Ideally, we'd prefer to use the object directly and revert our code.
This seems like a bug. Does anyone know the underlying reason for this unusual behaviour? Please help...
[22-Oct-2013] The test has been updated based on Roland's answer (thank you), adding expectMsgType on SubscribeAck and UnsubscribeAck. We now receive the SubscribeAck, but strangely not the UnsubscribeAck. It is not a major issue, but we would like to know why.
Another question, if we may ask, is whether it is good practice to subscribe remote actors to the DistributedPubSubMediator via a proxy actor running in the same ActorSystem?
At the moment we have:
The subscribing app discovers the publishing app (in a non-Akka way) and gets the Cluster address.
The remote subscriber uses this address and the known proxy actor's path to send an Identify request.
The remote subscriber gets the ActorIdentity response and then subscribes/unsubscribes via this (remote) proxy.
On the publisher app, Subscribe/Unsubscribe messages are forwarded to the DistributedPubSubMediator, which is then used to publish subsequent business messages.
We are not joining the Cluster, as in the Akka Reactor pubsub chat client example (i.e. we only use the DistributedPubSubMediator to publish), because we need to handle failover on the publisher side.
[5-Nov-2013] Added a test for the Send message. It does not seem to work and we haven't figured out why yet.
package star.common.pubsub

import org.scalatest.{BeforeAndAfterAll, FunSuite}
import org.junit.runner.RunWith

import akka.contrib.pattern.DistributedPubSubExtension
import akka.contrib.pattern.DistributedPubSubMediator._
import akka.testkit.TestKit
import akka.actor.{Actor, ActorSystem, ActorRef, Props}

import scala.concurrent.duration._
import com.typesafe.config.ConfigFactory

object MediatorTest {
    val config = ConfigFactory.parseString(s"""
        akka.actor.provider="akka.cluster.ClusterActorRefProvider"
        akka.remote.netty.tcp.port=0
        akka.extensions = ["akka.contrib.pattern.DistributedPubSubExtension"]
        """)
}

@RunWith(classOf[org.scalatest.junit.JUnitRunner])
class MediatorTest extends TestKit(ActorSystem("test", MediatorTest.config)) with FunSuite {
    val mediator = DistributedPubSubExtension(system).mediator
    val topic = "example"
    val message = "Published Message"

    // val joinAddress = Cluster(system).selfAddress
    // Cluster(system).join(joinAddress)

    test("Direct subscribe to mediator") {
        mediator.!(Subscribe(topic, testActor))(testActor)
        expectMsgType[SubscribeAck](5 seconds)

        mediator.!(Publish(topic, message))(testActor)
        expectMsg(2 seconds, message)

        mediator.!(Unsubscribe(topic, testActor))(testActor)
        expectMsgType[UnsubscribeAck](5 seconds)

        mediator ! Publish(topic, message)
        expectNoMsg(2 seconds)
    }

    test("Subscribe to mediator via proxy") {
        class Proxy extends Actor {
            override def receive = {
                case subscribe: Subscribe =>
                    mediator forward subscribe
                case unsubscribe: Unsubscribe =>
                    mediator forward unsubscribe
                case publish: Publish =>
                    mediator.!(publish)
            }
        }

        val proxy = system.actorOf(Props(new Proxy), "proxy")

        proxy.!(Subscribe(topic, testActor))(testActor)
        expectMsgType[SubscribeAck](2 seconds)

        proxy ! Publish(topic, message)
        expectMsg(5 seconds, message)

        proxy.!(Unsubscribe(topic, testActor))(testActor)
        expectMsgType[UnsubscribeAck](5 seconds)

        proxy ! Publish(topic, message)
        expectNoMsg(5 seconds)
    }

    test("Send message to address") {
        val testActorAddress = testActor.path.toString

        // val system2 = ActorSystem("test", MediatorTest.config)
        // Cluster(system2).join(joinAddress)

        mediator.!(Subscribe(topic, testActor))(testActor)
        expectMsgType[SubscribeAck](5 seconds)

        println(testActorAddress) // akka://test/system/testActor1

        mediator.!(Publish(topic, message))(testActor)
        expectMsg(2 seconds, message)

        mediator ! Send(testActorAddress, message, false)
        expectMsg(5 seconds, message)
    }
}
Two things:
whether or not you use forward does not matter much, since you do not have a useful sender in scope in your test procedure (you are not mixing in ImplicitSender); but this is not the problem
you are not forwarding the Publish message in the proxy, which is why the message does not get published
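Per the second point, the proxy's receive should forward Publish as well, so the mediator sees the original sender for every message type:

class Proxy extends Actor {
    override def receive = {
        case subscribe: Subscribe     => mediator forward subscribe
        case unsubscribe: Unsubscribe => mediator forward unsubscribe
        case publish: Publish         => mediator forward publish   // was mediator.!(publish)
    }
}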