I am new to Akka and Scala.
I have to build a service which sends emails with an attachment to the given email IDs. I am using SendGrid as the gateway.
For the attachment I have a 28KB file uploaded in S3.
I have a REST service to which I can pass a document ID and fetch the document as an InputStream. This InputStream then has to be sent to many email IDs. Downloading the file is handled by an actor called "attachmentActor", which I create below.
Now let's say I have two email IDs which I need to send that attachment to. The problem I am facing is that the complete file is not sent to both; in fact the 28KB file gets split into 16KB and 12KB, which is what finally gets sent to the email IDs:
email ID 1 receives 16KB // it should actually be 28KB
email ID 2 receives 12KB // it should actually be 28KB
Following is the code:
class SendgridConsumer {
  def receive(request: EmailRequest) = {
    val service = Sendgrid(username, password)
    val logData = request.logData
    var errorMessage = new String
    val attachmentRef = system.actorOf(Props[AttachmentRequestConsumer], "attachmentActor")
    val future = attachmentRef ? AttachmentRequest(request.documentId.get)
    var targetStream = Await.result(future, timeout.duration).asInstanceOf[InputStream]
    val results = request.emailContacts.par.map(emailContact => {
      val email = postData(new Email(), request, emailContact, targetStream, request.documentName.get)
      val sendGridResponse = service.send(email)
    })
  }
}
// postData() creates an Email Object
// This is my Attachment Actor
class AttachmentRequestConsumer extends Actor with ActorLogging {
  def receive = {
    case request: AttachmentRequest => {
      log.info("inside AttachmentRequestConsumer with document Id: " + request.documentId)
      val req: HttpRequest = Http(url)
      val response = req.asBytes
      val targetStream = ByteSource.wrap(response.body).openStream()
      log.info("response body: " + response.body)
      sender ! targetStream
      targetStream.close()
    }
  }
}
One of the things you should know about actors is that you should not send mutable objects (such as an InputStream) as messages (technically you can, as long as you don't mutate them). Another thing is that message sending is asynchronous, which means that targetStream.close() is called before the other actor receives the message. That is probably why you are getting truncated attachments.
One thing you could do is send the data instead of an InputStream. Something like this:
def receive = {
  case request: AttachmentRequest => {
    log.info("inside AttachmentRequestConsumer with document Id: " + request.documentId)
    val req: HttpRequest = Http(url)
    val response = req.asBytes
    val data = ByteSource.wrap(response.body).read.toVector
    log.info("response body: " + response.body)
    sender ! data
  }
}
That is, if you can comfortably fit the contents of the attachment into memory. If that is not the case, you can break it into chunks.
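For example, a rough sketch of a chunked variant, assuming a hypothetical AttachmentChunk message and that the document id is a String (the receiving side would then have to reassemble the chunks):
case class AttachmentChunk(documentId: String, seqNr: Int, last: Boolean, bytes: Vector[Byte])

// inside the case request: AttachmentRequest handler, after val response = req.asBytes
val chunkSize = 64 * 1024
val chunks = response.body.grouped(chunkSize).toVector
chunks.zipWithIndex.foreach { case (chunk, i) =>
  sender ! AttachmentChunk(request.documentId, i, last = i == chunks.size - 1, bytes = chunk.toVector)
}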
On a side note, you should not be blocking in receive (the Await.result). A better approach would be to just send a message to AttachmentRequestConsumer and then expect a message of type Seq[Byte] (or, even better, some wrapper like AttachmentResponse) back in SendgridConsumer's receive, as sketched below.
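A minimal sketch of that message-based flow, assuming SendgridConsumer is itself an actor and AttachmentResponse is a simple wrapper case class (neither exists in the original code); AttachmentRequestConsumer would then reply with sender ! AttachmentResponse(data) instead of the bare Vector:
import akka.actor.{Actor, Props}

case class AttachmentResponse(data: Vector[Byte])

class SendgridConsumer extends Actor {
  val attachmentRef = context.actorOf(Props[AttachmentRequestConsumer], "attachmentActor")

  def receive = {
    case request: EmailRequest =>
      // fire-and-forget: the reply arrives later as a normal message, no Await needed
      attachmentRef ! AttachmentRequest(request.documentId.get)
      context.become(waitingForAttachment(request), discardOld = false)
  }

  def waitingForAttachment(request: EmailRequest): Receive = {
    case AttachmentResponse(data) =>
      // build and send one email per contact from the in-memory bytes,
      // e.g. with postData(...) and service.send(...) as in the original code
      context.unbecome()
  }
}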
I'm creating a proxy API using Akka that does some preparation before forwarding the request to the actual API. For one of the endpoints, the response is streaming JSON data and the client may close the connection at any time. Akka seems to handle this automatically, but the issue is that I need to do some cleanup after the client closes the connection.
path("query") {
post {
decodeRequest {
entity(as[Query]) { query =>
// proxy does some preparations
val json: String = query.prepared.toJson.toString()
// proxy sends request to actual server
val request = HttpRequest(
method = HttpMethods.POST,
uri = serverUrl + "/query",
entity = HttpEntity(ContentTypes.`application/json`, json)
)
val responseFuture = Http().singleRequest(request)
val response: HttpResponse = Await.result(responseFuture, PROXY_TIMEOUT)
// proxy forwards server's response to user
complete(response)
}
}
}
}
I've tried doing something like
responseFuture.onComplete(_ => doCleanup())
But that doesn't work because responseFuture completes immediately even though the server continues to send data until the client closes the connection. complete(response) also returns immediately.
So I'm wondering how I can make a call to doCleanup() only after the client has closed the connection.
Edit: The cleanup I need to do is because the proxy creates some data streams that are meant to be temporary and only persist until the last message is sent by the server. Once that happens these streams need to be deleted.
You can do it with minimal changes to your code, like this:
val responseFuture = Http().singleRequest(request)
val response: HttpResponse = try {
  Await.result(responseFuture, PROXY_TIMEOUT)
} finally {
  doCleanup()
}
complete(response)
or you can do it without blocking:
val responseFuture = Http().singleRequest(request)
val cleaned = responseFuture.andThen { case _ => doCleanup() }
complete(cleaned) // it's possible to complete a route with a Future
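Put together inside the route from the question, a minimal sketch (still using serverUrl, Query and doCleanup from the question, and assuming an implicit ExecutionContext is in scope for andThen):
entity(as[Query]) { query =>
  val request = HttpRequest(
    method = HttpMethods.POST,
    uri = serverUrl + "/query",
    entity = HttpEntity(ContentTypes.`application/json`, query.prepared.toJson.toString())
  )
  // complete accepts a Future[HttpResponse]; doCleanup runs once that future finishes
  complete(Http().singleRequest(request).andThen { case _ => doCleanup() })
}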
We have a requirement to implement server-sent events for the following use cases:
Send a notification to the UI after some processing on the server. This processing is based on some logic.
Send a notification to the UI after reading messages from RabbitMQ and performing some operation on them.
Technology stack: Scala (2.11/2.12) with the Play framework (2.6.x).
Library: akka.stream.scaladsl.Source
We started our proof of concept with the following example, https://github.com/playframework/play-scala-streaming-example, and then extended it by creating different sources.
We tried creating the source using Source.apply and Source.single.
But as soon as all elements in the source have been pushed to the UI, my event stream gets closed. I don't want the event stream to close, and I don't want to use a timer (Source.tick) or Source.repeat.
When my source was created, the collection had, say, x elements, and then the service added 4 more. But after x elements the event stream got closed and then reopened.
Is there any way my event stream can be infinite and be closed only when the session is logged off or when we explicitly close it?
// Code for keepAlive (as asked in the comments)
object NotficationUtil {

  var userNotificationMap = Map[Integer, Queue[String]]()

  def addUserNotification(userId: Integer, message: String) = {
    var queue = userNotificationMap.getOrElse(userId, Queue[String]())
    queue += message
    userNotificationMap.put(userId, queue)
  }

  def pushNotification(userId: Integer): Source[JsValue, _] = {
    var queue = userNotificationMap.getOrElse(userId, Queue[String]())
    Source.single(Json.toJson(queue.dequeueAll { x => true }))
  }
}
@Singleton
class EventSourceController @Inject() (cc: ControllerComponents) extends AbstractController(cc) with FlowFactory {

  def pushNotifications(user_id: Integer) = Action {
    val stream = NotficationUtil.pushNotification(user_id)
    Ok.chunked(stream.keepAlive(50.second, () => Json.obj("data" -> "heartbeat")) via EventSource.flow)
      .as(ContentTypes.EVENT_STREAM)
  }
}
Use the code below to create the ActorRef and the publisher:
val (ref, sourcePublisher) = Source.actorRef[T](Int.MaxValue, OverflowStrategy.fail)
  .toMat(Sink.asPublisher(true))(Keep.both)
  .run()
And create your source from this publisher:
val testsource = Source.fromPublisher[T](sourcePublisher)
And register your listener as:
Ok.chunked(
  testsource.keepAlive(
    50.seconds,
    () => Json.obj("data" -> "heartbeat")) via EventSource.flow)
  .as(ContentTypes.EVENT_STREAM)
  .withHeaders("X-Accel-Buffering" -> "no", "Cache-Control" -> "no-cache")
Send your JSON data to the ref actor and it will flow as an event stream through this source to the front end.
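For example, if T is JsValue, pushing a hypothetical notification could look like this:
// the message travels through testsource and is delivered to the client as an SSE event
ref ! Json.obj("data" -> "new notification")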
Hope it helps.
I want to implement a client app that first sends a request to the server and then waits for its reply (similar to HTTP).
My client process may be:
val topic = async.topic[ByteVector]
val client = topic.subscribe
Here is the API:
trait Client {
  val incoming = tcp.connect(...)(client)
  val reqBus = topic.publish
  def ask(req: ByteVector): Task[Throwable \/ ByteVector] = {
    (tcp.writes(req).flatMap(_ => tcp.reads(1024))).to(reqBus)
    ???
  }
}
Then, how do I implement the remaining part of ask?
Usually, the implementation is done by publishing the message via a sink and then awaiting some sort of reply on some source, like your topic.
Actually we have a lot of idioms like this in our code:
def reqRply[I, O, O2](src: Process[Task, I], sink: Sink[Task, I], reply: Process[Task, O])(pf: PartialFunction[O, O2]): Process[Task, O2] = {
  merge.mergeN(Process(reply, (src to sink).drain)).collectFirst(pf)
}
Essentially this first hooks into the reply stream to await any resulting O confirming that our request was sent. Then we publish the message I and consult pf for any incoming O to be eventually translated to an O2, and then terminate.
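A rough sketch of how ask from the question could be wired to this helper (assuming the reply fits in a single read, with error handling kept minimal; import scalaz.syntax.either._ is needed for .right/.left):
// inside the Client trait, replacing the ??? version of ask
def ask(req: ByteVector): Task[Throwable \/ ByteVector] =
  reqRply(tcp.writes(req).flatMap(_ => tcp.reads(1024)), reqBus, client) {
    case bytes => bytes.right[Throwable] // first incoming reply wins
  }.runLast
    .map(_.getOrElse(new Exception("no reply received").left[ByteVector]))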
My app lists hosts, and the list is dynamic and changing. It is based on Akka actors and Server-Sent Events.
When a new client connects, they need to get the current list to display. But I don't want to push the list to all clients every time a new one connects. So, I followed the realtime elasticsearch example and emulated unicast by creating an (Enumerator, Channel) pair per Connect() and giving it a UUID. When I need to broadcast I will map over all of them and update them, with the intent of being able to unicast to clients (and there should be very few of those).
My problem is: how do I get the new client its UUID so it can use it? The flow I am looking for is:
- client asks for an EventStream
- server creates a new (Enumerator, Channel) with a UUID, and returns the Enumerator and UUID to the client
- client asks for the table using the UUID
- server pushes the table only on the channel corresponding to that UUID
So, how would the client know about the UUID? Had it been a WebSocket, sending the request would have had the desired result, as it would have reached its own channel. But in SSE the client -> server communication happens on a different channel. Any solutions to that?
Code snippets:
case class Connected(uuid: UUID, enumerator: Enumerator[JsValue])

trait MyActor extends Actor {
  var channelMap = new HashMap[UUID, (Enumerator[JsValue], Channel[JsValue])]

  def connect() = {
    val con = Concurrent.broadcast[JsValue]
    val uuid = UUID.randomUUID()
    channelMap += (uuid -> con)
    Connected(uuid, con._1)
  }
  ...
}
object HostsActor extends MyActor {
  ...
  override def receive = {
    case Connect => {
      sender ! connect
    }
    ...
  }
}
object Actors {

  def hostsStream = {
    getStream(getActor("hosts", Props(HostsActor)))
  }

  def getActor(actorPath: String, actorProps: Props): Future[ActorRef] = {
    /* some regular code to create a new actor if the path does not exist, or return the existing one otherwise */
  }

  def getStream(far: Future[ActorRef]) = {
    far flatMap { ar =>
      (ar ? Connect).mapTo[Connected].map { stream =>
        stream
      }
    }
  }
  ...
}
object AppController extends Controller {
  def getHostsStream = Action.async {
    Actors.hostsStream map { ac =>
      // ************************************
      // **   how do I use the UUID here?? **
      // ************************************
      Ok.feed(ac.enumerator &> EventSource()).as("text/event-stream")
    }
  }
}
I managed to solve it by asynchronously pushing the UUID after returning the channel, with some delay in between:
override def receive = {
  case Connect => {
    val con = connect()
    sender ! con
    import scala.concurrent.ExecutionContext.Implicits.global
    context.system.scheduler.scheduleOnce(0.1 seconds) {
      unicast(
        con.uuid,
        JsObject(
          Seq(
            "uuid" -> JsString(con.uuid.toString)
          )
        )
      )
    }
  }
}
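The unicast helper is not shown in the question; a minimal sketch of what it could look like, using the channelMap from MyActor above:
def unicast(uuid: UUID, msg: JsValue) =
  channelMap.get(uuid).foreach { case (_, channel) => channel.push(msg) }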
This achieved its goal: the client got the UUID and was able to cache it and use it to push a getHostsList request to the server:
@stream = new EventSource("/streams/hosts")
@stream.addEventListener "message", (event) =>
  data = JSON.parse(event.data)
  if data.uuid
    @uuid = data.uuid
    $.ajax
      type: 'POST',
      url: "/streams/hosts/" + @uuid + "/sendlist"
      success: (data) ->
        console.log("sent hosts request to server successfully")
      error: () ->
        console.log("failed sending hosts request to server")
  else
    # handle parsing hosts
    @view.render()
While this works, I must say I don't like it. Introducing an artificial delay so the client can get the channel and start listening (I tried with no delay, and the client didn't get the UUID) is dangerous, as it might still miss it if the system gets busier, but making the delay too long hurts the reactivity aspect.
If anyone has a solution in which this can be done synchronously, having the UUID returned as part of the original EventSource request, I would be more than happy to demote my solution.
I'd like to be able to send back a response to the client before I do my logging/cleanup for a request.
In Play 1.x this was possible with the @Finally annotation. I've read through some posts that say those annotations were replaced by action composition, but I'm unclear how to emulate @Finally using it.
It seems to me that the response will only be returned after all the logic in my custom actions has completed.
Have I missed something, or is there no way to do this in Play 2.0?
[EDIT FOR CLARITY]
In other words, I want to be able to run logic after I receive a request and send a response. So I'd like to be able to construct a timeline of the form:
Client sends a request to my server
Server sends back a 200 response, which the client receives
The server does additional processing, logging, etc
In Play 1.x I believe I could annotate my additional processing logic with @Finally and have it work the way I want it to.
Action composition alone is not sufficient to do the job, but Action composition + Future or Action composition + Actors are good ways to achieve this.
Action Composition + Future
Generate your Response, launch your logging/processing in an async context and, in parallel, send the result.
def LoggedAction(f: Request[AnyContent] => Result) = Action { request =>
  val result = f(request)
  concurrent.future(myLogAction(request, result))
  result
}
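Used from a controller, that could look like this (a sketch; myLogAction stands for whatever logging or cleanup you need to run after the response is sent):
def index = LoggedAction { request =>
  // the Result is returned to the client right away; myLogAction runs asynchronously
  Ok("Your new application is ready.")
}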
Action composition + Actors
It's a cleaner way to achieve this. As in the previous case, generate your response, send a logging/processing event to your actor(s) and, in parallel, send the result.
import play.api._
import play.api.mvc._
import play.libs._
import akka.actor._

object Application extends Controller with Finally {
  def index = LoggedAction { r =>
    Ok(views.html.index("Your new application is ready."))
  }
}

trait Finally {
  self: Controller =>

  lazy val logActor = Akka.system.actorOf(Props(new LogActor), name = "logActor")

  def LoggedAction(f: Request[AnyContent] => Result) = Action { request =>
    val result = f(request)        // Generate response
    logActor ! LogRequest(request) // Send log event to LogActor
    println("-> now send page to client")
    result
  }

  case class LogRequest(request: Request[AnyContent])

  class LogActor extends Actor {
    def receive = {
      case LogRequest(req) => {
        println(req.host)
        // ....
      }
    }
  }
}
// Console
-> now send page to client
127.0.0.1:9000