Use of Streams, Enumerators and WebSockets in Play Framework - Scala

I'm looking for the proper way to use Play's Enumerator (play.api.libs.iteratee.Enumerator[A]) in my code. I have a stream of objects of type InfoBlock and I want to redirect it to a WebSocket. What I actually do is:
The data structure holding the blocks
import scala.collection.mutable

private lazy val buf: mutable.Queue[InfoBlock] = new mutable.SynchronizedQueue[InfoBlock]
The callback to be used in the Enumerator
def getCallback: Future[Option[InfoBlock]] = Future {
  if (!buf.isEmpty)
    Some(buf.dequeue)
  else
    None
}
Blocks are produced by another thread and added to the queue using:
buf += new InfoBlock(...)
Then in the controller I want to set up a WebSocket to stream that data, doing:
def stream = WebSocket.using[String] { request =>
  val in = Iteratee.consume[String]()
  val enu: Enumerator[InfoBlock] = Enumerator.fromCallback1(
    isFirst => extractor.getCallback
  )
  val out: Enumerator[String] = enu &> Enumeratee.map(blk => blk.author + " -> " + blk.msg)
  (in, out)
}
It works, but with a big problem: when a connection is opened it sends a bunch of blocks (roughly 50) and then stops. If I open a new WebSocket I get another bunch of blocks, but no more. I tried setting some properties on the JavaScript WebSocket object, in particular
websocket.binaryType = "arraybuffer"
because I thought using "blob" might be the cause, but I was wrong: the problem must be server side and I have no clue.

From the Enumerator ScalaDocs on Enumerator.fromCallback, describing the retriever function:
The input function. Returns a future eventually redeemed with Some value if there is input to pass, or a future eventually redeemed with None if the end of the stream has been reached.
This means that the enumerator will start by pulling everything off the queue. When the queue is empty, the callback returns None. The enumerator sees this as the end of the stream and sends a Done state downstream; it won't look for any more data.
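One minimal fix, then, is a callback that never returns None while the stream should stay open: when the queue is empty, wait briefly and retry instead. A sketch, assuming Play 2.1+'s Promise.timeout helper and the default execution context:

import scala.concurrent.Future
import scala.concurrent.duration._
import play.api.libs.concurrent.Promise
import play.api.libs.concurrent.Execution.Implicits.defaultContext

def getCallback: Future[Option[InfoBlock]] =
  if (buf.nonEmpty) Future.successful(Some(buf.dequeue))
  else Promise.timeout((), 100.millis).flatMap(_ => getCallback) // retry after 100 ms; never signals end-of-stream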
Rather than using a mutable queue for message passing, try to push the Enumerator/Iteratee paradigm into your worker. Create an Enumerator that outputs instances of what you're creating, and have the iteratee pull from that instead. You can stick some Enumeratees in the middle to do transforms if you need to, as in the sketch below.
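For example, a minimal sketch of that push-based approach using Concurrent.broadcast from play.api.libs.iteratee (the channel and enumerator names are illustrative):

import play.api.libs.iteratee.{Concurrent, Enumeratee, Iteratee}
import play.api.mvc.WebSocket

// One shared (enumerator, channel) pair; every connected socket
// receives the blocks pushed after it connects.
val (blockEnumerator, blockChannel) = Concurrent.broadcast[InfoBlock]

// The producer thread pushes blocks instead of enqueueing them:
//   blockChannel.push(new InfoBlock(...))

def stream = WebSocket.using[String] { request =>
  val in = Iteratee.ignore[String]
  val out = blockEnumerator &> Enumeratee.map(blk => blk.author + " -> " + blk.msg)
  (in, out)
}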

Related

Akka Stream Exception Thrown When Downloading File From S3

I am trying to download a file from S3 using the following code:
wsClient
  .url(url)
  .withMethod("GET")
  .withHttpHeaders(my_headers: _*)
  .withRequestTimeout(timeout)
  .stream()
  .map {
    case AhcWSResponse(underlying) =>
      underlying.bodyAsBytes
  }
When I run this I get the following exception:
akka.stream.StreamLimitReachedException: limit of 13 reached
Is this because I am using bodyAsBytes? What does this error mean? I also see this warning message, which is probably related:
blockingToByteString is a blocking and unsafe operation!
This happens because when you use stream(), you need to consume the source using bodyAsSource. It is important to do so, as it would otherwise backpressure the connection. body and bodyAsBytes are implemented and do consume the source, but the implementors decided to let you know that you should have used execute() instead of stream() by limiting the body to 13 ByteStrings and a 50 ms timeout.
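A sketch of consuming the streamed body the intended way, assuming an implicit Materializer and ExecutionContext are in scope (the target path is illustrative):

import java.nio.file.Paths
import akka.stream.scaladsl.FileIO

wsClient
  .url(url)
  .withMethod("GET")
  .withHttpHeaders(my_headers: _*)
  .withRequestTimeout(timeout)
  .stream()
  .flatMap { response =>
    // Stream the body straight to disk instead of buffering it all in memory
    response.bodyAsSource.runWith(FileIO.toPath(Paths.get("/tmp/download")))
  }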
You are getting the StreamLimitReachedException because the number of incoming elements is larger than the configured maximum.
val MAX_ALLOWED_SIZE = 100

// OK. Future will fail with a `StreamLimitReachedException`
// if the number of incoming elements is larger than max
val limited: Future[Seq[String]] =
  mySource.limit(MAX_ALLOWED_SIZE).runWith(Sink.seq)

// OK. Collect up to the max-th element only, then cancel upstream
val ignoreOverflow: Future[Seq[String]] =
  mySource.take(MAX_ALLOWED_SIZE).runWith(Sink.seq)
You can find more information about the streaming process in the Akka Streams documentation.

Gatling: Producer and consumer users

I have a load test where three sets of users create something and a different set of users perform some actions on them.
What is the recommended way to co-ordinate this behaviour in Gatling?
I'm currently using an object which contains a LinkedBlockingQueue into which the "producers" put the ID and from which the consumers take it; see below.
However, it causes the test to hang after ~20s (targeting 1 tps).
I've also tried using poll with a timeout, but instead of hanging, the poll almost always fails (after 30s) or causes a hang if the timeout is larger (1m+).
This seems to be because all the threads are blocked waiting for something from the queue, so this isn't compatible with the way Gatling tests run (i.e. not 1 thread per user). Is there a non-blocking way to wait in the Gatling DSL?
Producer.scala
// ...
scenario("Produce stuff")
.exec(/* HTTP call which extracts an ID*/)
.exec(session => Queue.ids.put(session("my-id").as[String])
// ...
Consumer.scala
// ...
scenario("Consume stuff")
.exec(session => session.set("my-id", Queue.ids.take()))
.exec(/* HTTP call which users ID*/)
// ...
Queue.scala
import java.util.concurrent.LinkedBlockingQueue

object Queue {
  val ids = new LinkedBlockingQueue[String]()
}
As an alternative I've tried to use the application's own functionality, but it seems a harder problem to ensure that each user picks a unique item from the app.
Acknowledging this is all a hack, my current solution in Consumer.scala is:
doIf(_ => Queue.ids.size() < MIN_COUNT)(
  pause(30) // wait for 30s if queue is initially too small
)
.doWhile(_ => Queue.ids.size() >= MIN_COUNT)(
  exec(session => session.set("my-id", Queue.ids.take()))
    .exec(...)
    .pause(30)
)
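A sketch of a fully non-blocking variant using poll(), which returns null immediately when the queue is empty, so no Gatling thread ever blocks inside take(); the request name, URL, and pause duration are illustrative:

import io.gatling.core.Predef._
import io.gatling.http.Predef._

val consume = scenario("Consume stuff (non-blocking)")
  .forever(
    exec { session =>
      Option(Queue.ids.poll()) match {
        case Some(id) => session.set("my-id", id)
        case None     => session.remove("my-id") // nothing available this iteration
      }
    }
    .doIf(session => session.contains("my-id"))(
      exec(http("use id").get("/things/${my-id}")) // hypothetical call using the ID
    )
    .pause(1)
  )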

Update actor state only after all events are persisted

In the receive method of a persistent actor, I receive a bunch of events I want to persist, and only after all events are persisted, update my state again. How can I do that?
def receive: Receive = {
  ...
  case NewEvents(events) =>
    persist(events) { singleEvent =>
      // Update state using this single event
    }
    // After all events are persisted, do one more thing
}
Note that the persist() call is not blocking so I cannot put my code just after that.
Update: Why I need this
These new events come from an external web service. My persistent actor needs to store in its state the last event id, which will be used for the subsequent ws call when it receives a command. The thing is that these commands may come in concurrently, so I need some kind of locking system:
Received ws call command: stash the next commands until this one finishes (that is, to sum up, a boolean).
Received responses from ws: store them, update the state and save the last id, then execute another, single ws call for all the commands that are in the stash (I'm keeping the command senders to be able to respond to them all once done); otherwise stop stashing commands.
I haven't tried defer yet; my initial solution was to send myself a PersistEventsDone message. It works because the persist method will stash all incoming messages until all the event handlers are executed. If another command comes in during the process, it doesn't really matter whether it arrives before or after PersistEventsDone:
def receive: Receive = {
  ...
  case PersistEventsDone =>
    ...
  case NewEvents(events) =>
    persist(events) { singleEvent =>
      // Update state using this single event
    }
    self ! PersistEventsDone
}
defer is a bit weird in my case because it requires an event I don't need, but it still looks more natural than my solution.
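For reference, a sketch of that defer variant, assuming an Akka version where PersistentActor.defer (later deferAsync) is available; PersistEventsDone is reused here purely as the dummy event:

case NewEvents(events) =>
  persist(events) { singleEvent =>
    // Update state using this single event
  }
  defer(PersistEventsDone) { _ =>
    // Runs only after all the persist handlers above have completed,
    // giving the same guarantee as the self-send without an extra message
  }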

Idiomatic way to continuously poll a HTTP server and dispatch to an actor

I need to write a client that continuously polls a web server for commands. A response from the server indicates that a command is available (in which case the response contains the command) or an instruction that no command is available, and you should fire off a new request for incoming commands.
I'm trying to figure out how to do it using spray-client and Akka, and I can think of ways to do it, but none of them look like they're the idiomatic way to get it done. So the question is:
what's the most sensible way to have a couple of threads poll the same web server for incoming commands and hand the commands off to an actor?
This example uses spray-client, Scala futures, and the Akka scheduler.
Implementation varies depending on desired behavior (execute many requests in parallel at the same time, execute in different intervals, send responses to one actor to process one response at a time, send responses to many actors to process in parallel... etc).
This particular example shows how to execute many requests in parallel at the same time, and then do something with each result as it completes, without waiting for any other requests that were fired off at the same time to complete.
The code below will execute two HTTP requests every 5 seconds to 0.0.0.0:9000/helloWorld and 0.0.0.0:9000/goodbyeWorld in parallel.
Tested in Scala 2.10, Spray 1.1-M7, and Akka 2.1.2:
Actual scheduling code that handles periodic job execution:
// Schedule a periodic task to occur every 5 seconds, starting as soon
// as this schedule is registered
system.scheduler.schedule(initialDelay = 0 seconds, interval = 5 seconds) {
  val paths = Seq("helloWorld", "goodbyeWorld")
  // Perform an HTTP request to 0.0.0.0:9000/helloWorld and
  // 0.0.0.0:9000/goodbyeWorld in parallel
  // (possibly, depending on available cpu and cores)
  val retrievedData = Future.traverse(paths) { path =>
    val response = fetch(path)
    printResponse(response)
    response
  }
}
Helper methods / boilerplate setup:
// Helper method to fetch the body of an HTTP endpoint as a string
def fetch(path: String): Future[String] = {
  pipeline(HttpRequest(method = GET, uri = s"/$path"))
}

// Helper method for printing a future'd string asynchronously
def printResponse(response: Future[String]) {
  // Alternatively, do response.onComplete {...}
  for (res <- response) {
    println(res)
  }
}

// Spray client boilerplate
val ioBridge = IOExtension(system).ioBridge()
val httpClient = system.actorOf(Props(new HttpClient(ioBridge)))

// Register a "gateway" to a particular host for HTTP requests
// (0.0.0.0:9000 in this case)
val conduit = system.actorOf(
  props = Props(new HttpConduit(httpClient, "0.0.0.0", 9000)),
  name = "http-conduit"
)

// Create a simple pipeline to deserialize the request body into a string
val pipeline: HttpRequest => Future[String] = {
  sendReceive(conduit) ~> unmarshal[String]
}
Some notes:
Future.traverse is used for running futures in parallel (completion order is ignored). Chaining futures in a for comprehension executes them one at a time, waiting for each to complete before starting the next.
// Executes `oneThing`, executes `andThenAnother` when `oneThing` is complete,
// then executes `lastThing` when `andThenAnother` completes.
for {
  oneThing <- future1
  andThenAnother <- future2
  lastThing <- future3
} yield (...)
system will need to be replaced with your actual Akka actor system.
system.scheduler.schedule in this case is executing an arbitrary block of code every 5 seconds -- there is also an overloaded version for scheduling messages to be sent to an actorRef.
system.scheduler.schedule(
  initialDelay = 0 seconds,
  interval = 30 minutes,
  receiver = rssPoller, // an actorRef
  message = "doit" // the message to send to the actorRef
)
For your particular case, printResponse can be replaced with an actor send instead: anActorRef ! response (see the sketch after these notes).
The code sample doesn't take into account failures -- a good place to handle failures would be in the printResponse (or equivalent) method, by using a Future onComplete callback: response.onComplete {...}
Perhaps obvious, but spray-client can be replaced with another http client, just replace the fetch method and accompanying spray code.
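A sketch of the actor hand-off mentioned in the notes above; the CommandHandler actor is illustrative, not part of the original example, and the commented call assumes the same implicit ExecutionContext as the rest of the code:

import akka.actor.{Actor, Props}

// An actor that receives each command payload as it arrives
class CommandHandler extends Actor {
  def receive = {
    case body: String => println(s"Received command payload: $body")
  }
}

val commandHandler = system.actorOf(Props(new CommandHandler), "command-handler")

// Inside the scheduled block, instead of printResponse(response):
//   fetch(path).foreach(body => commandHandler ! body)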
Update: Full running code example is here:
git clone the repo, checkout the specified commit sha, $ sbt run, navigate to 0.0.0.0:9000, and watch the console where sbt run was executed -- it should print Hello World!\nGoodbye World! OR Goodbye World!\nHello World! (order is potentially random because of the parallel Future.traverse execution).
You can use HTML5 Server-Sent Events. It is implemented in many Scala frameworks. For example, in Xitrum the code looks like:
class SSE extends Controller {
  def sse = GET("/sse") {
    addConnectionClosedListener {
      // The connection has been closed
      // Unsubscribe from events, release resources etc.
    }
    future {
      respondEventSource("command1")
      //...
      respondEventSource("command2")
      //...
    }
  }
}
SSE is pretty simple and can be used in any software, not only in a browser.
Akka is integrated into Xitrum and we use it in a similar system. Xitrum uses Netty as its async server, which is also good at processing thousands of requests on 10-15 threads.
This way your client will keep a connection to the server and reconnect when the connection is broken.

How to use dispatch.json in a Lift project

I am confused about how to combine the JSON library in Dispatch and Lift to parse my JSON response.
I am apparently a Scala newbie.
I have written this code:
val status = {
  val httpPackage = http(Status(screenName).timeline)
  val json1 = httpPackage
  json1
}
Now I am stuck on how to parse the Twitter JSON response.
I've tried to use the JsonParser:
val status1 = JsonParser.parse(status)
but got this error:
<console>:38: error: overloaded method value parse with alternatives:
  (s: java.io.Reader)net.liftweb.json.JsonAST.JValue <and>
  (s: String)net.liftweb.json.JsonAST.JValue
 cannot be applied to (http.HttpPackage[List[dispatch.json.JsObject]])
       val status1 = JsonParser.parse(status)
I'm unsure and can't figure out what to do next in order to iterate through the data, extract it, and render it to my web page.
Here's another way to use Dispatch HTTP with Lift-JSON. This example fetches a JSON document from Google, parses all "titles" from it, and prints them.
import dispatch._
import net.liftweb.json.JsonParser
import net.liftweb.json.JsonAST._

object App extends Application {
  val http = new Http
  val req = :/("www.google.com") / "base" / "feeds" / "snippets" <<? Map("bq" -> "scala", "alt" -> "json")
  val json = http(req >- JsonParser.parse)

  val titles = for {
    JField("title", title) <- json
    JField("$t", JString(name)) <- title
  } yield name

  titles.foreach(println)
}
The error that you are getting back is letting you know that the type of status is neither a String nor a java.io.Reader. Instead, what you have is a list of already-parsed JSON responses, as Dispatch has already done the hard work of parsing the response into JSON. Dispatch has a very compact syntax which is nice when you are used to it, but it can be very obtuse initially, especially when you are first approaching Scala. Often you'll find that you have to dive into the source code of the library when you are first learning, to see what is going on. For instance, if you look into the dispatch-twitter source code, you can see that the timeline method actually performs a JSON extraction on the response:
def timeline = this ># (list ! obj)
What this method is defining is a Dispatch Handler which converts the Response object into a JsonResponse object, and then parses the response into a list of JSON objects. That's quite a bit going on in one line. You can see the definition of the ># operator in the JsHttp.scala file in the http+json Dispatch module. Dispatch defines lots of Handlers that do conversions behind the scenes into different types of data, which you can then pass to a block to work with. Check out the StdOut Walkthrough and the Common Tasks pages for some of the handlers, but you'll need to dive into the various modules' source code or Scaladoc to see what else is there.
All of this is a long way to get to what you want, which I believe is essentially this:
val statuses = http(Status(screenName).timeline)
statuses.map(Status.text).foreach(println _)
Only instead of doing a println, you can push it out to your web page in whatever way you want. Check out the Status object for some of the various pre-built extractors to pull information out of the status response.
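For instance, a rough sketch of binding the status texts into a Lift snippet (the snippet class and CSS selector are illustrative, not from the question):

import net.liftweb.util.Helpers._

class TimelineSnippet {
  def render = {
    val statuses = http(Status(screenName).timeline)
    // Repeat the template's <li> element once per status text
    "li *" #> statuses.map(Status.text)
  }
}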