I have found an [example][1] where akka-http is used with Source.single to make a request. Now I'd like to use Source.tick to implement polling requests which are executed every X seconds, like this:
import akka.http.scaladsl.model._
import scala.concurrent.duration._
val request: HttpRequest = RequestBuilding.Get(Uri("http://api.someSite.com"))
val source: Source[HttpRequest, Cancellable] = Source.tick(1.seconds, 1.seconds, request)
val sourceWithDest = source.via(Http().superPool())
However, I get a compile error (a type mismatch) in the last line which I can't resolve. Any ideas on what I am doing wrong, or suggestions for alternatives?
[1]: https://gist.github.com/steinybot/a1f79fe9a67693722164
As per the docs:
The Flow returned by Http().superPool(...) is very similar to the one
from the Host-Level Client-Side API, so the Using a Host Connection
Pool section also applies here.
And then
The “pool client flow” returned by
Http().cachedHostConnectionPool(...) has the following type:
Flow[(HttpRequest, T), (Try[HttpResponse], T), HostConnectionPool]
This is to give client-side code the possibility to implement some logic to match the original requests to the corresponding responses. Assuming you don't need this kind of behaviour in your case, you can always proceed by pairing each request with NotUsed before feeding it to the pool flow. E.g.
val sourceWithDest: Source[Try[HttpResponse], Cancellable] =
source.map(req ⇒ (req, NotUsed)).via(Http().superPool[NotUsed]()).map(_._1)
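Putting it together, a minimal polling sketch might look like the following (the ActorSystem name, the URL handling, and the bare-bones response handling are illustrative, not prescriptive):
import akka.NotUsed
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.client.RequestBuilding
import akka.http.scaladsl.model._
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.duration._
import scala.util.{Failure, Success}

object PollingExample extends App {
  // Akka 2.6+: a Materializer is derived implicitly from the ActorSystem
  implicit val system: ActorSystem = ActorSystem("polling-example") // illustrative name

  val request: HttpRequest = RequestBuilding.Get(Uri("http://api.someSite.com"))

  Source
    .tick(1.second, 1.second, request)
    .map(req => (req, NotUsed))        // superPool expects (HttpRequest, T) pairs
    .via(Http().superPool[NotUsed]())
    .runWith(Sink.foreach {
      case (Success(response), _) =>
        response.discardEntityBytes()  // always consume or discard the entity
        println(s"Polled, got ${response.status}")
      case (Failure(error), _) =>
        println(s"Polling request failed: $error")
    })
}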
I am trying to do asynchronous HTTP calls with Akka Streams.
This is what I tried:
Source(listEndpoints)
.mapAsync(20)(endpoint => Future(Await.result(request(HttpMethods.POST, endpoint, List(authHeader)), timeout)))
.runWith(Sink.seq[HttpResponse])
I am using akka-http within the request method and it returns a Future[HttpResponse].
I think I am abusing Future here. The code above would give me a Future[List[HttpResponse]] and I would have to use Await again to get a List[HttpResponse]. Is there a more elegant way to time out functions within mapAsync?
Assuming your request method at some point does
Http().singleRequest
to get a Future[HttpResponse], you can pass a timeout for the request through:
import akka.http.scaladsl.settings.ConnectionPoolSettings

// inside def request(...), you will probably need to add a timeout argument here
val request = ??? // Build the HttpRequest
Http().singleRequest(
  request = request,
  settings = ConnectionPoolSettings.default.withMaxConnectionLifetime(timeout)
)
Then your stream would just be
Source(listEndpoints)
  .mapAsync(20)(endpoint => request(...))
  .runWith(Sink.seq[HttpResponse])
and you'd only need to Await at the "end of the world" for the Future[List[HttpResponse]] to complete.
You can also change the default max connection lifetime with akka.http.host-connection-pool.max-connection-lifetime in application.conf
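For completeness, here is a sketch of how the pieces could fit together. The request helper below is hypothetical (it drops the header list from the question to stay short), and the endpoint list and timeout values are purely illustrative:
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
import akka.http.scaladsl.settings.ConnectionPoolSettings
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("client-example") // illustrative

// Hypothetical helper: builds the request and threads the timeout through
// the pool settings instead of blocking with Await.
def request(method: HttpMethod, endpoint: Uri, timeout: FiniteDuration): Future[HttpResponse] =
  Http().singleRequest(
    request = HttpRequest(method, endpoint),
    settings = ConnectionPoolSettings(system).withMaxConnectionLifetime(timeout)
  )

val listEndpoints = List(Uri("http://host/a"), Uri("http://host/b")) // illustrative

// The stream stays asynchronous end to end; mapAsync(20) bounds the parallelism.
val responses: Future[Seq[HttpResponse]] =
  Source(listEndpoints)
    .mapAsync(20)(endpoint => request(HttpMethods.POST, endpoint, 10.seconds))
    .runWith(Sink.seq[HttpResponse])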
I am trying to download a file from S3 using the following code:
wsClient
.url(url)
.withMethod("GET")
.withHttpHeaders(my_headers: _*)
.withRequestTimeout(timeout)
.stream()
.map {
case AhcWSResponse(underlying) =>
underlying.bodyAsBytes
}
When I run this I get the following exception:
akka.stream.StreamLimitReachedException: limit of 13 reached
Is this because I am using bodyAsBytes? What does this error mean? I also see this warning message, which is probably related:
blockingToByteString is a blocking and unsafe operation!
This happens because when you use stream(), you need to consume the source via bodyAsSource. It is important to do so, otherwise it would backpressure the connection. body and bodyAsBytes are implemented and do consume the source, but the implementor decided to let you know that you should have used execute() instead of stream() by limiting the body to 13 ByteStrings and a 50 ms timeout.
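A minimal sketch of the streaming approach, reusing the wsClient call from the question, assuming an implicit Materializer and ExecutionContext are in scope, and using an illustrative target path:
import java.nio.file.Paths
import akka.stream.IOResult
import akka.stream.scaladsl.FileIO
import scala.concurrent.Future

// Stream the body to disk instead of calling bodyAsBytes.
val downloaded: Future[IOResult] =
  wsClient
    .url(url)
    .withMethod("GET")
    .withHttpHeaders(my_headers: _*)
    .withRequestTimeout(timeout)
    .stream()
    .flatMap { response =>
      response.bodyAsSource.runWith(FileIO.toPath(Paths.get("/tmp/download.bin")))
    }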
You are getting a StreamLimitReachedException because the number of incoming elements is larger than the configured maximum.
val MAX_ALLOWED_SIZE = 100
// OK. Future will fail with a `StreamLimitReachedException`
// if the number of incoming elements is larger than max
val limited: Future[Seq[String]] =
mySource.limit(MAX_ALLOWED_SIZE).runWith(Sink.seq)
// OK. Collect up until max-th elements only, then cancel upstream
val ignoreOverflow: Future[Seq[String]] =
mySource.take(MAX_ALLOWED_SIZE).runWith(Sink.seq)
You can find more information about the streaming process here.
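Applied back to the wsClient example above, a sketch (same assumptions: implicit Materializer and ExecutionContext in scope) that folds the streamed chunks into a single ByteString, when the body is known to fit in memory:
import akka.util.ByteString
import scala.concurrent.Future

// Consume the streamed body chunk by chunk and concatenate it in memory.
val bodyBytes: Future[ByteString] =
  wsClient
    .url(url)
    .withMethod("GET")
    .withHttpHeaders(my_headers: _*)
    .withRequestTimeout(timeout)
    .stream()
    .flatMap(response => response.bodyAsSource.runFold(ByteString.empty)(_ ++ _))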
The Akka HTTP client API allows passing a Source[ChunkStreamPart, Any] to an HttpEntity.Chunked, which makes it possible to push a stream of ByteStrings into a single HTTP request with backpressure handling:
val data: Source[ByteString, Future[ImportantInformation]]
val chunkedEntity = HttpEntity.Chunked(
ContentTypes.`application/octet-stream`,
data.map(ChunkStreamPart(_)))
val request = HttpRequest(HttpMethods.POST,
Uri("http://targethost/path"), entity = chunkedEntity)
val downstreamResp : Future[HttpResponse] = Http().singleRequest(request)
Now, the source is consumed far down in the transport layer, and I can't find a way to access the Future[ImportantInformation] materialized value from my Source. Is there a way to work around this problem, i.e. either a method that would let me access the materialized value, or even some kind of Sink in the library that sinks a stream of ByteStrings into a single HTTP request?
You can use mapMaterializedValue on your source to access its materialized value.
val data: Source[ByteString, Future[ImportantInformation]]
val mappeddata =
data.mapMaterializedValue(future => processImportantInformation(future))
If you don't need the ImportantInformation but just want to know when the Source terminates, then you can use Source.watchTermination. This will materialize a Future[Done].
There is a good example found here.
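For instance, here is a minimal sketch (assuming an implicit ActorSystem is in scope and reusing the ImportantInformation placeholder from the question) that captures the materialized value through a Promise inside mapMaterializedValue:
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpEntity.ChunkStreamPart
import akka.http.scaladsl.model._
import akka.stream.scaladsl.Source
import akka.util.ByteString
import scala.concurrent.{Future, Promise}

val data: Source[ByteString, Future[ImportantInformation]] = ???

// Complete a promise when Akka HTTP materializes the source for the request.
val infoPromise = Promise[ImportantInformation]()
val chunkedEntity = HttpEntity.Chunked(
  ContentTypes.`application/octet-stream`,
  data
    .mapMaterializedValue { info => infoPromise.completeWith(info); info }
    .map(ChunkStreamPart(_))
)

val request = HttpRequest(HttpMethods.POST, Uri("http://targethost/path"), entity = chunkedEntity)
val downstreamResp: Future[HttpResponse] = Http().singleRequest(request)
val importantInformation: Future[ImportantInformation] = infoPromise.future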
I'm new to Play and Scala. I'm trying to build an application using Play and Scala. I need to make a POST call internally to get data from my server, but this should be synchronous. After getting the data from this POST request, I need to send that data to the front end. I've seen many resources, but all of them are asynchronous. Please help me.
I'm fetching data from a DB and then should return that data as the response.
The DB is on a remote server, not on the hosted server.
I think you should not block anyway.
def action = Action.async {
WS.url("some url")
.post(Json.toJson(Map("query"->query)))
.map { response =>
val jsonResponse = response.json
// in this place you have your response from your call
// now just do whatever you need to do with it,
// in this example I will return it as `Ok` result
Ok(jsonResponse)
}
}
Just map the result of your call and modify it, staying in the context of the Future, and use Action.async, which takes a Future.
If you really want to block, use Await.result(future, 5.seconds), importing
import scala.concurrent.duration._
import scala.concurrent.Await
See docs for Await here
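A blocking sketch along those lines, reusing the WS call from the answer above (the 5-second timeout is illustrative):
// Blocking variant, only if you really cannot stay asynchronous.
def action = Action {
  val response = Await.result(
    WS.url("some url").post(Json.toJson(Map("query" -> query))),
    5.seconds
  )
  Ok(response.json)
}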
All requests are asynchronous, but nothing prevents you from waiting for the response with await in your code.
val response = await(yourFutureRequest).body
The line written above will block until the future has finished.
I am confused about how to combine the JSON libraries in Dispatch and Lift to parse my JSON response.
I am apparently a Scala newbie.
I have written this code:
val status = {
val httpPackage = http(Status(screenName).timeline)
val json1 = httpPackage
json1
}
Now I am stuck on how to parse the Twitter JSON response.
I've tried to use the JsonParser:
val status1 = JsonParser.parse(status)
but got this error:
<console>:38: error: overloaded method value parse with alternatives:
(s: java.io.Reader)net.liftweb.json.JsonAST.JValue<and>
(s: String)net.liftweb.json.JsonAST.JValue
cannot be applied to (http.HttpPackage[List[dispatch.json.JsObject]])
val status1 = JsonParser.parse(status1)
I'm unsure and can't figure out what to do next in order to iterate through the data, extract it, and render it on my web page.
Here's another way to use Dispatch HTTP with Lift-JSON. This example fetches a JSON document from Google, parses all "title" values from it, and prints them.
import dispatch._
import net.liftweb.json.JsonParser
import net.liftweb.json.JsonAST._
object App extends Application {
val http = new Http
val req = :/("www.google.com") / "base" / "feeds" / "snippets" <<? Map("bq" -> "scala", "alt" -> "json")
val json = http(req >- JsonParser.parse)
val titles = for {
JField("title", title) <- json
JField("$t", JString(name)) <- title
} yield name
titles.foreach(println)
}
The error that you are getting back is letting you know that the type of status is neither a String nor a java.io.Reader. Instead, what you have is a list of already parsed JSON responses, as Dispatch has already done all of the hard work of parsing the response into JSON. Dispatch has a very compact syntax, which is nice when you are used to it, but it can be very obtuse initially, especially when you are first approaching Scala. Often you'll find that you have to dive into the source code of the library to see what is going on. For instance, if you look into the dispatch-twitter source code, you can see that the timeline method actually performs a JSON extraction on the response:
def timeline = this ># (list ! obj)
What this method is defining is a Dispatch Handler which converts the Response object into a JsonResponse object, and then parses the response into a list of JSON objects. That's quite a bit going on in one line. You can see the definition of the ># operator in the JsHttp.scala file in the http+json Dispatch module. Dispatch defines lots of Handlers that do a conversion behind the scenes into different types of data, which you can then pass to a block to work with. Check out the StdOut Walkthrough and the Common Tasks pages for some of the handlers, but you'll need to dive into the various modules' source code or Scaladoc to see what else is there.
All of this is a long way to get to what you want, which I believe is essentially this:
val statuses = http(Status(screenName).timeline)
statuses.map(Status.text).foreach(println _)
Only instead of doing a println, you can push it out to your web page in whatever way you want. Check out the Status object for some of the various pre-built extractors to pull information out of the status response.