How to programmatically call Route in Akka Http - scala

In Akka Http, it is possible to define the route system to manage a REST infrastructure in this way, as stated here: https://doc.akka.io/docs/akka-http/current/routing-dsl/overview.html
val route =
  get {
    pathSingleSlash {
      complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, "<html><body>Hello world!</body></html>"))
    } ~
    path("ping") {
      complete("PONG!")
    } ~
    path("crash") {
      sys.error("BOOM!")
    }
  }
Is there a way to programmatically invoke one of the routes from inside the same application, with something similar to the following statement?
val response = (new Invoker(route = route, method = "GET", url = "/ping", body = null)).Invoke()
where response would be the same result as a remote HTTP call to the service?
The above API is only meant to give an idea of what I have in mind; I would expect the ability to set the content type, headers, and so on.

In the end I managed to find the answer to my own question by digging a bit more into the Akka HTTP documentation.
As stated here: https://doc.akka.io/docs/akka-http/current/routing-dsl/routes.html, Route is a type defined as follows:
type Route = RequestContext => Future[RouteResult]
where RequestContext is a wrapper for the HttpRequest. But it is also true that a Route can be converted, implicitly or explicitly, to other function types, like this:
def asyncHandler(route: Route)(...): HttpRequest ⇒ Future[HttpResponse]
Hence, it is indeed possible to "call" a route by converting it to another function type and then simply passing an HttpRequest built ad hoc, receiving a Future containing the desired response. The conversion takes a little more time than the rest of the operations, but it is something that can be done while bootstrapping the application.
Note: the conversion requires these implicit values in scope, as stated here: https://doc.akka.io/docs/akka-http/current/introduction.html
implicit val system = ActorSystem("my-system")
implicit val materializer = ActorMaterializer()
implicit val executionContext = system.dispatcher
But these implicits are already mandatory for creating the service itself.
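Putting it together, a minimal sketch of the conversion might look like this (the /ping route is taken from the question; note that newer Akka HTTP releases, 10.2+, expose the same conversion as Route.toFunction instead of Route.asyncHandler):

```scala
import scala.concurrent.Future
import akka.actor.ActorSystem
import akka.http.scaladsl.model.{ HttpRequest, HttpResponse, Uri }
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.stream.ActorMaterializer

implicit val system = ActorSystem("my-system")
implicit val materializer = ActorMaterializer()
implicit val executionContext = system.dispatcher

val route: Route = path("ping") { complete("PONG!") }

// Convert the Route into a plain function once, at bootstrap time
val handler: HttpRequest => Future[HttpResponse] = Route.asyncHandler(route)

// "Call" the route in-process, without going over the network
val response: Future[HttpResponse] = handler(HttpRequest(uri = Uri("/ping")))
```

The handler can be reused for every in-process call, so the one-time conversion cost is paid only at startup.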

If this is for unit tests, you can use akka-http's test kit.
If this is for the application itself, you should not go through the route, you should just invoke the relevant services that the controller would use directly. If that is inconvenient (too much copy-pasta), refactor until it becomes possible.
As for the reason, I want my application to be wrapped inside both a web server (then use the route the “normal” way) and a daemon that responds to a message broker inbound message.
I have an application that does something like that actually.
But I came at this from the other way: I consider the broker message to be the "primary" format. It is "routed" inside of the consumer based purely on properties of the message itself (body contents, message key, topic name). The HTTP gateway is built on top of that: It has only a very limited number of API endpoints and routes (mostly for caller convenience, might as well have just a single one) and constructs a message that it then passes off to the message consumer (in my case, via the broker actually, so that the HTTP gateway does not even have to be on the same host as the consumer).
As a result, I don't have to "re-use" the HTTP route because that does not really do anything. All the shared processing logic happens at the lower level (inside the service, inside the consumer).

Related

How can I intercept HTTP client requests made with Akka HTTP?

I have several components that proxy requests to a REST service and deserialize the result as appropriate. For example, something like:
import scala.concurrent.Future
import akka.http.scaladsl.HttpExt
import akka.http.scaladsl.model.HttpRequest
import akka.http.scaladsl.unmarshalling.Unmarshal

trait UsersResource {
  val http: HttpExt

  def getUser(id: String): Future[User] =
    http.singleRequest(HttpRequest(...))
      .flatMap(r => Unmarshal(r.entity).to[User])

  def findUsers(query: Any): Future[List[User]]
}
I'd like to somehow proxy each of these requests, so that I can modify the request (e.g. to add headers) or modify the response. Specifically, I'm interested in adding code that can:
log and monitor the request/response
add cookies to each request
add auth to the request
transform the response body
Since each specific resource typically has these steps in common (and in some cases this logic is common across all resources), I'd like to change the http: HttpExt field somehow to apply these steps.
Is anything like this possible with Akka HTTP?
I came across this question, which seems to touch on part of what I'm asking (specifically the logging/monitoring part); however, the accepted answer appears to use HTTP directives on the server side, rather than at the client.
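The thread above does not include an answer, but one common pattern (a sketch, not from the original post) is to model the client as a plain function `HttpRequest => Future[HttpResponse]` and compose transformations around it; the wrapper names below are hypothetical:

```scala
import scala.concurrent.{ ExecutionContext, Future }
import akka.http.scaladsl.HttpExt
import akka.http.scaladsl.model.{ HttpRequest, HttpResponse }
import akka.http.scaladsl.model.headers.RawHeader

type Client = HttpRequest => Future[HttpResponse]

// Base client backed by the real connection pool
def baseClient(http: HttpExt): Client = http.singleRequest(_)

// Hypothetical wrapper: add a header to every outgoing request
def withHeader(name: String, value: String)(inner: Client): Client =
  req => inner(req.addHeader(RawHeader(name, value)))

// Hypothetical wrapper: log request and response around any client
def withLogging(inner: Client)(implicit ec: ExecutionContext): Client =
  req => {
    println(s"--> ${req.method.value} ${req.uri}")
    inner(req).map { resp => println(s"<-- ${resp.status}"); resp }
  }

// Compose: logging around a header-adding client
// val client: Client = withLogging(withHeader("X-Trace", "abc")(baseClient(Http())))
```

A resource trait could then depend on a `Client` instead of `HttpExt`, so the decorations stay invisible to the resource code.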

What is the best way to combine akka-http flow in a scala-stream flow

I have a use case where, after n Akka Streams flows, I have to take the result of one of them and make a request to an HTTP REST API.
The last flow before the HTTP request emits a String:
val stream1: Flow[T, String, NotUsed] = Flow[T].map(_.toString)
Now the HTTP request should be specified; I thought about something like:
val stream2: Flow[String, Future[HttpResponse], NotUsed] = Flow[String].map(param => Http().singleRequest(HttpRequest(uri = s"host.com/$param")))
and then combine it:
val stream3 = stream1 via stream2
Is this the best way to do it? Which approach would you recommend, and why? A couple of best-practice examples in the scope of this use case would be great!
Thanks in advance :)
Your implementation would create a new connection to "host.com" for each new param. This is unnecessary and prevents Akka from making certain optimizations. Under the hood, Akka actually keeps a connection pool around to reuse open connections, but I think it is better to specify your intentions in the code and not rely on the underlying implementation.
You can make a single connection as described in the documentation:
val connectionFlow: Flow[HttpRequest, HttpResponse, _] =
  Http().outgoingConnection("host.com")
To utilize this connection Flow you'll need to convert your String paths to HttpRequest objects:
import akka.http.scaladsl.model.Uri
import akka.http.scaladsl.model.Uri.Path

def pathToRequest(path: String) = HttpRequest(uri = Uri.Empty.withPath(Path(path)))

val reqFlow = Flow[String] map pathToRequest
And, finally, glue all the flows together:
val stream3 = stream1 via reqFlow via connectionFlow
This is the most common pattern for continuously querying the same server with different request objects.
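To actually run the combined flow you still need a Source and a Sink; a minimal sketch (the input values and the printing sink are assumptions for illustration, with reqFlow and connectionFlow as defined above):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

implicit val system = ActorSystem("client")
implicit val materializer = ActorMaterializer()
import system.dispatcher

// Hypothetical input values to turn into request paths
val params = Source(List("/a", "/b", "/c"))

val done = params
  .via(reqFlow)          // String  => HttpRequest
  .via(connectionFlow)   // HttpRequest => HttpResponse over one connection
  .runWith(Sink.foreach(resp => println(resp.status)))
```

Materializing the graph once and streaming all requests through it is what lets the single connection (or pool) be reused.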

Spray API blocked by bad Execution Context in Scala?

I use the Spray API to listen for requests from a server. A computation in one specific Scala class ends up blocking Spray from responding across the whole application. This is a slight simplification of the problem, but I can provide more info if needed.
class SomeClass(implicit execc: ExecutionContext) {
  implicit val x = ...
  val foo = Await.result(...someFunc(x))
}
I added this import and it resolved my issue:
import scala.concurrent.ExecutionContext.Implicits.global
Can anyone explain how or why this worked?
===================================================
Edit:
OuterClass instantiates SomeClass, but is itself never instantiated with the ExecutionContext parameter. It appears that it may be using the global execution context by default; is that why it is blocking?
class OuterClass(executor: ExecutionContext) {
  val s = new SomeClass
}

val x = (new OuterClass).someFunction
The Spray route handler is a single actor that receives HTTP requests from the spray-can IO layer and passes them to the route-handling function: essentially a partial function with no concurrency of its own. Thus if your route blocks, Spray will also block, and requests will queue up in the route handler actor's mailbox.
There are three ways to properly handle blocking request processing in a route: spawn an actor per request, complete with a Future of the response, or take the request's completion function and use it somewhere else, unblocking the route (search for a more detailed explanation if interested).
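The second option (completing with a Future) might look like the following sketch in spray-routing; longRunningComputation and the dedicated blocking pool are assumptions for illustration:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ ExecutionContext, Future }
import spray.routing.Directives._

// Hypothetical dedicated pool for blocking work, so the route handler actor stays free
implicit val blockingEc: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(8))

// Placeholder for the slow computation from the question
def longRunningComputation(): String = { Thread.sleep(1000); "done" }

val route = path("compute") {
  // Completing with a Future returns control immediately;
  // the response is sent when the future finishes
  complete(Future(longRunningComputation()))
}
```

The key point is that the blocking call never runs on the route handler's thread, so other requests keep flowing.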
I can't be sure which execution context was used in your case, but it must have been very limited in terms of allocated threads and/or shared between your Spray route handler and the long-running task. This would lead to them both running on the same thread.
If you didn't have any execution context imported explicitly, it must have been resolved implicitly from the enclosing scopes. You must have had one, since you take it as a constructor parameter. Check the implicits in scope to see which one it was; I'm curious myself whether it was one provided by Spray or something else.

Spray/Scala - Setting timeout on specific request

I currently have a REST call set up using spray pipeline. If I don't get a response within x number of seconds, I want it to timeout, but only on that particular call. When making a spray client pipeline request, is there a good way to specify a timeout specific to that particular call?
As far as I can tell, as of spray-client 1.3.1 there is no way to customise the pipe after it has been created.
However, you can create custom pipes for different types of requests.
It's worth mentioning that the timeouts defined below apply to the ask() calls, not to the network operations, but I guess this is what you need based on your description.
I found the following article very useful in understanding a bit better how the library works behind the scenes: http://kamon.io/teamblog/2014/11/02/understanding-spray-client-timeout-settings/
Disclaimer: I haven't actually tried this, but I guess it should work:
import scala.concurrent.duration._
import akka.util.Timeout

val timeout1 = Timeout(5.minutes)
val timeout2 = Timeout(1.minute)

val pipeline1: HttpRequest => Future[HttpResponse] =
  sendReceive(implicitly[ActorRefFactory], implicitly[ExecutionContext], timeout1)
val pipeline2: HttpRequest => Future[HttpResponse] =
  sendReceive(implicitly[ActorRefFactory], implicitly[ExecutionContext], timeout2)
and then you obviously use the appropriate pipe for each request
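Usage would then just be a matter of picking the right pipeline per call; the URLs here are hypothetical:

```scala
import scala.concurrent.Future
import spray.client.pipelining._  // brings in Get, sendReceive, etc.
import spray.http.HttpResponse

// Long-running report: use the pipeline with the 5-minute ask timeout
val report: Future[HttpResponse] = pipeline1(Get("http://host.com/report"))

// Quick health check: use the pipeline with the 1-minute ask timeout
val ping: Future[HttpResponse] = pipeline2(Get("http://host.com/ping"))
```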

What effect does using Action.async have, since Play uses Netty which is non-blocking

Since Netty is a non-blocking server, what effect does changing an action to use .async have?
def index = Action { ... }
versus
def index = Action.async { ... }
I understand that with .async you will get a Future[SimpleResult]. But since Netty is non-blocking, will Play do something similar under the covers anyway?
What effect will this have on throughput/scalability? Is this a hard question to answer where it depends on other factors?
The reason I am asking is that I have my own custom Action, and I wanted to reset the cookie timeout on every page request, so I am doing this, which is an async call:
object MyAction extends ActionBuilder[abc123] {
  def invokeBlock[A](request: Request[A], block: (abc123[A]) => Future[SimpleResult]) = {
    ...
    val result: Future[SimpleResult] = block(new abc123(..., result))
    result.map(_.withCookies(...))
  }
}
The takeaway from the above snippet is that I am using a Future[SimpleResult]; is this similar to calling Action.async, but from inside my Action itself?
I want to understand what effect this will have on my application design. It seems like, just for the ability to set my cookie on a per-request basis, I have changed from blocking to non-blocking. But I am confused: since Netty is non-blocking, maybe I haven't really changed anything, as it was already async?
Or have I simply created another async call embedded in another one?
Hoping someone can clarify this with some details on what effect this will have on performance/throughput.
def index = Action { ... } is non-blocking, you are right.
The purpose of Action.async is simply to make it easier to work with Futures in your actions.
For example:
def index = Action.async {
  val allOptionsFuture: Future[List[UserOption]] = optionService.findAll()
  allOptionsFuture map { options =>
    Ok(views.html.main(options))
  }
}
Here my service returns a Future, and to avoid dealing with extracting the result I just map it to a Future[SimpleResult] and Action.async takes care of the rest.
If my service was returning List[UserOption] directly I could just use Action.apply, but under the hood it would still be non-blocking.
If you look at Action source code, you can even see that apply eventually calls async:
https://github.com/playframework/playframework/blob/2.3.x/framework/src/play/src/main/scala/play/api/mvc/Action.scala#L432
I happened to come across this question. I like the answer from @vptheron, and I also want to share something I read in the book "Reactive Web Applications", which I think is also great.
The Action.async builder expects to be given a function of type Request => Future[Result]. Actions declared in this fashion are not much different from plain Action { request => ... } calls, the only difference is that Play knows that Action.async actions are already asynchronous, so it doesn’t wrap their contents in a future block.
That’s right — Play will by default schedule any Action body to be executed asynchronously against its default web worker pool by wrapping the execution in a future. The only difference between Action and Action.async is that in the second case, we’re taking care of providing an asynchronous computation.
It also presented one sample:
def listFiles = Action { implicit request =>
  val files = new java.io.File(".").listFiles
  Ok(files.map(_.getName).mkString(", "))
}
which is problematic, given its use of the blocking java.io.File API.
Here the java.io.File API is performing a blocking I/O operation, which means that one of the few threads of Play's web worker pool will be hijacked while the OS figures out the list of files in the execution directory. This is the kind of situation you should avoid at all costs, because it means that the worker pool may run out of threads.
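One common mitigation (a sketch, not from the original answer) is to shift such blocking work onto a dedicated execution context, so Play's default worker pool is untouched; the pool size and setup below are assumptions:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ ExecutionContext, Future }
import play.api.mvc._
import play.api.mvc.Results._

// Hypothetical dedicated pool sized for blocking file I/O
implicit val blockingIoEc: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(16))

def listFiles = Action.async { implicit request =>
  Future {
    // The blocking call now runs on blockingIoEc, not Play's worker pool
    val files = new java.io.File(".").listFiles
    Ok(files.map(_.getName).mkString(", "))
  }
}
```

In a real Play application the same idea is usually expressed with a custom dispatcher configured in application.conf and looked up via the actor system.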
The reactive audit tool, available at https://github.com/octo-online/reactive-audit, aims to point out blocking calls in a project.
Hope it helps, too.