I have an Akka HTTP route from which I'm calling the Bing Search API. I want to add some extra records to the result before sending the response to the client. I'm using circe to handle JSON.
Here is the code which doesn't work but illustrates the idea:
val extraData = Map("key1" -> "value1", "key2" -> "val2").asJson
val query = URLEncoder.encode(q, "utf8")
val responseFuture: Future[HttpResponse] = Http().singleRequest(
  HttpRequest(uri = "https://api.cognitive.microsoft.com/bing/v7.0/search?q=" + query)
    .withHeaders(RawHeader("Ocp-Apim-Subscription-Key", k1)))
val alteredResponse = responseFuture.map { response =>
  response.entity.toStrict(2 seconds) flatMap { e =>
    e.dataBytes
      .runFold(ByteString.empty) { case (acc, b) => acc ++ b }
      .map(k => parse(k.utf8String) match {
        case Left(failure) => "Can't parse"
        case Right(json) => Try {
          json.hcursor.withFocus {
            _.mapObject(x => x.add("extraData", extraData))
          }
        }
      })
  }
}
complete(alteredResponse)
Is it a good approach to take? How can I get it to work?
I ended up using flatMap and creating HttpResponse object manually:
val responseFuture: Future[HttpResponse] = Http().singleRequest(
  HttpRequest(uri = "https://api.cognitive.microsoft.com/bing/v7.0/search?q=" + query)
    .withHeaders(RawHeader("Ocp-Apim-Subscription-Key", k1)))
  .flatMap { response =>
    response.entity.toStrict(2 seconds) flatMap { e =>
      e.dataBytes
        .runFold(ByteString.empty) { case (acc, b) => acc ++ b }
        .map(k => parse(k.map(_.toChar).mkString) match {
          case Left(failure) =>
            HttpResponse(
              StatusCodes.OK,
              List(),
              HttpEntity("NO RESULTS".map(_.toByte).toArray),
              HttpProtocol("HTTP/1.1")
            )
          case Right(json) =>
            json.hcursor.withFocus {
              _.mapObject(x => x.add("extraData", extraData))
            }.top match {
              case Some(jsn) =>
                HttpResponse(
                  StatusCodes.OK,
                  List(headers.`Content-Type`(ContentType(MediaTypes.`application/json`, () => HttpCharsets.`UTF-8`))),
                  HttpEntity(jsn.noSpaces.toCharArray.map(_.toByte)),
                  HttpProtocol("HTTP/1.1")
                )
            }
        })
    }
  }
complete(responseFuture)
When converting the response bytes from Bing, I tried to use the .utf8String function, but it messed up the JSON, so I ended up parsing the bytes with .map(_.toChar).mkString.
If there is a better way of writing this code, it would be nice to see it. Thanks.
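UPDATE: a cleaner variant I'm considering (untested), assuming the mangled utf8String output was caused by a compressed response body (worth checking the Content-Encoding header). Gzip.decodeMessage leaves uncompressed messages untouched, Unmarshal decodes the entity using its charset, and completing with a String-based entity avoids the manual byte/char juggling. It relies on the same implicits (actor system, materializer, execution context) and values (query, k1, extraData) as the code above:
import akka.http.scaladsl.coding.Gzip
import akka.http.scaladsl.model._
import akka.http.scaladsl.unmarshalling.Unmarshal
import io.circe.parser.parse

val altered: Future[HttpResponse] =
  Http().singleRequest(
    HttpRequest(uri = "https://api.cognitive.microsoft.com/bing/v7.0/search?q=" + query)
      .withHeaders(RawHeader("Ocp-Apim-Subscription-Key", k1)))
    .map(Gzip.decodeMessage(_))                          // no-op if the body isn't gzip-encoded
    .flatMap(resp => Unmarshal(resp.entity).to[String])  // charset-aware decoding of the body
    .map { body =>
      parse(body)
        .map(_.mapObject(_.add("extraData", extraData))) // same circe transformation as above
        .fold(
          _ => HttpResponse(StatusCodes.BadGateway, entity = HttpEntity("Can't parse")),
          json => HttpResponse(entity = HttpEntity(ContentTypes.`application/json`, json.noSpaces))
        )
    }

complete(altered)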
Related
In my Play Framework application I want to wait until my future is completed and then return it to the view.
My code looks like:
def getContentComponentUsageSearch: Action[AnyContent] = Action.async { implicit request =>
  println(request.body.asJson)
  request.body.asJson.map(_.validate[StepIds] match {
    case JsSuccess(stepIds, _) =>
      println("VALIDE SUCCESS -------------------------------")
      val fList: List[Seq[Future[ProcessTemplatesModel]]] = List() :+ stepIds.s.map(s => {
        processTemplateDTO.getProcessStepTemplate(s.processStep_id).flatMap(stepTemplate => {
          processTemplateDTO.getProcessTemplate(stepTemplate.get.processTemplate_id.get).map(a => {
            a.get
          })
        })
      })
      fList.map(u => {
        val a: Seq[Future[ProcessTemplatesModel]] = u
        Future.sequence(a).map(s => {
          println(s)
        })
      })
      Future.successful(Ok(Json.obj("id" -> "")))
    case JsError(_) =>
      println("NOT VALID -------------------------------")
      Future.successful(BadRequest("Process Template not create client"))
    case _ => Future.successful(BadRequest("Process Template create client"))
  }).getOrElse(Future.successful(BadRequest("Process Template create client")))
}
The println(s) is printing the finished stuff. But how can I wait until it is complete and then return it to the view?
thanks in advance
UPDATE:
also tried this:
val process = for {
  fList: List[Seq[Future[ProcessTemplatesModel]]] <- List() :+ stepIds.s.map(s => {
    processTemplateDTO.getProcessStepTemplate(s.processStep_id).flatMap(stepTemplate => {
      processTemplateDTO.getProcessTemplate(stepTemplate.get.processTemplate_id.get).map(a => {
        a.get
      })
    })
  })
} yield (fList)
process.map({ case (fList) =>
  Ok(Json.obj(
    "processTemplate" -> fList
  ))
})
but then I got this:
UPDATE:
My problem is that the futures in fList do not complete before an OK result is returned
The code in the question didn't seem compilable, so here is an untested, very rough sketch that hopefully provides enough inspiration for finding the correct solution:
def getContentComponentUsageSearch: Action[AnyContent] = Action.async { implicit req =>
  req.body.asJson.map(_.validate[StepIds] match {
    case JsSuccess(stepIds, _) => {
      // Create a list of futures
      val listFuts: List[Future[ProcessTemplatesModel]] = (stepIds.s.map(s => {
        processTemplateDTO.
          getProcessStepTemplate(s.processStep_id).
          flatMap { stepTemplate =>
            processTemplateDTO.
              getProcessTemplate(stepTemplate.get.processTemplate_id.get).
              map(_.get)
          }
      })).toList
      // Sequence all the futures into a single future of list
      val futList = Future.sequence(listFuts)
      // Flat map this single future to the OK result
      for {
        listPTMs <- futList
      } yield {
        // Apparently some debug output?
        listPTMs foreach println
        Ok(Json.obj("id" -> ""))
      }
    }
    case JsError(_) => {
      println("NOT VALID -------------------------------")
      Future.successful(BadRequest("Process Template not create client"))
    }
    case _ => Future.successful(BadRequest("Process Template create client"))
  }).getOrElse(Future.successful(BadRequest("Process Template create client")))
}
If I understood your question correctly, what you wanted was to make sure that all futures in the list complete before you return the OK. Therefore I have first created a List[Future[...]]:
val listFuts: List[Future[ProcessTemplatesModel]] = // ...
Then I've combined all the futures into a single future of list, which completes only when every element has completed:
// Sequence all the futures into a single future of list
val futList = Future.sequence(listFuts)
Then I've used a for-comprehension to make sure that the listPTMs finishes computation before the OK is returned:
// Flat map this single future to the OK result
for {
  listPTMs <- futList
} yield {
  // Apparently some debug output?
  listPTMs foreach println
  Ok(Json.obj("id" -> ""))
}
The for-yield (equivalent to map here) is what establishes the finish-this-before-doing-that behavior, so that listPTMs is fully evaluated before OK is constructed.
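Desugared, that for-yield is simply a map over the single future:
futList.map { listPTMs =>
  listPTMs.foreach(println)
  Ok(Json.obj("id" -> ""))
}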
In order to wait until a Future is complete, it is most common to do one of two things:
Use a for-comprehension, which does a bunch of mapping and flatmapping behind the scenes before doing anything in the yield section (see Andrey's comment for a more detailed explanation). A simplified example:
def index: Action[AnyContent] = Action.async {
  val future1 = Future(1)
  val future2 = Future(2)
  for {
    f1 <- future1
    f2 <- future2
  } yield {
    println(s"$f1 + $f2 = ${f1 + f2}") // prints 3
    Ok(views.html.index("Home"))
  }
}
Map inside a Future:
def index: Action[AnyContent] = Action.async {
  val future1 = Future(1)
  future1.map { f1 =>
    println(s"$f1")
    Ok(views.html.index("Home"))
  }
}
If there are multiple Futures:
def index: Action[AnyContent] = Action.async {
  val future1 = Future(1)
  val future2 = Future(2)
  future1.flatMap { f1 =>
    future2.map { f2 =>
      println(s"$f1 + $f2 = ${f1 + f2}")
      Ok(views.html.index("Home"))
    }
  }
}
When you have multiple Futures though, the argument for for-yield comprehensions gets much stronger as it gets easier to read. Also, you are probably aware, but if you work with futures you may need the following imports:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
Hi, in my Scala application I want to return a Seq[Model] to my frontend.
def getContentComponentUsageSearch: Action[AnyContent] = Action.async { implicit request =>
  println(request.body.asJson)
  request.body.asJson.map(_.validate[StepIds] match {
    case JsSuccess(stepIds, _) =>
      println("VALIDE SUCCESS -------------------------------")
      var templates: Seq[Future[Option[ProcessTemplatesModel]]] = Future.sequence(stepIds.s.map(s => {
        processTemplateDTO.getProcessStepTemplate(s.processStep_id).flatMap(stepTemplate => {
          templates :+ processTemplateDTO.getProcessTemplate(stepTemplate.get.processTemplate_id.get)
        })
      }))
      templates.map(done => {
        Future.sequence(templates).map(a => {
          Ok(Json.obj("id" -> a))
        })
      })
    case JsError(_) =>
      println("NOT VALID -------------------------------")
      Future.successful(BadRequest("Process Template not create client"))
    case _ => Future.successful(BadRequest("Process Template create client"))
  }).getOrElse(Future.successful(BadRequest("Process Template create client")))
}
I need to wait until it's finished and then return. What would be a good way to achieve this?
thanks in advance.
UPDATE:
At the moment I'm trying this:
val fList: List[Future[ProcessTemplatesModel]] +: stepIds.s.map(s => {
  processTemplateDTO.getProcessStepTemplate(s.processStep_id).map(stepTemplate => {
    processTemplateDTO.getProcessTemplate(stepTemplate.get.processTemplate_id.get).map(a => {
      a.get
    })
  })
})
Future.successful(Ok(Json.obj("id" -> fList)))
The issue in this case is the +:, I think.
I think you only have to rewrite this part:
templates.map(done => {
  Future.sequence(templates).map(a => {
    Ok(Json.obj("id" -> a))
  })
})
If you would do something like this:
val futSeqOpt: Future[Seq[Option[ProcessTemplatesModel]]] = Future.sequence(templates)
val futSeq: Future[Seq[ProcessTemplatesModel]] = futSeqOpt.map(_.flatten) // drop the Nones (or substitute a default per element)
futSeq.map(seq => Ok(Json.toJson(seq)))
it should work. (I only added the types for demonstration).
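For completeness, an untested rough sketch of what the whole success branch could look like without the var (assuming an implicit Writes[ProcessTemplatesModel] is available so Json.toJson works on the sequence):
case JsSuccess(stepIds, _) =>
  val templateFuts: Seq[Future[Option[ProcessTemplatesModel]]] =
    stepIds.s.map { s =>
      processTemplateDTO.getProcessStepTemplate(s.processStep_id).flatMap { stepTemplate =>
        processTemplateDTO.getProcessTemplate(stepTemplate.get.processTemplate_id.get)
      }
    }
  Future.sequence(templateFuts).map { seqOpt =>
    Ok(Json.toJson(seqOpt.flatten)) // drop the Nones and serialize the rest
  }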
I'd like to use Akka Streams to pipe some JSON web services together, and I'd like to know the best approach for making a stream from an HTTP request and streaming its chunks to another.
Is there a way to define such a graph and run it instead of the code below?
So far I've tried to do it this way, though I'm not sure it is actually streaming yet:
override def receive: Receive = {
  case GetTestData(p, id) =>
    // Get the data and pipe it to ourselves through a message, as recommended in
    // https://doc.akka.io/docs/akka-http/current/client-side/request-level.html
    http.singleRequest(HttpRequest(uri = uri.format(p, id)))
      .pipeTo(self)
  case HttpResponse(StatusCodes.OK, _, entity, _) =>
    val initialRes = entity.dataBytes
      .via(JsonFraming.objectScanner(Int.MaxValue))
      .map(bStr => ChunkStreamPart(bStr.utf8String))
    // Forward the response to the next job and pipe the request response to a dedicated actor
    http.singleRequest(HttpRequest(
      method = HttpMethods.POST,
      uri = "googl.cm/flow",
      entity = HttpEntity.Chunked(ContentTypes.`application/json`, initialRes)
    ))
  case resp @ HttpResponse(code, _, _, _) =>
    log.error("Request to test job failed, response code: " + code)
    // Discard the flow to avoid backpressure
    resp.discardEntityBytes()
  case _ => log.warning("Unexpected message in TestJobActor")
}
This should be a graph equivalent to your receive:
Http()
  .cachedHostConnectionPool[Unit](uri.format(p, id))
  .collect {
    case (Success(HttpResponse(StatusCodes.OK, _, entity, _)), _) =>
      val initialRes = entity.dataBytes
        .via(JsonFraming.objectScanner(Int.MaxValue))
        .map(bStr => ChunkStreamPart(bStr.utf8String))
      Some(initialRes)
    case (Success(resp @ HttpResponse(code, _, _, _)), _) =>
      log.error("Request to test job failed, response code: " + code)
      // Discard the flow to avoid backpressure
      resp.discardEntityBytes()
      None
  }
  .collect {
    case Some(initialRes) => initialRes
  }
  .map { initialRes =>
    (HttpRequest(
      method = HttpMethods.POST,
      uri = "googl.cm/flow",
      entity = HttpEntity.Chunked(ContentTypes.`application/json`, initialRes)
    ),
    ())
  }
  .via(Http().superPool[Unit]())
The type of this is Flow[(HttpRequest, Unit), (Try[HttpResponse], Unit), HostConnectionPool], where the Unit is a correlation ID you can use to know which request corresponds to which arriving response, and the HostConnectionPool materialized value can be used to shut down the connection to the host. Only cachedHostConnectionPool gives you back this materialized value; superPool probably handles this on its own (though I haven't checked). Anyway, I recommend you just use Http().shutdownAllConnectionPools() upon shutdown of your application unless you need otherwise for some reason. In my experience, it's much less error prone (e.g. no forgetting the shutdown).
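To actually drive it, here is a minimal, untested sketch (assuming the composite flow above is assigned to a val named pipeline, and the usual implicit materializer and execution context are in scope): feed it the first request, collect the second-hop responses, then shut the pools down.
// `pipeline` = the composite flow defined above (hypothetical name)
val done: Future[Seq[(Try[HttpResponse], Unit)]] =
  Source.single((HttpRequest(uri = "/"), ())) // path-relative request; the pool already targets the host
    .via(pipeline)
    .runWith(Sink.seq)

done.onComplete(_ => Http().shutdownAllConnectionPools()) // as suggested above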
You can also use the Graph DSL to express the same graph:
val graph = Flow.fromGraph(GraphDSL.create() { implicit b =>
  import GraphDSL.Implicits._
  val host1Flow = b.add(Http().cachedHostConnectionPool[Unit](uri.format(p, id)))
  val host2Flow = b.add(Http().superPool[Unit]())
  val toInitialRes = b.add(
    Flow[(Try[HttpResponse], Unit)]
      .collect {
        case (Success(HttpResponse(StatusCodes.OK, _, entity, _)), _) =>
          val initialRes = entity.dataBytes
            .via(JsonFraming.objectScanner(Int.MaxValue))
            .map(bStr => ChunkStreamPart(bStr.utf8String))
          Some(initialRes)
        case (Success(resp @ HttpResponse(code, _, _, _)), _) =>
          log.error("Request to test job failed, response code: " + code)
          // Discard the flow to avoid backpressure
          resp.discardEntityBytes()
          None
      }
  )
  val keepOkStatus = b.add(
    Flow[Option[Source[HttpEntity.ChunkStreamPart, Any]]]
      .collect {
        case Some(initialRes) => initialRes
      }
  )
  val toOtherHost = b.add(
    Flow[Source[HttpEntity.ChunkStreamPart, Any]]
      .map { initialRes =>
        (HttpRequest(
          method = HttpMethods.POST,
          uri = "googl.cm/flow",
          entity = HttpEntity.Chunked(ContentTypes.`application/json`, initialRes)
        ),
        ())
      }
  )
  host1Flow ~> toInitialRes ~> keepOkStatus ~> toOtherHost ~> host2Flow
  FlowShape(host1Flow.in, host2Flow.out)
})
I am trying to consume a paginated REST call, currently doing something like:
def depaginateGetEnvironmentUuids(uri: Uri, filters: Seq[BasicNameValuePair], pageNumber: Int = 1): Future[Seq[UUID]] = {
  val paginationFilters = Seq(new BasicNameValuePair("per_page", "1000"), new BasicNameValuePair("page", pageNumber.toString))
  serviceManager.getAssignment(uri, filters ++ paginationFilters: _*).flatMap { cP =>
    (cP.getStatus, cP.getBody.parseJson) match {
      case (HttpStatus.SC_OK, JsArray.empty) =>
        Future(Seq.empty[UUID])
      case (HttpStatus.SC_OK, json) =>
        depaginateGetEnvironmentUuids(uri, filters, pageNumber + 1).map(_ ++ json.convertTo[Seq[AssignmentView]].map(se => se.environment.uuid).distinct)
      case (_, error) =>
        Future.failed(new Throwable(s"call to retrieve environment assignment: $error"))
    }
  }
}
Is there a better way of handling a RESTful service endpoint with pagination?
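For reference, one alternative I've been wondering about is driving the pages with Akka Streams' Source.unfoldAsync instead of recursion. A rough, untested sketch, assuming the same serviceManager and JSON handling as above:
def environmentUuidSource(uri: Uri, filters: Seq[BasicNameValuePair]): Source[UUID, NotUsed] =
  Source
    .unfoldAsync(1) { pageNumber =>
      val paginationFilters = Seq(
        new BasicNameValuePair("per_page", "1000"),
        new BasicNameValuePair("page", pageNumber.toString))
      serviceManager.getAssignment(uri, filters ++ paginationFilters: _*).map { cP =>
        (cP.getStatus, cP.getBody.parseJson) match {
          case (HttpStatus.SC_OK, JsArray.empty) => None // no more pages, stop unfolding
          case (HttpStatus.SC_OK, json) =>
            Some((pageNumber + 1, json.convertTo[Seq[AssignmentView]].map(_.environment.uuid).distinct))
          case (_, error) =>
            throw new RuntimeException(s"call to retrieve environment assignment: $error")
        }
      }
    }
    .mapConcat(_.toList) // flatten each page of UUIDs into the stream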
I'm trying to combine two Play Framework Enumerators, merging values that come through with the same key. For the most part it works, except that the Map used to keep previous values that don't yet have a match gets lost each time a match is found and a Done Iteratee is returned.
Is there a way to provide the state to the next invocation of step after a Done has been returned?
Any examples I've found thus far all seem to be around grouping consecutive values together and then passing the whole grouping along, and none on grouping some arbitrary values from the stream and only passing specific values along once grouped.
Ideally once the match is made it'll send the matched values along.
What I've gotten to thus far (pretty much based off of Creating a time-based chunking Enumeratee):
def virtualSystemGrouping[E](system: ParentSystem): Iteratee[Detail, Detail] = {
  def step(state: Map[String, Detail])(input: Input[Detail]): Iteratee[Detail, Detail] = {
    input match {
      case Input.EOF => Done(null, Input.EOF)
      case Input.Empty => Cont[Detail, Detail](i => step(state)(i))
      case Input.El(e) => {
        if (!system.isVirtual) Done(e)
        if (state.exists(k => k._1.equals(e.name))) {
          val other = state(e.name)
          // ??? should have a; state - e.name
          // And pass the new state and merged value out.
          Done(e + other)
        } else {
          Cont[Detail, Detail](i => step(state + (e.name -> e))(i))
        }
      }
    }
  }
  Cont(step(Map[String, Detail]()))
}
The calling of this looks like;
val systems: List[ParentSystem] = getSystems()
val start = Enumerator.empty[Detail]
val send = systems.foldLeft(start) { (b, p) =>
  b interleave Concurrent.unicast[Detail] { channel =>
    implicit val timeout = Timeout(1 seconds)
    val actor = SystemsActor.lookupActor(p.name + "/details")
    actor map {
      case Some(a) => a ! SendDetailInformation(channel)
      case None => channel.eofAndEnd
    } recover {
      case t: Throwable => channel.eofAndEnd
    }
  }
} &> Enumeratee.grouped(virtualSystemGrouping(parent)) |>> Iteratee.foreach(e => output.push(e))
send.onComplete(t => output.eofAndEnd)
The one method that I've been able to come up with that works is to use a Concurrent.unicast and pass the channel into the combining function. I'm sure there is a way to create an Iteratee/Enumerator that does the work all in one nice neat package, but that is eluding me for the time being.
Updated combining function;
def virtualSystemGrouping[E](system: ParentSystem, output: Channel): Iteratee[Detail, Detail] = {
  def step(state: Map[String, Detail])(input: Input[Detail]): Iteratee[Detail, Detail] = {
    input match {
      case Input.EOF =>
        state.values.foreach(output.push) // push any remaining unmatched values (mapValues would be lazy here)
        output.eofAndEnd
        Done(null, Input.EOF)
      case Input.Empty => Cont[Detail, Detail](i => step(state)(i))
      case Input.El(e) =>
        if (!system.isVirtual) {
          // Non-virtual systems: pass the value straight through
          output.push(e)
          Done(e, Input.Empty)
        } else if (state.exists(k => k._1.equals(e.name))) {
          val other = state(e.name)
          output.push(e + other)
          Cont[Detail, Detail](i => step(state - e.name)(i))
        } else {
          Cont[Detail, Detail](i => step(state + (e.name -> e))(i))
        }
    }
  }
  Cont(step(Map[String, Detail]()))
}
Here any combined values are pushed into the output channel and then subsequently processed.
The usage of this looks like the following;
val systems: List[ParentSystem] = getSystems(parent)
val start = Enumerator.empty[Detail]
val concatDetail = systems.foldLeft(start) { (b, p) =>
  b interleave Concurrent.unicast[Detail] { channel =>
    implicit val timeout = Timeout(1 seconds)
    val actor = SystemsActor.lookupActor(p.name + "/details")
    actor map {
      case Some(a) => a ! SendRateInformation(channel)
      case None => channel.eofAndEnd
    } recover {
      case t: Throwable => channel.eofAndEnd
    }
  }
}
val combinedDetail = Concurrent.unicast[Detail] { channel =>
  concatDetail &> Enumeratee.grouped(virtualSystemGrouping(parent, channel)) |>> Iteratee.ignore
}
val send = combinedDetail |>> Iteratee.foreach(e => output.push(e))
send.onComplete(t => output.eofAndEnd)
Very similar to the original, except now the call to the combining function is done within the unicast onStart block (where channel is defined). concatDetail is the Enumerator created from the interleaved results of the child systems. This is fed through the system grouping function, which in turn pushes any combined results (and remaining results at EOF) through the provided channel.
The combinedDetail Enumerator is then taken in and pushed through to the upstream output channel.
EDIT:
The virtualSystemGrouping can be generalized as;
def enumGroup[E >: Null, K, M](
    key: (E) => K,
    merge: (E, Option[E]) => M,
    output: Concurrent.Channel[M]
): Iteratee[E, E] = {
  def step(state: Map[K, E])(input: Input[E]): Iteratee[E, E] = {
    input match {
      case Input.EOF =>
        state.values.foreach(v => output.push(merge(v, None))) // Push along any remaining values (foreach, since mapValues is lazy).
        output.eofAndEnd()
        Done(null, Input.EOF)
      case Input.Empty => Cont[E, E](i => step(state)(i))
      case Input.El(e) =>
        if (state.contains(key(e))) {
          output.push(merge(e, state.get(key(e))))
          Cont[E, E](i => step(state - key(e))(i))
        } else {
          Cont[E, E](i => step(state + (key(e) -> e))(i))
        }
    }
  }
  Cont(step(Map[K, E]()))
}
With a call such as;
Enumeratee.grouped(
  enumGroup(
    k => k.name,
    (e1, e2) => e2.fold(e1)(v => e1 + v),
    channel
  )
)