Let's imagine a proxy application based on akka-streams and akka-http which, acting as a TCP server, accepts messages in some home-grown format, turns them into HTTP requests, asks some other HTTP server, converts the HTTP response back to the home-grown format and replies to the client. Simplified code below:
// as Client part
val connPool = Http().cachedHostConnectionPool[CustHttpReq](someHost, somePort)

val asClientFlow = Flow[CustHttpReq]
  .via(connPool)
  .map(procHttpResp)

def procHttpResp(p: (Try[HttpResponse], CustHttpReq)): Future[ByteString] = {
  val (rsp, src) = p
  rsp match {
    case Success(response: HttpResponse) =>
      for (buf <- cvtToHomeGrown(response, src))
        yield buf
    case Failure(ex) => ...
  }
}

def cvtToHomeGrown(rsp: HttpResponse, src: CustHttpReq): Future[ByteString] = {
  rsp.entity.dataBytes.runWith(Sink.fold(ByteString.empty)(_ ++ _))
    .map(cvtToHomeGrownActually) // has signature String => ByteString
}
// as Server part
val parseAndAskFlow = Flow[ByteString]
  .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 1024)) // frame limit value assumed
  .map(buf => cvtToCustHttpReq(buf))
  .via(asClientFlow) // plug in the asClient part; the problem is here

val asServerConn: Source[IncomingConnection, Future[ServerBinding]] = Tcp().bind("localhost", port)
asServerConn.runForeach(conn => conn.handleWith(parseAndAskFlow))
The problem is that conn.handleWith requires a Flow[ByteString, ByteString, _], but the HTTP client code (rsp.entity.dataBytes...) returns a Future[ByteString], so parseAndAskFlow ends up with type Flow[ByteString, Future[ByteString], _] and I have no idea where best to complete that future. I suspect blocking with Await somewhere would ruin the nice async processing, since all of these are streams, but as it stands the code does not compile.
Use mapAsync instead of map to change the type of asClientFlow to Flow[CustHttpReq, ByteString, NotUsed]:
val asClientFlow: Flow[CustHttpReq, ByteString, NotUsed] =
Flow[CustHttpReq]
.via(connPool)
.mapAsync(1)(procHttpResp)
Then parseAndAskFlow can be of type Flow[ByteString, ByteString, NotUsed]:
val parseAndAskFlow: Flow[ByteString, ByteString, NotUsed] =
Flow[ByteString]
.via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 1024)) // frame limit value assumed
.map(cvtToCustHttpReq)
.via(asClientFlow)
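mapAsync(1) resolves one response future at a time. If the home-grown protocol tolerates several in-flight requests per connection, the parallelism argument can be raised; mapAsync still emits its results in input order, so replies stay aligned with the framed requests. A sketch (the value 4 and the name asClientFlowParallel are arbitrary):
// Sketch: the same flow with higher parallelism (4 is an arbitrary choice);
// mapAsync preserves input order, so each reply still matches its request.
val asClientFlowParallel: Flow[CustHttpReq, ByteString, NotUsed] =
  Flow[CustHttpReq]
    .via(connPool)
    .mapAsync(4)(procHttpResp)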
I am a bit lost using the akka-http libraries to create a server. The communication I need to establish is as follows:
There is one server and n clients (n < 5)
Sometimes the clients send a command to the server, the server evaluates/delegates the command and answers the client
There are constant broadcast messages from the server to all clients
Given that, my server needs to manage multiple 'sessions' that are connected via a WebSocket.
Here is my websocket endpoint:
path("socket") {
handleWebSocketMessages(listen())
}
And here is the listen() method:
// stores offers to broadcast to all clients
private var offers: List[TextMessage => Unit] = List()
def listen(): Flow[Message, Message, NotUsed] = {
val inbound: Sink[Message, Any] = Sink.foreach(m => /* handle the message */) // (*)
val outbound: Source[Message, SourceQueueWithComplete[Message]] =
Source.queue[Message](16, OverflowStrategy.fail)
Flow.fromSinkAndSourceMat(inbound, outbound)((_, outboundMat) => {
offers ::= outboundMat.offer
NotUsed
})
}
def sendText(text: String): Unit = {
for (connection <- offers) connection(TextMessage.Strict(text))
}
With this approach I can register multiple clients and answer them using the sendText(text: String) method. But there is one big problem: how do I answer only a specific client after I have evaluated its command? (see (*))
[Another thing that's bugging me is that offers is a var, which seems wrong when programming in a purely FP way, but I can accept that if the rest is working]
Edit:
To elaborate, I basically need to be able to implement a method looking like this:
def onMessageReceived(m: Message, answer: TextMessage => Unit): Unit = {
val response: TextMessage = handleMessage(m)
answer(response)
}
But I cannot figure out where to call this method in my WebSocket Flow.
I am not really sure if that is the way to go, but this seems to be working:
var actors: List[ActorRef] = Nil
private def wsFlow(implicit materializer: ActorMaterializer): Flow[ws.Message, ws.Message, NotUsed] = {
  // per-connection actor-backed source: everything sent to `actor` is pushed to this client
  val (actor, source) = Source.actorRef[ws.Message](10, akka.stream.OverflowStrategy.dropTail)
    .toMat(BroadcastHub.sink[ws.Message])(Keep.both)
    .run()
  actors = actor :: actors
  val wsHandler: Flow[ws.Message, ws.Message, NotUsed] =
    Flow[ws.Message]
      .map {
        case TextMessage.Strict(tm) => handleMessage(actor, tm) // answer this specific client
        case _ => TextMessage.Strict("Ignored message!")
      }
      .merge(source) // mix in server-initiated messages for this client
  wsHandler
}
def broadcast(msg: String): Unit = {
actors.foreach(_ ! TextMessage.Strict(msg))
}
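handleMessage is not shown above; a minimal sketch of one possible shape (the names and behaviour here are purely illustrative), where the returned message answers only the client whose command is being handled, and the per-connection actor can be kept around for later pushes to that same client:
// Hypothetical handleMessage: the return value is sent back (via the .map above)
// only to the client that issued the command; `self` is that client's
// connection actor and can be used to push further messages to it later.
def handleMessage(self: ActorRef, command: String): ws.Message =
  command match {
    case "ping" => TextMessage.Strict("pong")
    case other  =>
      // evaluate/delegate the command here, then answer only this client
      TextMessage.Strict(s"ack: $other")
  }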
I'd like to use akka-streams in order to pipe some JSON web services together. I'd like to know the best approach to turn one HTTP response into a stream and feed its chunks into another request.
Is there a way to define such a graph and run it instead of the code below?
So far I have tried it this way, though I am not sure it is actually streaming yet:
override def receive: Receive = {
case GetTestData(p, id) =>
// Get the data and pipes it to itself through a message as recommended
// https://doc.akka.io/docs/akka-http/current/client-side/request-level.html
http.singleRequest(HttpRequest(uri = uri.format(p, id)))
.pipeTo(self)
case HttpResponse(StatusCodes.OK, _, entity, _) =>
val initialRes = entity.dataBytes.via(JsonFraming.objectScanner(Int.MaxValue)).map(bStr => ChunkStreamPart(bStr.utf8String))
// Forward the response to next job and pipes the request response to dedicated actor
http.singleRequest(HttpRequest(
method = HttpMethods.POST,
uri = "googl.cm/flow",
entity = HttpEntity.Chunked(ContentTypes.`application/json`,
initialRes)
))
case resp # HttpResponse(code, _, _, _) =>
log.error("Request to test job failed, response code: " + code)
// Discard the flow to avoid backpressure
resp.discardEntityBytes()
case _ => log.warning("Unexpected message in TestJobActor")
}
This should be a graph equivalent to your receive:
Http()
.cachedHostConnectionPool[Unit](uri.format(p, id))
.collect {
case (Success(HttpResponse(StatusCodes.OK, _, entity, _)), _) =>
val initialRes = entity.dataBytes
.via(JsonFraming.objectScanner(Int.MaxValue))
.map(bStr => ChunkStreamPart(bStr.utf8String))
Some(initialRes)
case (Success(resp # HttpResponse(code, _, _, _)), _) =>
log.error("Request to test job failed, response code: " + code)
// Discard the flow to avoid backpressure
resp.discardEntityBytes()
None
}
.collect {
case Some(initialRes) => initialRes
}
.map { initialRes =>
(HttpRequest(
method = HttpMethods.POST,
uri = "googl.cm/flow",
entity = HttpEntity.Chunked(ContentTypes.`application/json`, initialRes)
),
())
}
.via(Http().superPool[Unit]())
The type of this is Flow[(HttpRequest, Unit), (Try[HttpResponse], Unit), HostConnectionPool], where the Unit is a correlation ID you can use if you want to know which request corresponds to which arriving response, and the HostConnectionPool materialized value can be used to shut down the connection to the host. Only cachedHostConnectionPool gives you back this materialized value; superPool probably handles this on its own (though I haven't checked). Anyway, I recommend you just use Http().shutdownAllConnectionPools() upon shutdown of your application unless you need otherwise for some reason. In my experience it's much less error prone (e.g. no forgotten shutdowns).
You can also use the GraphDSL to express the same graph:
val graph = Flow.fromGraph(GraphDSL.create() { implicit b =>
import GraphDSL.Implicits._
val host1Flow = b.add(Http().cachedHostConnectionPool[Unit](uri.format(p, id)))
val host2Flow = b.add(Http().superPool[Unit]())
val toInitialRes = b.add(
Flow[(Try[HttpResponse], Unit)]
.collect {
case (Success(HttpResponse(StatusCodes.OK, _, entity, _)), _) =>
val initialRes = entity.dataBytes
.via(JsonFraming.objectScanner(Int.MaxValue))
.map(bStr => ChunkStreamPart(bStr.utf8String))
Some(initialRes)
case (Success(resp # HttpResponse(code, _, _, _)), _) =>
log.error("Request to test job failed, response code: " + code)
// Discard the flow to avoid backpressure
resp.discardEntityBytes()
None
}
)
val keepOkStatus = b.add(
Flow[Option[Source[HttpEntity.ChunkStreamPart, Any]]]
.collect {
case Some(initialRes) => initialRes
}
)
val toOtherHost = b.add(
Flow[Source[HttpEntity.ChunkStreamPart, Any]]
.map { initialRes =>
(HttpRequest(
method = HttpMethods.POST,
uri = "googl.cm/flow",
entity = HttpEntity.Chunked(ContentTypes.`application/json`, initialRes)
),
())
}
)
host1Flow ~> toInitialRes ~> keepOkStatus ~> toOtherHost ~> host2Flow
FlowShape(host1Flow.in, host2Flow.out)
})
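Either variant is just a Flow, so running it amounts to attaching a Source of correlated requests and a Sink. A minimal sketch, reusing uri, p, id and log from the question and assuming an implicit materializer is in scope (the Unit value is the correlation ID discussed above):
// Sketch: drive the composed flow once and log what the second host answered.
Source.single((HttpRequest(uri = uri.format(p, id)), ()))
  .via(graph) // or the non-DSL flow shown first
  .runForeach {
    case (Success(resp), _) =>
      log.info(s"final response: ${resp.status}")
      resp.discardEntityBytes()
    case (Failure(ex), _) =>
      log.error(ex, "request to the second host failed")
  }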
I have the following route in my app:
val myRoute = Route { context =>
  Source.single(getRequest(context))
    .via(flow(server, port))
    .runWith(Sink.head)
    .flatMap { r =>
      // Add cookie to response depending on certain preconditions
      context.complete(r)
    }
}
My problem is that I can't use the out-of-the-box setCookie directive (or can I?) because I am inside a route, so I will get a type error. I thought about manually adding a header element to the HttpResponse (r in the example above), but that is quite cumbersome.
Any ideas how I can easily add the Set-Cookie header element?
setCookie Directive
A Route is just a type definition: (RequestContext) => Future[RouteResult]. Therefore you can use function composition to add a cookie to the HttpResponse coming from the downstream service.
First create a forwarder that utilizes the predefined flow:
val forwardRequest : HttpRequest => Future[HttpResponse] =
  request =>
    Source
      .single(request)
      .via(flow(server, port))
      .runWith(Sink.head)
Then compose that function with getRequest and a converter from HttpResponse to RouteResult:
val queryExternalService : Route =
getRequest andThen forwardRequest andThen (_ map RouteResult.Complete)
Finally, set the cookie:
val httpCookie : HttpCookie = ??? //not specified in question
val myRoute : Route = setCookie(httpCookie)(queryExternalService)
Manual Addendum in Route
You can manually set the cookie:
val updateHeaders : (HttpHeader) => (HttpResponse) => HttpResponse =
  (newHeader) =>
    (httpResponse) =>
      httpResponse withHeaders {
        Some(httpResponse.headers.indexWhere(_.name equalsIgnoreCase newHeader.name))
          .filter(_ >= 0)
          .map(index => httpResponse.headers.updated(index, newHeader))
          .getOrElse(newHeader +: httpResponse.headers)
      }
...
.runWith(Sink.head).flatMap { response =>
context complete updateHeaders(`Set-Cookie`(httpCookie))(response)
}
Pure Flow
You can even avoid using Routes altogether by passing a Flow to HttpExt#bindAndHandle:
val myRoute : Flow[HttpRequest, HttpResponse, _] =
flow(server, port) map updateHeaders(`Set-Cookie`(httpCookie))
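For completeness, a sketch of serving that Flow directly; the localhost/8080 values here are assumed:
// Sketch: hand the Flow straight to the server binding, bypassing the routing DSL.
Http().bindAndHandle(myRoute, "localhost", 8080)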
Server code:
object EchoService {
def route: Route = path("ws-echo") {
get {
handleWebSocketMessages(flow)
}
} ~ path("send-client") {
get {
sourceQueue.map(q => {
println(s"Offering message from server")
q.offer(BinaryMessage(ByteString("ta ta")))
} )
complete("Sent from server successfully")
}
}
val (source, sourceQueue) = {
val p = Promise[SourceQueue[Message]]
val s = Source.queue[Message](100, OverflowStrategy.backpressure).mapMaterializedValue(m => {
p.trySuccess(m)
m
})
(s, p.future)
}
val flow =
Flow.fromSinkAndSourceMat(Sink.ignore, source)(Keep.right)
}
Client code:
object Client extends App {
implicit val actorSystem = ActorSystem("akka-system")
implicit val flowMaterializer = ActorMaterializer()
val config = actorSystem.settings.config
val interface = config.getString("app.interface")
val port = config.getInt("app.port")
// print each incoming strict text message
val printSink: Sink[Message, Future[Done]] =
Sink.foreach {
case message: TextMessage.Strict =>
println(message.text)
case _ => println(s"received unknown message format")
}
val (source, sourceQueue) = {
val p = Promise[SourceQueue[Message]]
val s = Source.queue[Message](100, OverflowStrategy.backpressure).mapMaterializedValue(m => {
p.trySuccess(m)
m
})
(s, p.future)
}
val flow =
Flow.fromSinkAndSourceMat(printSink, source)(Keep.right)
val (upgradeResponse, sourceClosed) =
Http().singleWebSocketRequest(WebSocketRequest("ws://localhost:8080/ws-echo"), flow)
val connected = upgradeResponse.map { upgrade =>
// just like a regular http request we can get 404 NotFound,
// with a response body, that will be available from upgrade.response
if (upgrade.response.status == StatusCodes.SwitchingProtocols || upgrade.response.status == StatusCodes.OK ) {
Done
} else {
throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
}
}
connected.onComplete(println)
}
When I hit http://localhost:8080/send-client I see messages coming to the client, but after a while, if I try to send to the client again, I don't see any messages on the client side. I also tried source.concatMat(Source.maybe)(Keep.right), but no luck :(
Edit: I tested with a JS client; somehow the connection/flow gets closed on the server end. Is there any way to prevent this, and how can I listen for this event when using the akka-http WebSocket client?
Hi,
The reason why it does not stay connected is that all HTTP
connections have an idle-timeout enabled by default, to keep the
system from leaking connections if clients disappear without any
signal.
One way to overcome this limitation (and actually my recommended
approach) is to inject keep-alive messages on the client side
(messages that the server otherwise ignores, but which inform the
underlying HTTP server that the connection is still live).
You can override the idle-timeout in the HTTP server configuration
with a larger value, but I don't recommend that.
If you are using stream based clients, injecting heartbeats when
necessary is as simple as calling keepAlive and providing it a time
interval and a factory for the message you want to inject; see the
Flow.keepAlive scaladoc:
http://doc.akka.io/api/akka/2.4.7/index.html#akka.stream.scaladsl.Flow
That combinator will make sure that no period longer than the given
interval stays silent; it will inject elements to keep this contract
when necessary (and will not inject anything if there is enough
background traffic).
-Endre
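As an aside, the configuration override mentioned above (explicitly not the recommended approach) would look roughly like the sketch below; the system name and the 10 minute value are arbitrary:
// Sketch: raise the server-side idle timeout via configuration when building the ActorSystem.
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

val config = ConfigFactory
  .parseString("akka.http.server.idle-timeout = 10 minutes")
  .withFallback(ConfigFactory.load())
implicit val system = ActorSystem("ws-server", config)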
Thank you Endre :), working snippet:
// on client side
val (source, sourceQueue) = {
val p = Promise[SourceQueue[Message]]
val s = Source.queue[Message](Int.MaxValue, OverflowStrategy.backpressure).mapMaterializedValue(m => {
p.trySuccess(m)
m
}).keepAlive(FiniteDuration(1, TimeUnit.SECONDS), () => TextMessage.Strict("Heart Beat"))
(s, p.future)
}
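A follow-up note: with the server flow above the heartbeats end up in Sink.ignore and are simply discarded. If the server-side sink actually processed messages, the injected "Heart Beat" frames could be filtered out first; a sketch of such a sink (matching the literal from the snippet above):
// Sketch: drop the client's keep-alive frames before real message handling.
val handlingSink: Sink[Message, Future[Done]] =
  Flow[Message]
    .filterNot {
      case TextMessage.Strict("Heart Beat") => true
      case _                                => false
    }
    .toMat(Sink.foreach(m => println(s"handling: $m")))(Keep.right)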
I'm trying to create an endpoint on my Akka HTTP server which tells the user its IP address using an external service (I know this can be done much more easily, but I'm doing it as a challenge).
The code that doesn't make use of streams at the uppermost layer is this:
implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
val requestHandler: HttpRequest => Future[HttpResponse] = {
case HttpRequest(GET, Uri.Path("/"), _, _, _) =>
Http().singleRequest(HttpRequest(GET, Uri("http://checkip.amazonaws.com/"))).flatMap { response =>
response.entity.dataBytes.runFold(ByteString(""))(_ ++ _) map { string =>
HttpResponse(entity = HttpEntity(MediaTypes.`text/html`,
"<html><body><h1>" + string.utf8String + "</h1></body></html>"))
}
}
case _: HttpRequest =>
Future(HttpResponse(404, entity = "Unknown resource!"))
}
Http().bindAndHandleAsync(requestHandler, "localhost", 8080)
and it is working fine. However, as a challenge, I wanted to limit myself to only using streams (no Futures).
This is the layout I thought I'd use for this kind of approach:
Source[Request] -> Flow[Request, Request] -> Flow[Request, Response] -> Flow[Response, Response], and to accommodate the 404 route, also Source[Request] -> Flow[Request, Response]. Now, if my Akka Streams knowledge serves me well, I need to use Flow.fromGraph for such a thing; however, this is where I'm stuck.
With a Future I can do an easy map and flatMap for the various endpoints, but with streams that would mean dividing up the Flow into multiple Flows, and I'm not quite sure how I'd do that. I thought about using UnzipWith and Options, or a generic Broadcast.
Any help on this subject would be much appreciated.
I don't know if this would be necessary: http://doc.akka.io/docs/akka-stream-and-http-experimental/2.0-M2/scala/stream-customize.html
You do not need to use Flow.fromGraph. Instead, a single Flow that uses flatMapConcat will work:
//an outgoing connection flow
val checkIPFlow = Http().outgoingConnection("checkip.amazonaws.com")
//converts the final html String to an HttpResponse
//wraps the final html ByteString in an HttpResponse
def byteStrToResponse(byteStr : ByteString) =
  HttpResponse(entity = HttpEntity(ContentTypes.`text/html(UTF-8)`, byteStr))
val reqResponseFlow = Flow[HttpRequest].flatMapConcat {
case HttpRequest(GET, Uri.Path("/"), _, _, _) =>
Source.single(HttpRequest(GET, Uri("http://checkip.amazonaws.com/")))
.via(checkIPFlow)
.mapAsync(1)(_.entity.dataBytes.runFold(ByteString(""))(_ ++ _))
.map("<html><body><h1>" + _.utf8String + "</h1></body></html>")
.map(ByteString.apply)
.map(byteStrToResponse)
case _ =>
Source.single(HttpResponse(404, entity = "Unknown resource!"))
}
This Flow can then be used to bind to incoming requests:
Http().bindAndHandle(reqResponseFlow, "localhost", 8080)
And all without Futures...
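To exercise the endpoint, a quick throwaway client could look like the sketch below (it does use a Future, but only in the test client, and it assumes the implicit system, materializer and dispatcher from the surrounding code are in scope):
// Sketch: request the bound endpoint and print the returned HTML,
// which should contain the external IP inside the <h1> element.
import scala.concurrent.duration._

Http()
  .singleRequest(HttpRequest(uri = "http://localhost:8080/"))
  .flatMap(_.entity.toStrict(3.seconds))
  .foreach(strict => println(strict.data.utf8String))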