How to log incoming requests and responses? - scala

I am using Akka HTTP and would like to log every incoming request and outgoing result. I know that there is a logRequestResult directive, but how do I use it? And is it the right one for my purpose?

Yes, this is the directive you are looking for, and I agree - the official documentation is a bit hard to grasp.
Here is what an endpoint with logRequestResult looks like:
val requestHandler: Route = logRequestResult("req/resp", Logging.InfoLevel) {
  handleExceptions(errorHandler) {
    endpointRoutes
  }
}

def start()(implicit actorSystem: ActorSystem,
            actorMaterializer: ActorMaterializer): Future[Http.ServerBinding] =
  Http().bindAndHandle(
    handler = requestHandler,
    interface = host,
    port = port)
Notice that you can choose a generic prefix for each request-response entry, e.g. req/resp, as well as the logging level at which the request-response log is emitted, e.g. Logging.InfoLevel.
The above example produces log lines similar to the one below:
[your-actor-system-akka.actor.default-dispatcher-19] INFO akka.actor.ActorSystemImpl - req/resp: Response for
Request : HttpRequest(HttpMethod(GET),http://<host>/<path>,List(Host: <host>, Connection: close: <function1>),HttpEntity.Strict(none/none,ByteString()),HttpProtocol(HTTP/1.1))
Response: Complete(HttpResponse(200 OK,List(),HttpEntity.Strict(text/plain; charset=UTF-8,OK),HttpProtocol(HTTP/1.1)))
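If you want more control over what each log line contains, there is also a LoggingMagnet-based overload of the directive. The sketch below is only a rough illustration (the one-line message format is my own choice; endpointRoutes is the same route value as above), and it discards the LoggingAdapter in favour of println - you could just as well use the adapter the magnet hands you:

import akka.http.scaladsl.model.HttpRequest
import akka.http.scaladsl.server.{Route, RouteResult}
import akka.http.scaladsl.server.RouteResult.{Complete, Rejected}
import akka.http.scaladsl.server.directives.{DebuggingDirectives, LoggingMagnet}

// Log one compact line per request: method, URI and the resulting status (or rejections).
def printRequestAndResult(req: HttpRequest)(result: RouteResult): Unit = result match {
  case Complete(response)   => println(s"${req.method.value} ${req.uri} -> ${response.status}")
  case Rejected(rejections) => println(s"${req.method.value} ${req.uri} -> rejected: $rejections")
}

val verboselyLoggedRoutes: Route =
  DebuggingDirectives.logRequestResult(LoggingMagnet(_ => printRequestAndResult _)) {
    endpointRoutes
  }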
Happy hakking :)

Related

Terminate Akka-Http Web Socket connection asynchronously

Web Socket connections in Akka Http are treated as an Akka Streams Flow. This seems like it works great for basic request-reply, but it gets more complex when messages should also be pushed out over the websocket. The core of my server looks kind of like:
lazy val authSuccessMessage = Source.fromFuture(someApiCall)

lazy val messageFlow = requestResponseFlow
  .merge(updateBroadcastEventSource)

lazy val handler = codec
  .atop(authGate(authSuccessMessage))
  .join(messageFlow)

handleWebSocketMessages {
  handler
}
Here, codec is a (de)serialization BidiFlow and authGate is a BidiFlow that processes an authorization message and prevents outflow of any messages until authorization succeeds. Upon success, it sends authSuccessMessage as a reply. requestResponseFlow is the standard request-reply pattern, and updateBroadcastEventSource mixes in async push messages.
I want to be able to send an error message and terminate the connection gracefully in certain situations, such as bad authorization, someApiCall failing, or a bad request processed by requestResponseFlow. So basically, it seems like I want to be able to asynchronously complete messageFlow with one final message, even though its other constituent flows are still alive.
Figured out how to do this using a KillSwitch.
Updated version
The old version had the problem that it didn't seem to work when triggered by a BidiFlow stage higher up in the stack (such as my authGate). I'm not sure exactly why, but modeling the shutoff as a BidiFlow itself, placed further up the stack, resolved the issue.
val shutoffPromise = Promise[Option[OutgoingWebsocketEvent]]()

/**
 * Shutoff valve for the connection. It is triggered when `shutoffPromise`
 * completes, and sends a final optional termination message if that
 * promise resolves with one.
 */
val shutoffBidi = {
  val terminationMessageSource = Source
    .maybe[OutgoingWebsocketEvent]
    .mapMaterializedValue(_.completeWith(shutoffPromise.future))

  val terminationMessageBidi = BidiFlow.fromFlows(
    Flow[IncomingWebsocketEventOrAuthorize],
    Flow[OutgoingWebsocketEvent].merge(terminationMessageSource)
  )

  val terminator = BidiFlow
    .fromGraph(KillSwitches.singleBidi[IncomingWebsocketEventOrAuthorize, OutgoingWebsocketEvent])
    .mapMaterializedValue { killSwitch =>
      shutoffPromise.future.foreach { _ => println("Shutting down connection"); killSwitch.shutdown() }
    }

  terminationMessageBidi.atop(terminator)
}
Then I apply it just inside the codec:
val handler = codec
  .atop(shutoffBidi)
  .atop(authGate(authSuccessMessage))
  .join(messageFlow)
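To actually trigger the shutoff, all that is needed is to complete shutoffPromise. The helper below is only an illustrative sketch; its name and the place you would call it from are not part of the code above:

// Hypothetical helper: completing the promise emits the optional final
// message downstream and fires the KillSwitch, closing the connection.
def terminateConnection(finalMessage: Option[OutgoingWebsocketEvent]): Unit =
  shutoffPromise.trySuccess(finalMessage)

// Close silently:
terminateConnection(None)

// Or push one last OutgoingWebsocketEvent (e.g. an error event) before closing:
// terminateConnection(Some(errorEvent))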
Old version
val shutoffPromise = Promise[Option[OutgoingWebsocketEvent]]()

/**
 * Shutoff valve for the flow of outgoing messages. It is triggered when
 * `shutoffPromise` completes, and sends a final optional termination
 * message if that promise resolves with one.
 */
val shutoffFlow = {
  val terminationMessageSource = Source
    .maybe[OutgoingWebsocketEvent]
    .mapMaterializedValue(_.completeWith(shutoffPromise.future))

  Flow
    .fromGraph(KillSwitches.single[OutgoingWebsocketEvent])
    .mapMaterializedValue { killSwitch =>
      shutoffPromise.future.foreach(_ => killSwitch.shutdown())
    }
    .merge(terminationMessageSource)
}
Then handler looks like:
val handler = codec
  .atop(authGate(authSuccessMessage))
  .join(messageFlow via shutoffFlow)

How to use Flink streaming to process a data stream of a complex protocol

I'm using Flink Streaming to handle data traffic logs in a 3G network (GPRS Tunnelling Protocol), and I'm having trouble synthesizing the information belonging to a single user session.
For example: how do I match the start and end of one session? I don't know whether Flink streaming is suited to handling complex protocols like that.
P.S.:
We capture the data exchanged between the SGSN and GGSN in a 3G network (using the GTP protocol with GTP-C/U messages). A session is started when the SGSN sends a CreateReq(TEID, Seq, IMSI, TEID_dl, TEID_data_dl) message and the GGSN responds with a CreateRsp(TEID_dl, Seq, TEID_ul, TEID_data_ul) message.
After the session is established, other GTP-C messages (e.g. UpdateReq, DeleteReq) sent from the SGSN to the GGSN use TEID_ul and the response messages use TEID_dl; GTP-U messages use TEID_data_ul (SGSN -> GGSN) and TEID_data_dl (GGSN -> SGSN). GTP-U messages contain information such as the AppID (facebook, twitter, web), URL, ...
Finally, I want to handle the continuous log data stream and correlate the GTP-C and GTP-U messages of the same user (IMSI) to produce a report.
I've tried this:
val sessions = createReqs.connect(createRsps).flatMap(new CoFlatMapFunction[CreateReq, CreateRsp, Session] {

  // holds CreateReqs indexed by (teid_dl, seq)
  private val createReqs = mutable.HashMap.empty[(String, String), CreateReq]
  // holds CreateRsps indexed by (teid, seq)
  private val createRsps = mutable.HashMap.empty[(String, String), CreateRsp]

  override def flatMap1(req: CreateReq, out: Collector[Session]): Unit = {
    val key = (req.teid_dl, req.header.seqNum)
    val oRsp = createRsps.get(key)
    if (!oRsp.isEmpty) {
      val rsp = oRsp.get
      println("OK")
      out.collect(new Session(rsp.header.time, req.imsi, req.teid_dl, req.teid_ddl, rsp.teid_upl, rsp.teid_dupl, req.rat, req.apn))
      createRsps.remove(key)
    } else {
      createReqs.put(key, req)
    }
  }

  override def flatMap2(rsp: CreateRsp, out: Collector[Session]): Unit = {
    val key = (rsp.header.teid, rsp.header.seqNum)
    val oReq = createReqs.get(key)
    if (!oReq.isEmpty) {
      val req = oReq.get
      out.collect(new Session(rsp.header.time, req.imsi, req.teid_dl, req.teid_ddl, rsp.teid_upl, rsp.teid_dupl, req.rat, req.apn))
      createReqs.remove(key)
    } else {
      createRsps.put(key, rsp)
    }
  }
}).print()
This code always returns an empty result, even though the input stream contains CreateRsp and CreateReq messages of the same session. They appear very close together (within 1 second). When I debug, oReq.isEmpty == true every time.
What am I doing wrong?
To be honest, it is a bit difficult to see through the telco specifics here, but if I understand correctly you have at least three streams, the first two being the CreateReq and the CreateRsp streams.
To detect the establishment of a session I would use the ConnectedDataStream abstraction to share state between the two aforementioned streams. Check out this example for usage or the related Flink docs.
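For illustration only, here is a rough sketch of that wiring, using the field names from your snippet; SessionMatcher is a hypothetical name for a class containing exactly the flatMap1/flatMap2 logic you already wrote, and the API names assume a 1.x Scala API (older versions call keyBy groupBy). Keying both inputs by the identifiers they are matched on also ensures a CreateReq and its CreateRsp reach the same parallel subtask, so the state held in the CoFlatMapFunction can see both sides:

import org.apache.flink.streaming.api.scala._

// Key both streams by the pair of fields they are matched on, then connect
// them and apply the same matching logic as in the snippet above.
val sessions = createReqs
  .connect(createRsps)
  .keyBy(req => (req.teid_dl, req.header.seqNum),
         rsp => (rsp.header.teid, rsp.header.seqNum))
  .flatMap(new SessionMatcher)  // hypothetical wrapper around your CoFlatMapFunction

sessions.print()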
Is this what you are trying to achieve?

scalaz-stream how to implement `ask-then-wait-reply` tcp client

I want to implement a client app that first sends a request to the server and then waits for its reply (similar to HTTP).
My client process may be:
val topic = async.topic[ByteVector]
val client = topic.subscribe
Here is the API:
trait Client {
  val incoming = tcp.connect(...)(client)
  val reqBus = topic.publish

  def ask(req: ByteVector): Task[Throwable \/ ByteVector] = {
    (tcp.writes(req).flatMap(_ => tcp.reads(1024))).to(reqBus)
    ???
  }
}
Then, how do I implement the remaining part of ask?
Usually, the implementation is done by publishing the message via a sink and then awaiting some sort of reply on some source, like your topic.
Actually, we use this idiom a lot in our code:
def reqRply[I, O, O2](src: Process[Task, I], sink: Sink[Task, I], reply: Process[Task, O])(pf: PartialFunction[O, O2]): Process[Task, O2] = {
  merge.mergeN(Process(reply, (src to sink).drain)).collectFirst(pf)
}
Essentially, this first hooks into the reply stream to await any resulting O confirming that our request was sent. Then we publish the message I and consult pf for any incoming O to be eventually translated into O2, and then terminate.
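Purely as an illustration, ask from the question could then be sketched roughly like this, publishing through reqBus and treating the first message seen on the client subscription as the reply. How replies are correlated with requests is an assumption here, and the sketch returns an Option instead of the Throwable \/ ByteVector from the question, just to stay short:

// Rough sketch only: emit the request into the sink, wait for the first
// message on the reply stream, and surface it as the answer (or None).
def ask(req: ByteVector): Task[Option[ByteVector]] =
  reqRply(Process.emit(req), reqBus, client) {
    case reply => reply
  }.runLast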

As websocket connections increase app starts to hang

I just got invited to look at an issue in a third-party application where, as more clients connect (using web sockets), the app hangs after a certain number of connections. I am trying to get more info and better access to the codebase, but below is what I have right now, which looks like a standard code flow. Any gotchas to keep in mind when Play, Akka, and web sockets are in the mix? I will post more info as it becomes available.
The controller has:
def service = WebSocket.async[JsValue] { request =>
  Service.createConnection
}
Service.createConnection looks like this:
def createConnection: Future[(Iteratee[JsValue, _], Enumerator[JsValue])] = {
  val serviceActor = Akka.system.actorOf(Props[ServiceActor])
  val socket_id = UUID.randomUUID().toString
  val (enumerator, mChannel) = Concurrent.broadcast[JsValue]
  (serviceActor ? Connect(socket_id, mChannel)).map {
    ...........
  }
}

Use Lift as a proxy

I want to forbid full access to the Solr core from outside and let it be used only for querying. Thus I am launching a secondary server/connector instance inside the Jetty servlet container (besides the main webapp) on a port that is not accessible from the WWW.
When there is an incoming HTTP request to the Liftweb application, I hook in with RestHelper:
object Dispatcher extends RestHelper {
  serve {
    case List("api", a @ _*) JsonGet _ => JString("API is not implemented yet. rest: " + a)
  }
}
Pointing my browser at http://localhost/api/solr/select?q=region I get the response "API is not implemented yet. rest: List(solr, select)", so it seems to work. Now I want to make a connection on the internal port (where Solr resides) in order to pass the query using the post-api part of the URL (i.e. http://localhost:8080/solr/select?q=region). I am catching the trailing REST part of the URL (by means of a @ _*), but how can I access the URL parameters? It would be ideal to pass a raw string (everything after the api path element) to the Solr instance, just to prevent redundant parse/build steps. The same applies to Solr's response: I would like to avoid parsing it and building a JsonResponse.
This seems to be a good example of doing HTTP redirection, but then I would have to open the hidden Solr port, as far as I can understand.
What is the most effective way to cope with this task?
EDIT:
Well, I missed that after JsonGet comes a Req value, which has all the needed info. But is there still a way to avoid unnecessarily parsing/composing the URL to the hidden port and the JSON response?
SOLUTION:
This is what I've got, considering Dave's suggestion:
import net.liftweb.common.Full
import net.liftweb.http.{JsonResponse, InMemoryResponse}
import net.liftweb.http.provider.HTTPRequest
import net.liftweb.http.rest.RestHelper
import dispatch.{Http, url}

object ApiDispatcher extends RestHelper {
  private val SOLR_PORT = 8080

  serve { "api" :: Nil prefix {
    case JsonGet(path @ List("solr", "select"), r) =>
      val u = localApiUrl(SOLR_PORT, path, r.request)
      Http(url(u) >> { is =>
        val bytes = Stream.continually(is.read).takeWhile(-1 !=).map(_.toByte).toArray
        val headers = ("Content-Length", bytes.length.toString) ::
          ("Content-Type", "application/json; charset=utf-8") :: JsonResponse.headers
        Full(InMemoryResponse(bytes, headers, JsonResponse.cookies, 200))
      })
  }}

  private def localApiUrl(port: Int, path: List[String], r: HTTPRequest) =
    "%s://localhost:%d/%s%s".format(r.scheme, port, path mkString "/", r.queryString.map("?" + _).openOr(""))
}
I'm not sure that I understand your question, but if you want to return the JSON you receive from Solr without parsing it, you could use a net.liftweb.http.InMemoryResponse that contains a byte[] representation of the JSON.