Akka-http logging request identifier - scala

I've been using akka-http for a while now, and so far I've mostly logged things using scala-logging by extending either StrictLogging or LazyLogging and then calling:
log.info
log.debug
....
This is kinda OK, but it's hard to tell which log entries were generated for which request.
As solutions for this go, I've only seen:
adding an implicit logging context that gets passed around (this is kinda verbose and would force me to add the context to every method call), plus a custom logger that adds the context info to the log message.
using MDC and a custom dispatcher; to implement this approach one would have to use the prepare() call, which has just been deprecated.
using AspectJ
Are there any other solutions that are more straightforward and less verbose? It would be OK to change the logging library, btw.

Personally, I would go with the implicit context approach. I'd start with:
(path("api" / "test") & get) {
val context = generateContext
action(requestId)
}
Then I would make it implicit:
(path("api" / "test") & get) {
implicit val context = generateContext
action
}
Then I would turn the context generation into a directive, e.g.:
val withContext: Directive1[MyContext] = Directive[Tuple1[MyContext]] {
  inner => ctx => inner(Tuple1(generateContext))(ctx)
}
withContext { implicit context =>
  (path("api" / "test") & get) {
    action
  }
}
Of course, you would have to take the context as an implicit parameter in every action. But it would have some advantages over MDC and AspectJ: it is easier to test things, as you just need to pass a value. Besides, who said you only ever need to pass a request id and use it for logging? The context could just as well carry data about the logged-in user, their entitlements and other things that you resolve once, use even before calling the action, and reuse inside it.
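To make that concrete, here is a minimal sketch of what such a context and a context-aware logger could look like (MyContext, generateContext and ContextLogger are illustrative names, not part of akka-http; only scala-logging's Logger is a real dependency):

import java.util.UUID
import com.typesafe.scalalogging.Logger

final case class MyContext(requestId: String, userId: Option[String] = None)

def generateContext: MyContext = MyContext(UUID.randomUUID().toString)

// Thin wrapper that prefixes every message with the request id from the implicit context.
class ContextLogger(underlying: Logger) {
  def info(msg: String)(implicit ctx: MyContext): Unit =
    underlying.info(s"[${ctx.requestId}] $msg")
  def debug(msg: String)(implicit ctx: MyContext): Unit =
    underlying.debug(s"[${ctx.requestId}] $msg")
}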
As you probably guessed, this would not work if you want the ability to e.g. remove logging completely. In such a case AspectJ would make more sense.
I would have the most doubts about MDC. If I understand correctly, it has the built-in assumption that all logic happens on the same thread. If you are using Futures or Tasks, could you actually guarantee such a thing? I would expect that at best all logging calls would happen in the same thread pool, but not necessarily on the same thread.
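To illustrate the doubt with plain SLF4J MDC (nothing akka-http specific): a value put on the MDC on the calling thread is by default not visible inside a Future that runs on a pool thread.

import org.slf4j.MDC
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

MDC.put("requestId", "abc-123")
Future {
  // Runs on a pool thread; unless that ExecutionContext copies the MDC map over,
  // this typically prints null.
  println(MDC.get("requestId"))
}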
Bottom line: all possible solutions will be some variant of what you have already figured out, so the question is what your exact use case is.

Related

Scala Dynamic Variable problem with Akka-Http asynchronous requests

My application uses Akka-Http to handle requests. I would like to somehow pass the extracted context of an incoming request (e.g. a token) from the routes down to other method calls without passing it explicitly (because that would require modifying a lot of code).
I tried using DynamicVariable for this, but it does not seem to work as intended.
Example of how I used and tested this:
The object holding the DynamicVariable instance:
object TestDynamicContext {
  val dynamicContext = new DynamicVariable[String]("")
}
The routes wrapper for extracting and setting the request context (token):
private def wrapper: Directive0 = {
  Directive { routes => ctx =>
    val newValue = UUID.randomUUID().toString
    TestDynamicContext.dynamicContext.withValue(newValue) {
      routes(())(ctx)
    }
  }
}
I expected all calls to TestDynamicContext.dynamicContext.value for a single request under my wrapper to return the same value defined in said wrapper, but that's not the case. I verified this by generating a separate UUID for each request and passing it explicitly down the method calls: for a single request, TestDynamicContext.dynamicContext.value sometimes returns different values.
I think it's worth mentioning that some of the operations underneath use Futures, and I thought that might be the issue, but the solution proposed in this thread did not solve it for me: https://stackoverflow.com/a/49256600/16511727.
If somebody has any suggestions on how to handle this (not necessarily using DynamicVariable), I would be very grateful.

How can I perform session based logging in Play Framework

We are currently using the Play Framework and the standard logging mechanism. We have implemented an implicit context to support passing the username and session id to all service methods. We want to implement logging so that it is session based, which requires implementing our own logger. This works for our own logs, but how do we do the same for the framework's basic exception handling and the logs it produces? Maybe there is a better way to capture this than with implicits, or how can we override the exception-handling logging? Essentially, we want as many log messages as possible to be associated with the session.
It depends on whether you are doing reactive-style development or standard synchronous development:
If standard synchronous development (i.e. no futures, one thread per request), then I'd recommend you just use MDC, which adds values onto a ThreadLocal for logging. You can then customise the output in logback / log4j. When you get the username / session (possibly in a Filter or in your controller), you can set the values there and then, and you do not need to pass them around with implicits.
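As a minimal sketch of that idea (plain SLF4J MDC; the key names and the helper are illustrative, not a Play API), you could set the values once you know them and let the logback/log4j pattern pick them up via %X{...}:

import org.slf4j.MDC

// Wrap a request-handling block so that everything logged inside it on this
// thread carries the username and session id (e.g. pattern "%X{sessionId}").
def withSessionLogging[A](username: String, sessionId: String)(body: => A): A = {
  MDC.put("username", username)
  MDC.put("sessionId", sessionId)
  try body
  finally {
    MDC.remove("username")
    MDC.remove("sessionId")
  }
}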
If you are doing reactive development, you have a couple of options:
You can still use MDC, except you'd have to use a custom ExecutionContext that effectively copies the MDC values to the executing thread, since each request could in theory be handled by multiple threads (as described here: http://code.hootsuite.com/logging-contextual-info-in-an-asynchronous-scala-application/).
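A rough sketch of such a wrapper, assuming an SLF4J backend (this is not the article's exact code): it captures the MDC map on the thread that submits a task and restores it on whichever pool thread ends up running it.

import org.slf4j.MDC
import scala.concurrent.ExecutionContext

class MdcPropagatingExecutionContext(delegate: ExecutionContext) extends ExecutionContext {
  override def execute(runnable: Runnable): Unit = {
    val callerMdc = MDC.getCopyOfContextMap // captured on the submitting thread (may be null)
    delegate.execute(new Runnable {
      def run(): Unit = {
        val previous = MDC.getCopyOfContextMap
        if (callerMdc != null) MDC.setContextMap(callerMdc) else MDC.clear()
        try runnable.run()
        finally {
          if (previous != null) MDC.setContextMap(previous) else MDC.clear()
        }
      }
    })
  }
  override def reportFailure(cause: Throwable): Unit = delegate.reportFailure(cause)
}

You would then use an instance of this wrapper (around your usual ExecutionContext) as the implicit context for the Futures in your request handling.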
The alternative is the solution which I tend to use (and it is close to what you have now): you could make a class which represents MyAppRequest, set the username, session info and anything else on that, and continue to pass it around as an implicit. However, instead of using Action.async, you make your own MyAction class which can be used like below:
myAction.async { implicit myRequest => //some code }
Inside myAction, you'd have to catch all exceptions and deal with future failures, doing the error handling manually instead of relying on the ErrorHandler. I often inject myAction into my controllers and put common filter functionality in it.
The downside is that it is a manual method. Also, I've made MyAppRequest hold a Map of loggable values which can be set anywhere, which means it had to be a mutable map. And sometimes you need more than one variant of myAction.async. The pro is that it is quite explicit and under your control, without too much ExecutionContext/ThreadLocal magic.
Here is some very rough sample code as a starter, for the manual solution:
def logErrorAndRethrow(myRequest: MyRequest, x: Throwable): Nothing = {
  // log your error here in the format you like
  throw x // you can do this or handle errors how you like
}

class MyRequest {
  val attr: mutable.Map[String, String] = new mutable.HashMap[String, String]()
}

// make this a util to inject, or move it into a common parent controller
def myAsync(block: MyRequest => Future[Result]): Action[AnyContent] = {
  val myRequest = new MyRequest()
  try {
    Action.async(
      block(myRequest).recover { case cause => logErrorAndRethrow(myRequest, cause) }
    )
  } catch {
    case x: Throwable =>
      logErrorAndRethrow(myRequest, x)
  }
}

// the method your routes file refers to
def getStuff = myAsync { implicit request: MyRequest =>
  // execute your code here, passing `request` around as an implicit
  Future.successful(Results.Ok)
}

My http request becomes null inside an Akka future

My server application uses Scalatra, with json4s, and Akka.
Most of the requests it receives are POSTs, and they return immediately to the client with a fixed response. The actual responses are sent asynchronously to a server socket at the client. To do this, I need to call getRemoteAddr on the HTTP request. I am trying the following code:
case class MyJsonParams(foo: String, bar: Int)

class MyServices extends ScalatraServlet {
  implicit val formats = DefaultFormats

  post("/test") {
    withJsonFuture[MyJsonParams] { params =>
      // code that calls request.getRemoteAddr goes here
      // sometimes request is null and I get an exception
      println(request)
    }
  }

  def withJsonFuture[A](closure: A => Unit)(implicit mf: Manifest[A]) = {
    contentType = "text/json"
    val params: A = parse(request.body).extract[A]
    future {
      closure(params)
    }
    Ok("""{"result":"OK"}""")
  }
}
The intention of the withJsonFuture function is to move some boilerplate out of my route processing.
This sometimes works (prints a non-null value for request), and sometimes request is null, which I find quite puzzling. I suspect that I must be "closing over" the request in my future. However, the error also happens in controlled test scenarios when there are no other requests going on. I would imagine request to be immutable (maybe I'm wrong?).
In an attempt to solve the issue, I have changed my code to the following:
case class MyJsonParams(foo: String, bar: Int)

class MyServices extends ScalatraServlet {
  implicit val formats = DefaultFormats

  post("/test") {
    withJsonFuture[MyJsonParams] { (addr, params) =>
      println(addr)
    }
  }

  def withJsonFuture[A](closure: (String, A) => Unit)(implicit mf: Manifest[A]) = {
    contentType = "text/json"
    val addr = request.getRemoteAddr()
    val params: A = parse(request.body).extract[A]
    future {
      closure(addr, params)
    }
    Ok("""{"result":"OK"}""")
  }
}
This seems to work. However, I really don't know if it still includes any bad concurrency-related programming practice that could cause an error in the future ("future" meant in its most common sense: what lies ahead :).
Scalatra is not so well suited for asynchronous code. I recently stumbled on the very same problem as you.
The problem is that Scalatra tries to make the code as declarative as possible by exposing a DSL that removes as much fuss as possible, and in particular does not require you to explicitly pass data around.
I'll try to explain.
In your example, the code inside post("/test") is an anonymous function. Notice that it does not take any parameters, not even the current request object.
Instead, Scalatra stores the current request object inside a thread-local value just before it calls your own handler, and you can then get it back through ScalatraServlet.request.
This is the classical Dynamic Scope pattern. It has the advantage that you can write many utility methods that access the current request and call them from your handlers, without explicitly passing the request.
Now, the problem comes when you use asynchronous code, as you do.
In your case, the code inside withJsonFuture executes on a different thread from the one on which the handler was initially called (it executes on a thread from the ExecutionContext's thread pool).
Thus, when you access the thread-local from that thread, you see a different value (if any) from the one bound on the original request thread.
Simply put, the classical Dynamic Scope pattern is not a good fit for an asynchronous context.
The solution here is to capture the request at the very start of your handler, and then exclusively reference that:
post("/test") {
val currentRequest = request
withJsonFuture[MyJsonParams]{ params =>
// code that calls request.getRemoteAddr goes here
// sometimes request is null and I get an exception
println(currentRequest)
}
}
Quite frankly, this is too easy to get wrong IMHO, so I would personally avoid using Scalatra altogether if you are in an asynchronous context.
I don't know Scalatra, but it's fishy that you are accessing a value called request that you do not define yourself. My guess is that it comes from extending ScalatraServlet. If that's the case, then it's probably mutable state that is being set (by Scalatra) at the start of the request and then nullified at the end. If that's happening, then your workaround is okay, as would be assigning request to another val like val myRequest = request before the future block and then accessing it as myRequest inside the future and closure.
I do not know Scalatra, but at first glance the withJsonFuture function returns an Ok but also schedules work on another thread via the future { closure(addr, params) } call.
If that work runs after the Ok is processed, the response has already been sent and the request is closed/GCed.
Why create a Future to run your closure?
If withJsonFuture needs to return a Future (again, sorry, I do not know Scalatra), you should wrap the whole body of that function in a Future.
Try putting with FutureSupport on your class declaration, like this:
class MyServices extends ScalatraServlet with FutureSupport {}
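A hedged sketch of what that could look like (the route body is illustrative and the exact async behaviour depends on your Scalatra version): mix in FutureSupport, provide an ExecutionContext, read what you need from the request on the request thread, and return the Future itself rather than firing a detached one.

import scala.concurrent.{ExecutionContext, Future}
import org.scalatra.{FutureSupport, Ok, ScalatraServlet}

class MyServices extends ScalatraServlet with FutureSupport {
  protected implicit def executor: ExecutionContext = ExecutionContext.global

  post("/test") {
    val addr = request.getRemoteAddr() // read the request on the request thread
    Future {
      // asynchronous work that only uses plain values captured above
      println(addr)
      Ok("""{"result":"OK"}""")
    }
  }
}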

Avoiding Squeryl transaction in Play! controllers

I am learning Play! and I have followed the to-do list tutorial. Now I would like to use Squeryl in place of Anorm, so I tried to translate the tutorial, and it actually works.
Still, there is one little thing that irks me. Here is the relevant part of my model:
def all: Iterable[Task] = from(tasks) {s => select(s)}
and the corresponding action in the controller to list all tasks:
def tasks = Action {
  inTransaction {
    Ok(views.html.index(Task.all, taskForm))
  }
}
The view contains, for instance:
<h1>@tasks.size task(s)</h1>
What I do not like is that, unlike in the methods to update or delete tasks, I had to manage the transaction inside the controller action.
If I move inTransaction to the all method, I get an exception,
[RuntimeException: No session is bound to current thread, a session must be created via Session.create and bound to the thread via 'work' or 'bindToCurrentThread' Usually this error occurs when a statement is executed outside of a transaction/inTrasaction block]
because the view tries to obtain the size of tasks, but the transaction is already closed at that point.
Is there a way to use Squeryl transaction only in the model and not expose these details up to the controller level?
Well, it's because of the lazy evaluation of the Iterable, which requires a bound session (the size() method).
This should work if you turn the Iterable into a List or Vector (IndexedSeq), I suppose:
from(tasks)(s => select(s)).toIndexedSeq //or .toList
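For example, a minimal sketch of moving the transaction into the model (assuming the tutorial's tasks table and the usual PrimitiveTypeMode import); materializing the result inside inTransaction means the view never touches a live session:

import org.squeryl.PrimitiveTypeMode._

def all: List[Task] = inTransaction {
  from(tasks)(s => select(s)).toList
}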

ScalaTest: Issues with Singleton Object re-initialization

I am testing a parser I have written in Scala using ScalaTest. The parser handles one file at a time and it has a singleton object like the following:
class Parser{...}
object Resolver {...}
The test case I have written is something like this:
describe("Syntax:") {
val dir = new File("tests\\syntax");
val files = dir.listFiles.filter(
f => """.*\.chalice$""".r.findFirstIn(f.getName).isDefined);
for(inputFile <- files) {
val parser = new Parser();
val c = Resolver.getClass.getConstructor();
c.setAccessible(true);
c.newInstance();
val iserror = errortest(inputFile)
val result = invokeparser(parser,inputFile.getAbsolutePath) //local method
it(inputFile.getName + (if (iserror)" ERR" else " NOERR") ){
if (!iserror) result should be (ResolverSuccess())
else if(result.isInstanceOf[ResolverError]) assert(true)
}
}
}
Now at each iteration the side effects of previous iterations inside the singleton object Resolver are not cleaned up.
Is there any way to tell ScalaTest to re-initialize the singleton objects?
Update: Using Daniel's suggestion, I have updated the code and added more details.
Update: Apparently it is the Parser which is doing something fishy; on subsequent calls it doesn't discard the previous AST. Strange. Since this is off topic, I will dig more and probably use a separate thread for the discussion. Thanks all for answering.
Final Update: The issue was with a singleton object other than Resolver; it was in another file, so I had somehow missed it. I was able to solve this using Daniel Spiewak's reply. It is a dirty way to do things, but it's also the only option, given my circumstances and the fact that I am writing test code, which is not going into production use.
According to the language spec, no, there is no way to recreate singleton objects. However, it is possible to reflectively invoke the constructor of a singleton, which overwrites the internal MODULE$ field which contains the actual singleton value:
object Test
Test.hashCode // => e.g. 779942019
val c = Test.getClass.getConstructor()
c.setAccessible(true)
c.newInstance()
Test.hashCode // => e.g. 1806030550
Now that I've shared the evil secret with you, let me caution you never, ever to do this. I would try very very hard to adjust the code, rather than playing sneaky tricks like this one. However, if things are as you say, and you really do have no other option, this is at least something.
ScalaTest has several ways to let you reinitialize things between tests. However, this particular question is tough to answer without knowing more. The main question is: what does it take to reinitialize the singleton object? If the singleton object can't be reinitialized without instantiating a new singleton object, then you'd need to make sure each test loads the singleton object anew, which would require using custom class loaders. I find it hard to believe someone would design something that way, though. Can you update your question with more details like that? I'll take a look again later and see if the extra details make the answer more obvious.
ScalaTest has a runpath that loads classes anew for each run, but not a testpath, so you'll have to roll your own. The real problem here is that someone has designed this in a way that is not easily tested. I would look at loading Resolver and Parser with a URLClassLoader inside each test. That way you'd get a new Resolver for each test.
You'll need to take Parser & Resolver off the classpath and off the runpath, and put them into a directory of their own. Then create a URLClassLoader for each test that points to that directory, and call loadClass("Parser") on that class loader to get the class. I'm assuming Parser refers to Resolver, and in that case the JVM will go back to the class loader that loaded Parser to get Resolver, which is your URLClassLoader. Do a newInstance on the Parser class to get an instance. That should solve your problem, because you'll get a new Resolver singleton object for each test.
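A rough sketch of that setup (the directory path and class names are placeholders for your own build layout):

import java.io.File
import java.net.{URL, URLClassLoader}

// Directory containing the compiled Parser/Resolver classes, deliberately
// NOT on the test classpath, so the parent loader cannot find them.
val parserClassesDir: URL = new File("parser-classes/").toURI.toURL

// One loader per test: Resolver is loaded through the same loader as Parser,
// so its singleton state is fresh every time.
val loader = new URLClassLoader(Array(parserClassesDir), getClass.getClassLoader)
val parserClass = loader.loadClass("Parser")
val parser = parserClass.getDeclaredConstructor().newInstance()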
No answer, but I do have a simple example of where you might want to reset a singleton object in order to test its construction in multiple potential situations. Consider something stupid like the following code. You may want to write a test that validates that an exception is thrown when the environment isn't set up correctly, and also a test that validates that no exception is thrown when the environment is set up correctly. I know, I know, everyone says "provide a default when the environment isn't set up correctly", but I DO NOT want to do this; it would cause issues because there would be no notification that you're using the wrong system.
object RequiredProperties extends Enumeration {
  type RequiredProperties = String

  private def getRequiredEnvProp(propName: String) = {
    sys.env.get(propName) match {
      case None => throw new RuntimeException(s"$propName is required but not found in the environment.")
      case Some(x) => x
    }
  }

  val ENVIRONMENT: String = getRequiredEnvProp("ENVIRONMENT")
}
Usage:
Init(RequiredProperties.ENVIRONMENT)
If I provided a default, the user would never know that it wasn't set and that it defaulted to the dev environment. Or something along those lines.