I am learning Play! and I have followed the To Do List tutorial. Now I would like to use Squeryl in place of Anorm, so I tried to translate the tutorial, and it actually works.
Still, there is one little thing that irks me. Here is the relevant part of my model:
def all: Iterable[Task] = from(tasks) {s => select(s)}
and the corresponding action in the controller to list all tasks
def tasks = Action {
  inTransaction {
    Ok(views.html.index(Task.all, taskForm))
  }
}
The view contains, for instance
<h1>@tasks.size task(s)</h1>
What I do not like is that, unlike in the methods to update or delete tasks, I had to manage the transaction inside the controller action.
If I move inTransaction to the all method, I get an exception,
[RuntimeException: No session is bound to current thread, a session must be created via Session.create and bound to the thread via 'work' or 'bindToCurrentThread' Usually this error occurs when a statement is executed outside of a transaction/inTrasaction block]
because the view tries to obtain the size of tasks, but the transaction is already closed at that point.
Is there a way to use Squeryl transaction only in the model and not expose these details up to the controller level?
Well, it's because of lazy evaluation: the Iterable returned by the query needs a bound session when it is consumed (here, by the size() method in the view).
This should work if you turn the Iterable into a List or a Vector (IndexedSeq), I suppose:
from(tasks)(s => select(s)).toIndexedSeq //or .toList
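If you also want to keep the transaction inside the model, a minimal sketch (assuming the same tasks table and the usual Squeryl import; this just combines the two ideas, it is not taken from the question) would be:

import org.squeryl.PrimitiveTypeMode._

// Materialising with toList forces evaluation while the session is still open,
// so the controller no longer needs its own inTransaction block.
def all: List[Task] = inTransaction {
  from(tasks)(s => select(s)).toList
}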
I've been using akka-http for a while now, and so far I've mostly logged things using scala-logging, by extending either StrictLogging or LazyLogging and then calling:
log.info
log.debug
....
This is kinda OK, but it's hard to understand which logs were generated for which request.
As solutions for this go, I've only seen:
adding an implicit logging context that gets passed around (this is kinda verbose and would force me to add this context to all method calls), plus a custom logger that adds the context info to the logging message.
using MDC and a custom dispatcher; in order to implement this approach one would have to use the prepare() call, which has just been deprecated.
using AspectJ
Are there any other solutions that are more straightforward and less verbose? It would be OK to change the logging library, by the way.
Personally I would go with the implicit context approach. I'd start with:
(path("api" / "test") & get) {
val context = generateContext
action(requestId)
}
Then I would make it implicit:
(path("api" / "test") & get) {
implicit val context = generateContext
action
}
Then I would make the context generation a directive, like e.g.:
val withContext: Directive1[MyContext] = Directive[Tuple1[MyContext]] {
  inner => ctx => inner(Tuple1(generateContext))(ctx)
}
withContext { implicit context =>
  (path("api" / "test") & get) {
    action
  }
}
Of course, you would have to take the context as an implicit parameter in every action. But it would have some advantages over MDC and AspectJ: it would be easier to test things, as you just need to pass a value. Besides, who said you only ever need to pass a request id and use it for logging? The context could just as well carry data about the logged-in user, their entitlements and other things that you can resolve once, use even before calling the action, and reuse inside the action.
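For illustration only, here is a minimal sketch of what the pieces might look like; MyContext, generateContext, the object name and the logging setup are assumptions, not anything from the question:

import java.util.UUID
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Route
import akka.http.scaladsl.server.Directives._
import com.typesafe.scalalogging.StrictLogging

final case class MyContext(requestId: String)

object ContextedRoutes extends StrictLogging {
  // Resolved once per request; could also carry user info, entitlements, etc.
  def generateContext: MyContext = MyContext(UUID.randomUUID().toString)

  // Every action takes the context implicitly and tags its log lines with the request id.
  def action(implicit ctx: MyContext): Route = {
    logger.info(s"[${ctx.requestId}] handling GET /api/test")
    complete(StatusCodes.OK)
  }
}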
As you probably guessed, this would not work if you want the ability to e.g. remove logging completely. In such a case AspectJ would make more sense.
I would have the most doubts about MDC. If I understand correctly, it has a built-in assumption that all the logic happens in the same thread. If you are using Futures or Tasks, can you actually guarantee such a thing? I would expect that, at best, all logging calls would happen in the same thread pool, but not necessarily on the same thread.
Bottom line: all possible solutions are some variant of what you already figured out, so the question is really about your exact use case.
We are currently using the Play Framework with the standard logging mechanism. We have implemented an implicit context to pass the username and session id to all service methods. We want to implement logging so that it is session based, which requires implementing our own logger. This works for our own logs, but how do we do the same for the basic exception handling and the logs it produces? Maybe there is a better way to capture this than with implicits, or maybe we can override the exception-handling logging. Essentially, we want as many log messages as possible to be associated with the session.
It depends on whether you are doing reactive-style development or standard synchronous development:
If standard synchronous development (i.e. no futures, one thread per request), then I'd recommend you just use MDC, which adds values onto a ThreadLocal for logging. You can then customise the output in logback / log4j. When you get the username / session (possibly in a Filter or in your controller), you can set the values there and then, and you do not need to pass them around with implicits (see the sketch below).
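A rough sketch of that synchronous approach, assuming an in-controller placement and made-up key names (a Filter would work just as well):

import org.slf4j.MDC
import play.api.mvc._

class AccountController extends Controller {
  def index = Action { request =>
    // Values put on the MDC are visible to the logging backend for this thread;
    // logback can print them with %X{username} / %X{sessionId} in its pattern.
    MDC.put("username", request.session.get("username").getOrElse("anonymous"))
    MDC.put("sessionId", request.session.get("sessionId").getOrElse("-"))
    try {
      Ok("done")
    } finally {
      MDC.clear() // so the next request handled by this thread does not inherit the values
    }
  }
}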
If you are doing reactive development you have a couple options:
You can still use MDC, except you'd have to use a custom Execution Context that effectively copies the MDC values to the thread, since each request could in theory be handled by multiple threads. (as described here: http://code.hootsuite.com/logging-contextual-info-in-an-asynchronous-scala-application/)
The alternative is the solution which I tend to use (and which is close to what you have now): you could make a class which represents MyAppRequest. Set the username, session info, and anything else on that, and continue to pass it around as an implicit. However, instead of using Action.async, you make your own MyAction class, which can be used like below:
myAction.async { implicit myRequest => //some code }
Inside myAction, you'd have to catch all exceptions and deal with future failures, doing the error handling manually instead of relying on the ErrorHandler. I often inject myAction into my controllers and put common filter functionality in it.
The downside of this is that it is just a manual method. Also, I've made MyAppRequest hold a Map of loggable values which can be set anywhere, which means it had to be a mutable map. And sometimes you need to make more than one myAction.async. The pro is that it is quite explicit and under your control, without too much ExecutionContext/ThreadLocal magic.
Here is some very rough sample code as a starter, for the manual solution:
def logErrorAndRethrow(myrequest: MyRequest, x: Throwable): Nothing = {
  //log your error here in the format you like
  throw x //you can do this or handle errors how you like
}

class MyRequest {
  val attr: mutable.Map[String, String] = new mutable.HashMap[String, String]()
}

//make this a util to inject, or move it into a common parent controller
def myAsync(block: MyRequest => Future[Result]): Action[AnyContent] = {
  val myRequest = new MyRequest()
  try {
    Action.async(
      block(myRequest).recover { case cause => logErrorAndRethrow(myRequest, cause) }
    )
  } catch {
    case x: Throwable =>
      logErrorAndRethrow(myRequest, x)
  }
}

//the method your routes file refers to
def getStuff = myAsync { request: MyRequest =>
  //execute your code here, passing around request as an implicit
  Future.successful(Results.Ok)
}
I have a Play controller method:
def insertDepartment = Action(parse.json) { request =>
  MyDataSourceProvider.db.withSession { implicit session =>
    val departmentRow = DepartmentRow(1, Option("Department1"))
    departmentService.insert(departmentRow)
  }
}
Note that MyDataSourceProvider.db provides a slick.driver.PostgresDriver.simple.Database, and withSession provides an implicit session to departmentService.insert.
When I test departmentService, the session is provided by a test fixture, as mentioned in this post. sessionWrapper is a simple function which creates a session, provides that session to a test block, and rolls the data back after the test finishes:
sessionWrapper { implicit session =>
  val departmentRow = DepartmentRow(1, Option("Department1"))
  departmentService.insert(departmentRow)
}
This works nicely and as expected: the database is not polluted when the service tests run. Tests should not persist anything in the db, but roll back after executing successfully.
Now, when testing the Play controller, I need a way to use sessionWrapper, to be able to roll back controller tests in the same fashion as the service tests.
Note the MyDataSourceProvider.db.withSession in the controller's insertDepartment.
Wrapping the controller test with sessionWrapper has no effect, since the controller def doesn't accept an implicit session but uses the one from MyDataSourceProvider.db.withSession.
What's the best way to handle this? I tried creating a controller trait, so that the implementation mixed in can be different for tests and for real code, but I haven't found a way to "pass" the session for tests and not for production code. Any ideas?
Since Slick is blocking, you don't need Action.async. I wonder why that even compiles, because I don't see a future there, but I am not that familiar with Play.
There are several alternatives for what you can do:
My favorite: don't use transaction rollback for testing, but use a test database which you re-create for each test.
pull out

  val departmentRow = DepartmentRow(1, Option("Department1"))
  departmentService.insert(departmentRow)

into a method and only test that method, not the controller
use sessionWrapper in the controller and let it check a configuration flag that tells it whether it is in test mode and should roll back, or whether it is in production mode (a rough sketch of this follows below).
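A rough sketch of that last idea, reusing the question's MyDataSourceProvider and the deprecated Slick session API it already uses; the db.testRollback config key is made up:

import play.api.Play
import play.api.Play.current
import slick.driver.PostgresDriver.simple._

// Wraps a block in a transaction; rolls it back when the (made-up) db.testRollback flag is set.
def sessionWrapper[T](block: Session => T): T =
  MyDataSourceProvider.db.withTransaction { implicit session =>
    val testMode = Play.configuration.getBoolean("db.testRollback").getOrElse(false)
    val result = block(session)
    if (testMode) session.rollback() // marks the surrounding transaction for rollback
    result
  }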
Apologies, in advance, if this seems like a duplicate question. This question was the closest I could find, but it doesn't really solve the issues I am facing.
I'm using Entity Framework 5 in an ASP.NET MVC4 application and attempting to implement the Unit of Work pattern.
My unit of work class implements IDisposable and contains a single instance of my DbContext-derived object context class, as well as a number of repositories, each of which derives from a generic base repository class that exposes all the usual repository functionality.
For each HTTP request, Ninject creates a single instance of the Unit of Work class and injects it into the controllers, automatically disposing it when the request is complete.
Since EF5 abstracts away the data storage and Ninject manages the lifetime of the object context, it seems like the perfect way for consuming code to access in-memory entity objects without the need to explicitly manage their persistence. In other words, for optimum separation of concerns, I envisage my controller action methods being able to use and modify repository data without the need to explicitly call SaveChanges afterwards.
My first (naive) attempt to implement this idea employed a call to SaveChanges within every repository base-class method that modified data. Of course, I soon realized that this is neither performance-optimized (especially when making multiple successive calls to the same method), nor does it accommodate situations where an action method directly modifies a property of an object retrieved from a repository.
So, I evolved my design to eliminate these premature calls to SaveChanges and replace them with a single call when the Unit of Work instance is disposed. This seemed like the cleanest implementation of the Unit of Work pattern in MVC, since a unit of work is naturally scoped to a request.
Unfortunately, after building this concept, I discovered its fatal flaw - the fact that objects added to or deleted from a DbContext are not reflected, even locally, until SaveChanges has been called.
So, what are your thoughts on the idea that consuming code should be able to use objects without explicitly persisting them? And, if this idea seems valid, what's the best way to achieve it with EF5?
Many thanks for your suggestions,
Tim
UPDATE: Based on @Wahid's response, I am adding below some test code that shows some of the situations in which it becomes essential for the consuming code to explicitly call SaveChanges:
var unitOfWork = _kernel.Get<IUnitOfWork>();
var terms = unitOfWork.Terms.Entities;

// Purge the table so as to start with a known state
foreach (var term in terms)
{
    terms.Remove(term);
}
unitOfWork.SaveChanges();
Assert.AreEqual(0, terms.Count());

// Verify that additions are not even reflected locally until committed.
var created = new Term { Pattern = "Test" };
terms.Add(created);
Assert.AreEqual(0, terms.Count());

// Verify that additions are reflected locally once committed.
unitOfWork.SaveChanges();
Assert.AreEqual(1, terms.Count());

// Verify that property modifications to entities are reflected locally immediately
created.Pattern = "Test2";
var another = terms.Single(term => term.Id == created.Id);
Assert.AreEqual("Test2", another.Pattern);
Assert.True(ReferenceEquals(created, another));

// Verify that queries against property changes fail until committed
Assert.IsNull(terms.FirstOrDefault(term => term.Pattern == "Test2"));

// Verify that queries against property changes work once committed
unitOfWork.SaveChanges();
Assert.NotNull(terms.FirstOrDefault(term => term.Pattern == "Test2"));

// Verify that deletions are not even reflected locally until committed.
terms.Remove(created);
Assert.AreEqual(1, terms.Count());

// Verify that deletions are reflected locally once committed.
unitOfWork.SaveChanges();
Assert.AreEqual(0, terms.Count());
First of all, SaveChanges should never be in the repositories at all, because that makes you lose the benefit of the Unit of Work.
Second, you need to make a dedicated method to save changes in the UnitOfWork.
And if you want to call this method automatically, then you may find some other solution, such as an ActionFilter, or making all your Controllers inherit from a BaseController class and handling SaveChanges there.
In any case, the UnitOfWork should always have a SaveChanges method.
I am testing a parser I have written in Scala using ScalaTest. The parser handles one file at a time and has a singleton object, like the following:
class Parser{...}
object Resolver {...}
The test case I have written is somewhat like this:
describe("Syntax:") {
val dir = new File("tests\\syntax");
val files = dir.listFiles.filter(
f => """.*\.chalice$""".r.findFirstIn(f.getName).isDefined);
for(inputFile <- files) {
val parser = new Parser();
val c = Resolver.getClass.getConstructor();
c.setAccessible(true);
c.newInstance();
val iserror = errortest(inputFile)
val result = invokeparser(parser,inputFile.getAbsolutePath) //local method
it(inputFile.getName + (if (iserror)" ERR" else " NOERR") ){
if (!iserror) result should be (ResolverSuccess())
else if(result.isInstanceOf[ResolverError]) assert(true)
}
}
}
Now at each iteration the side effects of previous iterations inside the singleton object Resolver are not cleaned up.
Is there any way to specify to scalatest module to re-initialize the singleton objects?
Update: Using Daniel's suggestion, I have updated the code, also added more details.
Update: Apparently it is the Parser that is doing something fishy: on subsequent calls it doesn't discard the previous AST. Strange. Since this is off topic, I will dig into it more and probably open a separate thread for the discussion. Thanks all for answering.
Final Update: The issue was with a singleton object other than Resolver; it was in another file, so I had somehow missed it. I was able to solve it using Daniel Spiewak's reply. It is a dirty way to do things, but it's also the only option, given my circumstances and the fact that I am writing test code which is not going into production use.
According to the language spec, no, there is no way to recreate singleton objects. However, it is possible to reflectively invoke the constructor of a singleton, which overwrites the internal MODULE$ field which contains the actual singleton value:
object Test
Test.hashCode // => e.g. 779942019
val c = Test.getClass.getConstructor()
c.setAccessible(true)
c.newInstance()
Test.hashCode // => e.g. 1806030550
Now that I've shared the evil secret with you, let me caution you never, ever to do this. I would try very very hard to adjust the code, rather than playing sneaky tricks like this one. However, if things are as you say, and you really do have no other option, this is at least something.
ScalaTest has several ways to let you reinitialize things between tests. However, this particular question is tough to answer without knowing more. The main question would be: what does it take to reinitialize the singleton object? If the singleton object can't be reinitialized without instantiating a new singleton object, then you'd need to make sure each test loaded the singleton object anew, which would require using custom class loaders. I find it hard to believe someone would design something that way, though. Can you update your question with more details like that? I'll take a look again later and see if the extra details make the answer more obvious.
ScalaTest has a runpath that loads classes anew for each run, but not a testpath. So you'll have to roll your own. The real problem here is that someone has designed this in a way that it is not easily tested. I would look at loading Resolver and Parser with a URLClassLoader inside each test. That way you'd get a new Resolver each test.
You'll need to take Parser & Resolver off of the classpath and off of the runpath. Put them into a directory of their own. Then create a URLClassLoader for each test that points to that directory. Then call findClass("Parser") on that class loader to get it. I'm assuming Parser refers to Resolver, and in that case the JVM will go back to the class loader that loaded Parser to get Resolver, which is your URLClassLoader. Do a newInstance on the Parser to get the instance. That should solve your problem, because you'll get a new Resolver singleton object for each test.
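A minimal sketch of that idea; the directory path and the way the instance is obtained are assumptions, and loadClass is used here since findClass is protected:

import java.io.File
import java.net.{URL, URLClassLoader}

object FreshResolver {
  // Parser.class and Resolver.class live in a directory that is NOT on the test classpath,
  // so each new URLClassLoader yields a fresh Resolver singleton for the test.
  def freshParser(): AnyRef = {
    val classesDir: URL = new File("parser-classes").toURI.toURL
    val loader = new URLClassLoader(Array(classesDir), getClass.getClassLoader)
    val parserClass = loader.loadClass("Parser")
    parserClass.getDeclaredConstructor().newInstance().asInstanceOf[AnyRef]
  }
}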
No answer, but I do have a simple example of where you might want to reset the singleton object, in order to test the singleton construction in multiple potential situations. Consider something stupid like the following code. You may want to write a test that validates that an exception is thrown when the environment isn't set up correctly, and also a test that validates that no exception occurs when the environment is set up correctly. I know, I know, everyone says "provide a default when the environment isn't set up correctly", but I DO NOT want to do this; it would cause issues, because there would be no notification that you're using the wrong system.
object RequiredProperties extends Enumeration {
  type RequiredProperties = String

  private def getRequiredEnvProp(propName: String) = {
    sys.env.get(propName) match {
      case None => throw new RuntimeException(s"$propName is required but not found in the environment.")
      case Some(x) => x
    }
  }

  val ENVIRONMENT: String = getRequiredEnvProp("ENVIRONMENT")
}
Usage:
Init(RequiredProperties.ENVIRONMENT)
If I provided a default then the user would never know that it wasn't set and defaulted to the dev environment. Or something along these lines.