Transaction management in Play Framework (Scala)

I'm new to Play Framework and am used to managing transactions Java/Spring-style, with controller, transactional service, and DAO layers. It's a pretty common case for me to have multiple DAO operations in a service method and to mark it @Transactional so that all changes are rolled back if something goes wrong. The service is isolated from the DAO and knows nothing about the database.
But I didn't find anything like this in Anorm and Play. All the logic is placed in controllers, and you can only get a transaction in this ugly way - Database transactions in Play framework scala applications (anorm)
We have several problems here:
- The service turns into a DAO.
- If we need to call the same DAO method from another service, we have to change it in the same way.

Is there a nice way to manage transactions in Play? What about other frameworks, like Slick? How do you use Play in production with such restrictions?

Anorm's DB.withTransaction creates a transaction and commits it when the block exits, so there is no out-of-the-box support for your use case. It is, however, quite straightforward to build your own transaction engine on top of what Anorm offers, one that spans multiple services: it creates a transaction if none is present in a ThreadLocal and stores it there, or it reuses the one obtained from the ThreadLocal in subsequent 'transactional' calls. You can then have one big transaction that rolls back on an error deep down in the DAO layer. We have a solution like this in production and it works just fine.
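Here is a minimal sketch of such an engine, assuming Play 2.x's play.api.db.DB and plain JDBC connections; the Tx object and transactional method are hypothetical names, not part of Anorm:

```scala
import java.sql.Connection
import play.api.Play.current
import play.api.db.DB

// Hypothetical transaction engine: one transaction per thread, shared by
// every nested `transactional` call on that thread.
object Tx {
  private val held = new ThreadLocal[Option[Connection]] {
    override def initialValue: Option[Connection] = None
  }

  def transactional[A](block: Connection => A): A =
    held.get() match {
      case Some(conn) =>
        // A transaction was already started further up the call stack:
        // join it instead of opening a new one.
        block(conn)
      case None =>
        val conn = DB.getConnection(autocommit = false)
        held.set(Some(conn))
        try {
          val result = block(conn)
          conn.commit()
          result
        } catch {
          case e: Throwable =>
            conn.rollback()
            throw e
        } finally {
          held.remove()
          conn.close()
        }
    }
}
```

Services and DAOs then both wrap their work in Tx.transactional { conn => ... }; only the outermost call opens, commits, or rolls back the transaction.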
However, there is a conceptual problem that you should be aware of: as soon as you need to call a service that returns a Future, you no longer have the transaction (you are possibly on another thread), or you have to block (which is not a good thing in production).
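To illustrate the pitfall with the (still hypothetical) Tx sketch above:

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

Tx.transactional { conn =>
  Future {
    // This body runs on a pool thread, so Tx's ThreadLocal is empty
    // here: any DB work would run outside the enclosing transaction.
  }
  // The outer transaction can also commit before the Future completes.
}
```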

Related

Does JpaTokenStore have any downsides compared to JdbcTokenStore for Spring Security OAuth?

I currently use JPA via Hibernate in my application. Since Spring Security OAuth2 provides JdbcTokenStore, I started using it. The problem with that is that I cannot use the cache (which all my entities in the application currently share).
It hits the database in a separate flow.
I am thinking of implementing a JpaTokenStore that's backed by JPA and leverages the caching advantages that come with it.
Did anyone try implementing this, or see any downsides to this approach?
In one project I've implemented org.springframework.security.oauth2.client.token.ClientTokenServices with JPA and didn't notice any problems. I was able to use all the standard features of JPA, including @Transactional, for JPAClientTokenServices#saveAccessToken.
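For illustration, a minimal sketch of such an implementation (in Scala, matching the rest of this page); the ClientTokenServices methods come from spring-security-oauth2, while the StoredToken entity and the key scheme are assumptions:

```scala
import javax.persistence.{EntityManager, PersistenceContext}
import org.springframework.security.core.Authentication
import org.springframework.security.oauth2.client.resource.OAuth2ProtectedResourceDetails
import org.springframework.security.oauth2.client.token.ClientTokenServices
import org.springframework.security.oauth2.common.{DefaultOAuth2AccessToken, OAuth2AccessToken}
import org.springframework.transaction.annotation.Transactional

// Hypothetical JPA entity: one row per (resource id, user name) pair.
// @Entity / @Id mapping annotations omitted for brevity; note that this
// sketch persists only the token value, dropping expiry and refresh data.
class StoredToken {
  var id: String = _
  var tokenValue: String = _
}

class JpaClientTokenServices extends ClientTokenServices {

  @PersistenceContext
  var em: EntityManager = _

  private def key(resource: OAuth2ProtectedResourceDetails, auth: Authentication): String =
    resource.getId + ":" + (if (auth == null) "anonymous" else auth.getName)

  @Transactional(readOnly = true)
  override def getAccessToken(resource: OAuth2ProtectedResourceDetails,
                              authentication: Authentication): OAuth2AccessToken = {
    val stored = em.find(classOf[StoredToken], key(resource, authentication))
    if (stored == null) null else new DefaultOAuth2AccessToken(stored.tokenValue)
  }

  @Transactional
  override def saveAccessToken(resource: OAuth2ProtectedResourceDetails,
                               authentication: Authentication,
                               accessToken: OAuth2AccessToken): Unit = {
    val stored = new StoredToken
    stored.id = key(resource, authentication)
    stored.tokenValue = accessToken.getValue
    em.merge(stored) // insert-or-update; the shared second-level cache applies as usual
  }

  @Transactional
  override def removeAccessToken(resource: OAuth2ProtectedResourceDetails,
                                 authentication: Authentication): Unit = {
    val stored = em.find(classOf[StoredToken], key(resource, authentication))
    if (stored != null) em.remove(stored)
  }
}
```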
There is nothing stopping you from doing it, and plenty of people do use JPA for all sorts of things, but IMO JPA is not ideal for handling the storage of identity data. JPA is designed and optimized for caching data for the duration of a JDBC connection (a transaction, basically), while identity data typically has a different and much longer lifetime. If you store long-lived data using JPA, you have to deal with the consequences of accessing it outside its normal lifetime, e.g. use DTOs, which to some extent ends up negating the benefits of using JPA in the first place.

Encapsulating an external data source in a repository pattern

I am creating the high-level design for a new service. The complexity of the service warrants using DDD (I think). So I did the conventional thing and created domain services, aggregates, repositories, etc. My repositories encapsulate the data source, so a query can look for an object in the cache, failing that look in the db, and failing that make a REST call to an external service to fetch the required information. This is fairly standard.
Now the argument put forward by my colleagues is that abstracting the data source this way is dangerous, because the developer using the repository will not be aware of the time required to execute the api and consequently will not be able to calculate the execution time of any apis he writes on top of it. Maybe he would want to set up his component's behaviour differently if he knew that his call would result in a REST call. They are suggesting I move the REST call outside of the repository, and maybe even the caching strategy along with it.
I can see their point, but the whole idea behind the repository pattern is precisely to hide this kind of information and not have each component deal with caching strategies and data access. My question is: is there a pattern or model which addresses this concern?
They are suggesting I move the REST call outside of the repository
Then you won't have a repository. The repository means we don't know the persistence details, not that we don't know there is persistence. Every time we use a repository, regardless of its implementation (from an in-memory list to a REST call), we expect 'slowness', because it's common knowledge that persistence is usually the bottleneck.
Someone who uses a certain repository implementation (like a REST-based one) will know it has to deal with latency and transient errors. A service having just an IRepository dependency still knows it deals with persistence.
About caching strategies: you can have service-level (more generic) caching and repository-level (persistence-specific) caching. These should probably be implementation details.
Now the argument put forward by my colleagues is that abstracting the data source this way is dangerous, because the developer using the repository will not be aware of the time required to execute the api and consequently will not be able to calculate the execution time of any apis he writes on top of it. Maybe he would want to set up his component's behaviour differently if he knew that his call would result in a REST call.
This is wasting time trying to complicate your life. The whole point of an abstraction is to hide the dirty details. What they suggest is basically: let's make the user aware of some implementation detail, so that the user can couple their code to it.
The point is, a developer should be aware of the API they're using. If a component uses an external service (a database, a web service), this should be known. Once you know there's data to be fetched, you know you'll have to wait for it.
If you go the DDD route, then you have bounded contexts (BCs). Making a model dependent on another BC is a very bad idea. Each BC should publish domain events, and each interested BC should subscribe and maintain its very own model based on those events. This means the queries will be 'local', but you'll still be hitting a db.
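As a sketch of that idea (all names here are illustrative, not from the question):

```scala
import scala.collection.concurrent.TrieMap

// Domain event published by the bounded context that owns customers.
case class CustomerRenamed(customerId: String, newName: String)

// Read model kept by a subscribing bounded context; queries stay local.
class LocalCustomerModel {
  private val names = TrieMap.empty[String, String]

  // Invoked by whatever event bus / message consumer delivers the event.
  def handle(event: CustomerRenamed): Unit =
    names.update(event.customerId, event.newName)

  def nameOf(customerId: String): Option[String] = names.get(customerId)
}
```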
The repository pattern aims to reduce coupling with the persistence layer. In my opinion, I wouldn't risk making a repository so full of responsibilities.
You could use an Anti-Corruption Layer to protect against changes in the external service, and a Proxy to hide the caching-related issues.
Then, in the application layer, I would code the fallback strategy.
I think it all depends on where you think the fetching/fallback strategy belongs: in the Service layer or in the Infrastructure layer (the latter sounds more legitimate to me).
It could also be a mix of the two: the Service is passed an ordered series of Repositories to use one after the other in case of failure. Construction of the series of Repos could be placed in the Infrastructure layer or somewhere else. Fallback logic in one place, fallback configuration in another.
As a side note, asynchrony seems like a good way to signal to users that something is potentially slow and would block if they waited for it. Better than hiding everything behind a vanilla, inconspicuous Repository name, and better than adding some big threatening "this could be slow" prefix to your type, IMO.
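A minimal sketch of the 'ordered series of repositories' idea from the previous answer; the domain type and repository names are hypothetical:

```scala
import scala.util.{Success, Try}

case class Product(id: String, name: String)

trait ProductRepository {
  def find(id: String): Try[Option[Product]]
}

// Composite repository: consults each source in order (e.g. cache, then
// db, then REST) and returns the first hit, falling through on a miss
// or a failed lookup.
class FallbackProductRepository(sources: Seq[ProductRepository]) extends ProductRepository {
  override def find(id: String): Try[Option[Product]] =
    Success(sources.view.map(_.find(id)).collectFirst { case Success(Some(p)) => p })
}

// Wiring would live in the infrastructure layer, e.g.:
//   val repo = new FallbackProductRepository(Seq(cacheRepo, dbRepo, restRepo))
```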

Calling two WCF services in one transaction. Both talk to the same database. Is MSDTC the only option?

I have a WCF Service ServiceA.
It in turn has to call two individual WCF services, ServiceB and ServiceC, which do two different things; but if the call to ServiceC fails, I want to roll back what ServiceB did.
I implemented it using TransactionScope (I am using EF 6.0); however, without enabling MSDTC it does not work. Is there a workaround for this? I really do not want to go down the MSDTC route, because I am afraid it would cause a lot of performance issues, and the web admins are strongly against it.
What you want is a distributed transaction, because your transaction contains cross-boundary participants. To the best of my knowledge, the only solution for a distributed transaction on the Windows platform is MSDTC; there is no way around this.
I would, however, recommend that you seek a more eventually consistent solution rather than a strictly transactional one, because it will most likely perform and scale better. Granted, you will have to deal with special cases where your data is not consistent, which will lead to more complex code.
From my experience it will be worth it, but it's up to you.
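For example, here is a sketch of a compensating-action approach (in Scala for consistency with the rest of this page; the service clients and their operations are hypothetical):

```scala
import scala.util.{Failure, Try}

// Hypothetical clients for the two downstream services.
trait ServiceB { def doWork(): Try[String]; def undoWork(workId: String): Try[Unit] }
trait ServiceC { def doWork(): Try[Unit] }

// Instead of one distributed transaction, run the calls in sequence and
// compensate (undo) the first call if the second one fails.
def callBoth(b: ServiceB, c: ServiceC): Try[Unit] =
  b.doWork().flatMap { workId =>
    c.doWork().recoverWith { case e =>
      // Best effort: compensation itself can fail and may need retries
      // or a durable queue to be reliable.
      b.undoWork(workId)
      Failure(e)
    }
  }
```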

Exposing Drools 5 to a web application and web services (SOAP) using JAXB

We have a requirement where we need to expose Drools 5 through an ESB and simultaneously through a web application. Although I have figured out ways to run Drools with Eclipse, I am finding it difficult to configure Drools 5 with the web app at the moment, with a view to shifting it to the ESB in the future.
Guvnor and Drools Server are not sufficient to help me out, nor does googling; even Spring support is not available.
Any help will be highly appreciated. Thanks.
At what level do you need to "expose" Drools within the ESB? I use Drools in an enterprise solution that uses asynchronous web services; many of my workflows are extremely long-running (two weeks to a month). The key is to temporarily persist the StatefulKnowledgeSession between calls. There is a JPA-backed StatefulKnowledgeSession that serializes the session and stores it as a blob in a relational database. I decided not to use this solution because many of my asynchronous tasks finish within a second of being called; the performance cost of persisting the process in an RDBMS was too much for my needs. My solution was to store the session in an in-memory cache. Infinispan was ridiculously simple to configure and use, and I haven't had a single issue with the framework.
Do you need to have the ESB and web application use the same KnowledgeSession? Does it have to be a StatefulKnowledgeSession? If you need to maintain state, you should consider a queue-based system and fireAllRules() at some interval. If your actions are command-based (insert object, start process, etc.), I believe Drools already has an API for that pattern (I believe this is what Drools Server does under the hood). You could also make the KnowledgeSession a singleton, but consider using a ReentrantLock to prevent concurrent calls on the object. If you are isolating sessions, creating your own repository works best. Infinispan's Cache implements ConcurrentMap, so you could use the ID of the session as the key and the KnowledgeSession as the value.
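A sketch of such a session repository against the Drools 5 API; the repository class and method names are illustrative, and the backing map can be a plain ConcurrentHashMap on a single node or an Infinispan Cache (which implements ConcurrentMap) in a cluster:

```scala
import java.util.concurrent.ConcurrentMap
import org.drools.KnowledgeBase
import org.drools.runtime.StatefulKnowledgeSession

// Keeps one isolated StatefulKnowledgeSession per id, as suggested above.
class SessionRepository(kbase: KnowledgeBase,
                        store: ConcurrentMap[String, StatefulKnowledgeSession]) {

  // Returns the existing session for this id, creating one if needed.
  def getOrCreate(id: String): StatefulKnowledgeSession = {
    val fresh = kbase.newStatefulKnowledgeSession()
    val prev = store.putIfAbsent(id, fresh)
    if (prev != null) { fresh.dispose(); prev } else fresh
  }

  // Removes and disposes the session once its workflow completes.
  def release(id: String): Unit =
    Option(store.remove(id)).foreach(_.dispose())
}
```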

ADO Entity Best Practice

I'm just working on this interesting thing with ADO.NET entities and need your opinion. Often a solution would be created to provide a service (WCF or a web service) to allow access to the DB via Entity Framework, but I'm working on an application that runs internally and has domain access pretty much all the time. The question is whether it's good practice to create a data service for the application to interface with, or whether I could go from the WPF application directly to Entity Framework. What's the best practice in this case, and what are some of the pros and cons of the two different approaches?
By using Entity Framework directly, do you mean that the WPF application would connect to the database, or that it would still use services but re-use the entities?
If it's the first approach, I tend to be against this because it means multiple clients connecting to the database, which a) is an additional security concern, b) could make it more expensive from a licensing perspective, and c) means you don't get the benefits of connection pooling. Databases are the most expensive things to scale so I'd try to design the solution to use services and reduce the pressure on the database. But there are times when it's appropriate. One thing I've noticed is that applications which do start out connecting directly tend to get refactored to go via a service later; it seldom happens the other way around. But it might also be a case of YAGNI.
If it's the second approach, I think that's fine. It's common for people looking at WCF to think "service oriented" - that is, there should be a strict contract between services and things shouldn't be shared. But a "multi-tier" application, which is only designed to have one client, is also a perfectly valid architecture and doesn't need to be so decoupled. In that case, reusing the entities on both sides of the service boundary should be fine. However, I'm not sure how easy this is to do with EF specifically, since I haven't used it except in experiments.
It really depends on the level of complexity and the required level of coupling/modularity. I think a good compromise would be to create an EF model in its own library, or the like, with a simple level of abstraction. In that scenario, if you chose to change the model to use an exposed service instead of direct access, it shouldn't be a big deal to refactor the existing code, and the new service could utilize the existing library.