Is there a way to hook into the Play evolutions framework such that when it succeeds in migrating from n.sql to n+1.sql to n+2.sql ..., it calls some post-success hook in the Play app (something like postSchemaMigration(n: Int))?
Can I manually check and apply evolutions one by one in the global object somewhere before the server bootstraps?
As it stands, Play has no built-in mechanism to let you control the evolution process: either it succeeds completely, or it fails. If your application runs, then all evolutions have been applied.
Depending on your use case, you have a few options. The most flexible is simply not to use Play's evolution framework at all, and instead apply your database evolutions with custom code in the global object, using plain-ol' JDBC. Along roughly the same lines, you could implement a custom Play plugin that applies your evolutions.
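To make that first option concrete, here is a minimal sketch in Play 2's Java API, assuming Play's own evolution plugin is disabled in your configuration. The helpers latestRevision, alreadyApplied, readScript, markApplied, and postSchemaMigration are all invented names you would implement yourself; none of them exist in Play:

```java
import java.sql.Connection;
import java.sql.Statement;
import play.Application;
import play.GlobalSettings;
import play.db.DB;

public class Global extends GlobalSettings {

    @Override
    public void onStart(Application app) {
        // Apply each revision in order, calling the post-success hook
        // after every script that goes through.
        for (int n = 1; n <= latestRevision(); n++) {
            if (alreadyApplied(n)) continue;
            try (Connection conn = DB.getConnection();
                 Statement stmt = conn.createStatement()) {
                stmt.execute(readScript(n)); // e.g. conf/evolutions/default/<n>.sql
                markApplied(n);              // bookkeeping, like play_evolutions
                postSchemaMigration(n);      // the hook you asked about
            } catch (Exception e) {
                throw new RuntimeException("Evolution " + n + " failed", e);
            }
        }
    }

    // Stubs so the sketch compiles; a real version would read the scripts
    // from disk and track applied revisions in a bookkeeping table similar
    // to Play's own play_evolutions.
    private int latestRevision() { return 0; }
    private boolean alreadyApplied(int n) { return true; }
    private String readScript(int n) { return ""; }
    private void markApplied(int n) {}
    private void postSchemaMigration(int n) {}
}
```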
Or you could modify the existing evolution framework. Play is open source, after all, and if your code solves a common problem, it may even make sense to submit it for inclusion in the standard Play distribution.
I am developing a new project with Spring Boot and GraphQL. I am confused about how to proceed, because there are two ways to develop it: via a graphqls schema file, or via an annotation-based approach. I prefer the annotation-based approach, but is it stable? For example: https://github.com/leangen/graphql-spqr.
I second AllirionX's answer and just want to add a few details.
Firstly, to answer your question: yes, SPQR has been pretty stable for quite a while now. Many teams are successfully using it in production. The only reason it is still in 0.X versions is the lack of documentation, but an occasional small breaking change in the API does occur.
Secondly, I'd also like to add that going code-first doesn't mean you can't also go contract-first. In fact, I'd argue you should still develop in that style. The only difference is that you get to write your contracts as Java interfaces instead of a new language.
As I highlight in SPQR's README:
Note that developing in the code-first style is still effectively schema-first; the difference is that you develop your schema not in yet another language, but in Java, with your IDE, the compiler and all your tools helping you. Breaking changes to the schema mean the compilation will fail. No need for linters or other fragile hacks.
So whether the API (as described by the interfaces) changes as the other code changes is entirely up to you. And if you need the SDL for any reason, it can always be generated from the executable schema or the introspection result.
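As a rough illustration of what "contracts as Java interfaces" can look like with SPQR: UserService, User, and the lookup logic below are made up for the example; GraphQLSchemaGenerator, @GraphQLQuery, and @GraphQLArgument are the library's actual entry points, but check the details against the SPQR version you use:

```java
import graphql.schema.GraphQLSchema;
import graphql.schema.idl.SchemaPrinter;
import io.leangen.graphql.GraphQLSchemaGenerator;
import io.leangen.graphql.annotations.GraphQLArgument;
import io.leangen.graphql.annotations.GraphQLQuery;

// The contract: a plain Java interface instead of an SDL file.
// Breaking it breaks the compilation of every implementation.
interface UserService {
    @GraphQLQuery(name = "user")
    User findUser(@GraphQLArgument(name = "id") String id);
}

class User {
    private final String id;
    private final String name;

    User(String id, String name) { this.id = id; this.name = name; }

    public String getId() { return id; }
    public String getName() { return name; }
}

class UserServiceImpl implements UserService {
    @Override
    public User findUser(String id) {
        return new User(id, "Jane Doe"); // stand-in for a real lookup
    }
}

public class ContractFirstDemo {
    public static void main(String[] args) {
        GraphQLSchema schema = new GraphQLSchemaGenerator()
                .withOperationsFromSingleton(new UserServiceImpl(), UserService.class)
                .generate();
        // And if you need the SDL after all, generate it from the schema:
        System.out.println(new SchemaPrinter().print(schema));
    }
}
```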
I don't think there is a good or a bad answer to the "how to proceed" question.
There are two different approaches to building your GraphQL server (with graphql-java, graphql-java-tools, or graphql-spqr), and each has its advantages and drawbacks. All of those libraries offer a Spring Boot starter. Note that I have never used graphql-spqr.
Schema first (with graphql-java or graphql-java-tools)
In this approach you first create an SDL file. The GraphQL library will parse it, and "all" you have to do is wire each GraphQL type to its data fetcher; graphql-java-tools can even do the wiring for you (see the sketch after this list).
Advantages
no need to go into the details of how the GraphQL schema is built server side
you have a nice graphqls schema file that can be read and used by a client, easing the work of building a GraphQL client
you actually define your API first (the SDL schema): changing the implementation of the API will not require any change client side
Drawbacks
no compile-time checks. If something is not wired properly, an exception will be thrown at runtime. But this can be mitigated by using graphql-java-codegen, which will generate the Java classes and interfaces for your GraphQL types, unions, queries, enums, etc.
when using graphql-java (no auto-wiring), I felt I had to write long, boring data fetchers, so I switched to graphql-java-tools.
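As a rough sketch of the schema-first flow with graphql-java-tools: the SDL and the Query resolver below are invented for the example; SchemaParser and GraphQLQueryResolver are the library's real entry points, though the package names vary between the older com.coxautodev and the newer graphql.kickstart coordinates:

```java
import com.coxautodev.graphql.tools.GraphQLQueryResolver;
import com.coxautodev.graphql.tools.SchemaParser;
import graphql.schema.GraphQLSchema;

// schema.graphqls (the contract, written first):
//
//   type Query {
//     hello(name: String!): String!
//   }

// Wired to the SDL's Query type purely by naming convention.
class Query implements GraphQLQueryResolver {
    public String hello(String name) {
        return "Hello, " + name + "!";
    }
}

public class SchemaFirstDemo {
    public static void main(String[] args) {
        GraphQLSchema schema = SchemaParser.newParser()
                .file("schema.graphqls") // bad wiring fails here, at runtime
                .resolvers(new Query())
                .build()
                .makeExecutableSchema();
    }
}
```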
Code first (with graphql-java, graphql-java-tools, or graphql-spqr)
The GraphQL schema is built programmatically (through annotations with graphql-spqr, or by building a GraphQLSchema object in graphql-java; a bare-bones graphql-java version is sketched after this list).
Advantages
compile-time checks
no need to maintain both the SDL and the domain classes
Drawbacks
as your schema is generated from your code base, changing your code base will change the API, which might not be great for the clients that depend on it
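For contrast, here is a bare-bones code-first schema built directly with graphql-java's builder API (recent graphql-java versions; the hello field is again invented for the example):

```java
import graphql.Scalars;
import graphql.schema.DataFetcher;
import graphql.schema.FieldCoordinates;
import graphql.schema.GraphQLCodeRegistry;
import graphql.schema.GraphQLFieldDefinition;
import graphql.schema.GraphQLObjectType;
import graphql.schema.GraphQLSchema;

public class CodeFirstDemo {
    public static void main(String[] args) {
        // The shape of the type, declared in Java rather than SDL.
        GraphQLObjectType queryType = GraphQLObjectType.newObject()
                .name("Query")
                .field(GraphQLFieldDefinition.newFieldDefinition()
                        .name("hello")
                        .type(Scalars.GraphQLString))
                .build();

        // The wiring: which data fetcher serves Query.hello.
        GraphQLCodeRegistry registry = GraphQLCodeRegistry.newCodeRegistry()
                .dataFetcher(FieldCoordinates.coordinates("Query", "hello"),
                        (DataFetcher<String>) env -> "world")
                .build();

        GraphQLSchema schema = GraphQLSchema.newSchema()
                .query(queryType)
                .codeRegistry(registry)
                .build();
    }
}
```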
This is my opinion of these different frameworks, and I would be happy to be shown where I am wrong. The ultimate decision depends on your project: its size, whether there is an existing code base, and so on.
I'm new to the Play Framework and am used to managing transactions in the Java/Spring style, with controller, transactional service, and DAO layers. It's a pretty common case for me to have multiple DAO operations in a service method and mark it @Transactional to roll back all changes if something goes wrong. The service is isolated from the DAO and knows nothing about the database.
But I didn't find anything like this in Anorm and Play. All the logic is placed in controllers, and you can only manage transactions in this ugly way: Database transactions in Play framework scala applications (anorm)
We have several problems here:
The service turns into a DAO
If we need to call the same DAO method from another service, we have to change it in the same way
Is there a nice way to manage transactions in Play? What about other frameworks, like Slick? How can Play be used in production with such restrictions?
Anorm's DB.withTransaction creates a transaction and commits it when the block exits, so there is no out-of-the-box support for your use case. However, it is quite straightforward to build your own transaction engine on top of what Anorm offers, one that spans multiple services: it creates a transaction if none is present in a ThreadLocal and stores it there, or reuses the one found there in subsequent 'transactional' calls. That way you can have one big transaction that rolls back on an error deep down in the DAO layer. We have a solution like this in production and it works just fine.
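The production version of this is Scala wrapping Anorm, but the pattern itself is easy to show in plain JDBC terms. A minimal sketch; TransactionEngine and inTransaction are invented names, and a real implementation would also have to deal with the Future caveat described below:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.function.Function;
import javax.sql.DataSource;

public final class TransactionEngine {

    private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<>();
    private final DataSource dataSource;

    public TransactionEngine(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Joins the transaction already open on this thread if there is one;
    // otherwise opens a new one, commits on success, rolls back on error.
    public <T> T inTransaction(Function<Connection, T> block) {
        Connection existing = CURRENT.get();
        if (existing != null) {
            return block.apply(existing); // nested call: reuse the outer transaction
        }
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            CURRENT.set(conn);
            try {
                T result = block.apply(conn);
                conn.commit();
                return result;
            } catch (RuntimeException e) {
                conn.rollback(); // one big rollback, however deep the failure
                throw e;
            } finally {
                CURRENT.remove();
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Services wrap their DAO work in inTransaction; nested calls on the same thread automatically share the outer transaction, which is exactly what breaks once the work hops threads.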
However, there is a conceptual problem that you should be aware of: as soon as you need to call a service that returns a Future, you no longer have the transaction (you are possibly on another thread), or you have to block (which is not a good thing in production).
I am using the built-in cache in a Scala Play Framework 2.4 application.
During development, I would like to be able to deactivate the whole cache temporarily.
How would I do that?
If you're using Play's default cache implementation, which is EhCache, you can run your Play application with the system property net.sf.ehcache.disabled=true to turn off the cache. Of course, this is not ideal for automated testing, and it only applies to the EhCache implementation.
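For example, assuming you launch the app through sbt (or activator) in dev mode; adjust to however you actually start it:

```
sbt -Dnet.sf.ehcache.disabled=true run
```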
I want to write a Java function in a Play Framework project and execute it from the terminal, like a Django management command; my goal is to run it from cron once it's ready.
Is it possible to do that? I'm sorry if this sounds silly; I'm not a Java developer, I'm a Python/Django developer who has been asked to help another team. Thanks anyway.
As Play strongly supports RESTful approaches to development, you should simply be able to expose your Play action via a well-defined URL, and then use curl to call that URL.
However, you could also use the concept of Jobs in Play. Play Jobs were designed to give CRON-like functionality within your application without needing to rely on external scheduling mechanisms.
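Assuming a Play 1.x project (Jobs were removed in Play 2, which uses the Akka scheduler instead), a minimal sketch looks like this; ReportJob, its schedule, and its body are made up:

```java
import play.jobs.Every;
import play.jobs.Job;

// Runs every hour; use @On("0 0 4 * * ?") instead for a CRON expression.
@Every("1h")
public class ReportJob extends Job {

    @Override
    public void doJob() {
        // Put the function you wanted to run from the terminal here;
        // Play's scheduler invokes it, so no external cron entry is needed.
    }
}
```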
I have quite a large code base using a variety of ADO technologies (some EF, and in some cases ADO.NET used directly).
I'm wondering if there is any way to globally intercept any ADO.NET calls so that I can start auditing information: the exact SQL statements executed, the time taken, the results returned, etc.
The main idea being that if I can do this, I shouldn't have to change any of my existing code; I should be able to just intercept/wrap the ADO.NET calls... Is this possible?
You can globally intercept any methods that you have access to (i.e., your generated models and context). If you need to intercept methods in the framework BCL, then no.
If you just want to get the SQL generated from your EF models, then intercept one of the desired methods with an OnMethodBoundaryAspect, and you can do your logging in the OnEntry and OnExit methods.
Remember, you can only intercept code you have access to. Generated EF code is accessible, but regeneration will overwrite any changes you make to it, so you will need to apply the aspect using either a partial class or an assembly-level declaration. I would suggest the latter, since you want global interception.
Just my 2 cents: you might want to look at other alternatives for this, such as SQL Profiler, or redesigning your architecture.
Afterthought is an open-source tool that supports modifying an existing DLL without requiring you to recompile from source to add aspect attributes. For this to work, you would need to create amendments (the way you describe your changes in Afterthought) in a separate DLL, and that DLL would need an assembly-level attribute implementing IAmendmentAttribute to identify the types in your target assembly to process.
Take a look at the logging example to see how this works and let me know if you have any questions/issues.
Please note that Afterthought modifies your target assembly to make calls to static methods in another assembly (your tool). If you want to intercept calls without modifying the target assembly in any way, then I recommend looking into the .NET Profiling API.
Jamie Thomas (primary author of Afterthought)