Creating RDBMS DDL from Scala classes

Is there a straightforward way to generate RDBMS DDL for a set of Scala classes?
That is, to derive a table DDL for each class (where each case class field would translate to a field of the table, with a corresponding RDBMS type).
Or, to directly create the database objects in the RDBMS.
I have found some documentation about Ebean being embedded in the Play framework, but I was not sure what side effects enabling Ebean in Play may have, and how much taming Ebean would require to avoid them. I have never even used Ebean before...
I would actually rather use something outside of Play, but if it's simple to accomplish in Play, I would dearly like to know a clean way. Thanks in advance!

Is there a straightforward way to generate RDBMS DDL for a set of Scala classes?
Yes.
Ebean
Ebean is the default ORM provided by Play (for Java projects): you just create an entity and enable evolutions (which are enabled by default). Play will then create a .sql file in the conf/evolutions/default directory, and when you hit localhost:9000 it will show you an "apply script" prompt. But your tag says you are using Scala, and you can't really use Ebean with Scala. If you do, you will have to
sacrifice the immutability of your Scala class, and to use the Java collections API instead of the Scala one. Using Scala this way will just bring more troubles than using Java directly.
Source
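For illustration, a Play evolutions script (e.g. conf/evolutions/default/1.sql) looks like the following; the table and columns here are made up:

    # --- !Ups
    CREATE TABLE person (
      id BIGINT NOT NULL AUTO_INCREMENT,
      name VARCHAR(255) NOT NULL,
      PRIMARY KEY (id)
    );

    # --- !Downs
    DROP TABLE person;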
JPA
JPA (using Hibernate as the implementation) is the default way to access and manage an SQL database in a standard Play Java application. It is still possible to use JPA from a Play Scala application, but it is probably not the best way, and it should be considered legacy and deprecated. Source
Anorm (if you want to write the DDL yourself)
Anorm is not an object-relational mapper, so you have to write the DDL manually. Source
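A minimal sketch of hand-written DDL executed through Anorm in a Play 2.x app (the table is illustrative):

    import anorm._
    import play.api.db.DB
    import play.api.Play.current

    // The DDL is ordinary SQL that you write yourself; Anorm just executes it.
    DB.withConnection { implicit c =>
      SQL("""
        CREATE TABLE users (
          id BIGINT AUTO_INCREMENT PRIMARY KEY,
          email VARCHAR(255) NOT NULL
        )
      """).execute()
    }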
Slick
Functional relational mapping for Scala. Source
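Slick is probably the closest match to the question: it can generate the DDL from table definitions. A minimal sketch with Slick 3 and a hypothetical users table:

    import slick.jdbc.H2Profile.api._

    case class User(id: Int, email: String)

    class Users(tag: Tag) extends Table[User](tag, "users") {
      def id    = column[Int]("id", O.PrimaryKey, O.AutoInc)
      def email = column[String]("email")
      def * = (id, email) <> (User.tupled, User.unapply)
    }

    val users = TableQuery[Users]

    // Print the generated CREATE statements instead of executing them:
    users.schema.createStatements.foreach(println)

    // Or create the objects directly in the database:
    // val db = Database.forConfig("mydb")
    // db.run(users.schema.create)

Note that the Table definition is written by hand; Slick does not derive it from an arbitrary case class automatically.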
Activate
Activate is a framework to persist objects in Scala. Source
Skinny
It is built upon the ScalikeJDBC library, which is a thin but powerful JDBC wrapper. Details 1, Details 2
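For a feel of the underlying layer, here is a sketch of executing hand-written DDL through ScalikeJDBC (the connection settings and table are illustrative):

    import scalikejdbc._

    ConnectionPool.singleton("jdbc:h2:mem:demo", "user", "pass")

    DB.autoCommit { implicit session =>
      sql"""
        CREATE TABLE members (
          id BIGINT AUTO_INCREMENT PRIMARY KEY,
          name VARCHAR(64) NOT NULL
        )
      """.execute.apply()
    }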
Also check RDBMS with scala and Best data access option for play scala.

Related

JPA Entity CRUD type Operations support in MyBatis

For some odd reason I cannot go with a JPA vendor like Hibernate, etc., and I must use MyBatis.
Is there any implementation that provides similar CRUD-operation facilities in MyBatis?
(Like GenericDAO: save, persist, merge, etc.)
I have managed to come up with a single-interface implementation of CRUD-type operations (like a generic DAO), but each table still has to have its own query in an XML file (as table names and column names differ).
Would it make sense to come up with a generic implementation,
where I could run any CRUD operation on any table object through only 4 XML queries (insert, update, read, delete), passing the table name, column names, column values, etc. as arguments?
Does it look like reinventing the wheel in MyBatis, or does MyBatis have some similar support?
You can try MyBatis-Plus. It is made for these cases.
MyBatis is not an ORM, instead it maps the result from SQL statements to objects.
You need to write SQL.
You will have a hard time if you try to apply the JPA model to working in MyBatis. You need to learn how MyBatis works instead.
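For what it's worth, a fully generic mapper is technically possible, because MyBatis splices raw strings into the SQL with ${...} (unescaped, hence a SQL-injection risk), while #{...} becomes a PreparedStatement parameter. A hypothetical sketch of two of the four queries:

    <!-- GenericMapper.xml (hypothetical) -->
    <mapper namespace="GenericMapper">

      <select id="read" resultType="map">
        SELECT * FROM ${tableName} WHERE ${idColumn} = #{idValue}
      </select>

      <delete id="delete">
        DELETE FROM ${tableName} WHERE ${idColumn} = #{idValue}
      </delete>

    </mapper>

The loss of type safety and the injection risk are exactly why per-table mappers are the usual practice.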
You may be interested in the MyBatis Generator; its introduction page describes what it does.
The generator looks at the physical tables in an RDBMS and generates the CRUD mappings. That is half the job done; the other half is to use these mappings in your actual code.
To be clear, the generator generates only the CRUD. For more complex operations like aggregations or joins, you may need to write the mappers on your own.

Play: Exclude certain tables from being managed by the evolutions?

I have a Scala Play app over a MySQL instance. I store my evolutions as conf/evolutions/$db/$step.sql files. However, some of my tables are dynamic i.e. their schema may be modified during the runtime of the Play app. What is the best way to exclude these tables from Play's evolutions framework?
I have a couple of choices, and none of them looks especially elegant:
1) Move all the offending tables to a separate database where the evolutions plugin is disabled. This is not great, since I would also have to move all related tables that have foreign-key constraints out of the current database.
2) Somehow override Play's evolutions framework. Unfortunately, Play's evolutions framework is neither modular nor extensible. I was hoping it would have Scala or Java hooks such as def onUp(tableName: String) and def onDown(tableName: String) that I could override, but it has no such nice abstractions and is quite monolithic.
3) I know Play creates an entry in a table called play_evolutions. I could modify that table from my app in onStart to manually remove everything related to the offending tables. That would work, but it is very hackish and depends hard on Play's internal representation and handling of schema changes.
4) Simply move all offending-table SQL statements to conf/evolutions/$db/ignore_evolution_$step.sql. This keeps those tables out of the watchful eye of the evolutions framework, but I essentially have to roll my own framework to parse and execute these files (see the sketch after this list).
5) Anything else I missed?
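A minimal sketch of what option 4 could look like, assuming hypothetical file names and no tracking of already-applied scripts:

    import java.io.File
    import java.sql.DriverManager
    import scala.io.Source

    // Find conf/evolutions/default/ignore_evolution_*.sql, in step order,
    // and run each statement outside of Play's evolutions bookkeeping.
    def runIgnoredEvolutions(dbUrl: String, user: String, pass: String): Unit = {
      val dir = new File("conf/evolutions/default")
      val scripts = dir.listFiles()
        .filter(_.getName.startsWith("ignore_evolution_"))
        .sortBy(_.getName)
      val conn = DriverManager.getConnection(dbUrl, user, pass)
      try scripts.foreach { f =>
        val sql = Source.fromFile(f).mkString
        // Naive statement splitting; enough for simple DDL scripts.
        sql.split(";").map(_.trim).filter(_.nonEmpty).foreach { stmt =>
          conn.createStatement().execute(stmt)
        }
      } finally conn.close()
    }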

Generating DAO with Hibernate Tools on Eclipse

How can I use the tools to generate DAOs?
In fact, instead of passing through the hbm files, I need to configure Hibernate Tools to generate the DAOs and the annotations.
See Hibernate Tools - DAO generation and How generate DAO with Hibernate Tools in Eclipse?
First, let me assume by DAO you mean POJO/entity beans. Basically, you can accomplish your task through either forward or reverse engineering. For forward engineering, you can look into the AndroMDA tool. If you wish to accomplish it through reverse engineering, click here.
Hope this will be helpful.
Welcome. You have to write all your data access logic by hand (if I'm not wrong). Hibernate lets you interact with a database in three ways (a sketch of the latter two follows below).
Native SQL, which is nothing but plain DDL/SQL queries. This is used only rarely in Hibernate projects, even though it is faster than the options below. The reason is simple: one of the key advantages of Hibernate (or any other popular ORM framework) over JDBC is that you can rid your application code of database-specific queries!
HQL stands for Hibernate Query Language, Hibernate's proprietary query language. It looks similar to native SQL, but the key difference is that object/class names are used instead of table names, and property names are used instead of column names. It is a more object-oriented approach. Some interesting things happen in the background; check them out if you are keen!
The Criteria API is a more object-oriented and elegant alternative to HQL. It is always a good solution for an application that has many optional search criteria.
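A minimal sketch of the latter two, written in Scala against Hibernate's Java API, assuming a hypothetical mapped entity Customer with a name property:

    import org.hibernate.Session

    // HQL: note the class name Customer and the property name "name",
    // not the table or column names.
    def findByNameHql(session: Session, name: String): java.util.List[Customer] =
      session.createQuery("from Customer c where c.name = :n", classOf[Customer])
        .setParameter("n", name)
        .getResultList

    // JPA Criteria API: the same query built from objects, with no query string.
    def findByNameCriteria(session: Session, name: String): java.util.List[Customer] = {
      val cb   = session.getCriteriaBuilder
      val cq   = cb.createQuery(classOf[Customer])
      val root = cq.from(classOf[Customer])
      cq.select(root).where(cb.equal(root.get("name"), name))
      session.createQuery(cq).getResultList
    }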
You can find lots of examples on the internet. Please post your specific requirements for further clarification of your problem.
Cheers!

Integration tests in Scala when using companions with Play 2? -> Cake pattern?

I'm working on my first Scala application, where we use an ActiveRecord style to retrieve data from MongoDB.
I have models like User and Category, which all have a companion object that uses the trait:
class MongoModel[T <: IdentifiableModel with CaseClass] extends ModelCompanion[T, ObjectId]
ModelCompanion is a Salat class which provide common MongoDB crud operations.
This permits to retrieve data like this:
User.profile(userId)
I have never had any experience with this ActiveRecord query style, but I know Rails people use it, and I think I saw it in the Play documentation (version 1.2?) for dealing with JPA.
For now it works fine, but I want to be able to run integration tests on my MongoDB.
I can run an "embedded" MongoDB with a library. The big problem is that my host/port configuration is effectively hardcoded in the MongoModel class, which is extended by all the model companions.
I want to be able to specify a different host/port when I run integration tests (or any other "profile" I could create in the future).
I understand dependency injection well, having used Spring for many years in Java, and I know the drawbacks of all this static stuff in my application. I saw that there is now a Scala-friendly way to configure a Spring application, but I'm not sure using Spring is appropriate in Scala.
I have read some stuff about the Cake pattern and it seems to do what I want, being some kind of typesafe, compile-time-checked spring context.
Should I definitely go to the Cake pattern, or is there any other elegant alternative in Scala?
Can I keep using an ActiveRecord style or is it a total anti-pattern for testability?
Thanks
No static references at all: with the Cake pattern you get two different classes for your two namespaces/environments, each overriding the host/port resource on its own. Create a trait containing your resources, inherit from it twice (providing the actual host/port information for each environment), and mix the results into the appropriate companion objects (one for production and one for test). Inside MongoModel, add a self type that is your new trait, and refactor all host/port references in MongoModel to go through that self type.
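A minimal sketch of that setup; all names besides MongoModel are illustrative:

    // The resource trait: what an environment must provide.
    trait MongoConfig {
      def mongoHost: String
      def mongoPort: Int
    }

    // One concrete configuration per environment.
    trait ProdMongoConfig extends MongoConfig {
      val mongoHost = "db.example.com"
      val mongoPort = 27017
    }

    trait TestMongoConfig extends MongoConfig {
      val mongoHost = "localhost"
      val mongoPort = 27018 // embedded MongoDB for integration tests
    }

    // The self type means MongoModel can only be mixed into a MongoConfig,
    // so host/port are no longer hardcoded inside it.
    abstract class MongoModel[T] { self: MongoConfig =>
      def connectionString: String = s"mongodb://$mongoHost:$mongoPort"
    }

    // Companion objects choose their environment at mix-in time:
    // object User     extends MongoModel[User] with ProdMongoConfig
    // object TestUser extends MongoModel[User] with TestMongoConfig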
I'd definitely go with the Cake Pattern.
You can read the following article, which shows an example of how to use the Cake pattern in a Play 2 application:
http://julien.richard-foy.fr/blog/2011/11/26/dependency-injection-in-scala-with-play-2-it-s-free/

How can I leverage JPA when code is generated?

I have classes for entities like Customer, InternalCustomer, and ExternalCustomer (with the appropriate inheritance) generated from an XML schema. I would like to use JPA (suggest a specific implementation in your answer if relevant) to persist objects of these classes, but I can't annotate them since they are generated: when I change the schema and regenerate, the annotations would be wiped. Can this be done without using annotations, or even without a persistence.xml file?
Also, is there a tool to which I can provide the classes (or the schema) as input and have it give me the SQL statements to create the DB (or even create it for me)? It would seem that, since I have a schema, all the information needed to create the DB should be in there. I am not talking about creating indexes or any tuning of the DB, just creating the right tables etc.
Thanks in advance
You can certainly use JDO in such a situation: dynamically generate the classes, the metadata, and any byte-code enhancement, and then persist at runtime, making use of the class loader in which your classes have been generated and enhanced. As per
http://www.jpox.org/servlet/wiki/pages/viewpage.action?pageId=6619188
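With JDO the mapping lives in an external metadata file rather than in annotations, so regenerating the classes does not wipe it; DataNucleus's SchemaTool can then generate the schema from that metadata. A hypothetical package.jdo sketch:

    <?xml version="1.0"?>
    <jdo>
      <package name="com.example.model">
        <class name="Customer" table="CUSTOMER">
          <field name="id" primary-key="true"/>
          <field name="name" column="NAME"/>
        </class>
      </package>
    </jdo>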
JPA doesn't have such a metadata API, unfortunately.
--Andy (DataNucleus)