Integration tests in Scala when using companion objects with Play2 -> Cake pattern? - scala

I'm working on my first Scala application, where we use an ActiveRecord style to retrieve data from MongoDB.
I have models like User and Category, which all have a companion object that uses the trait:
class MongoModel[T <: IdentifiableModel with CaseClass] extends ModelCompanion[T, ObjectId]
ModelCompanion is a Salat class which provides common MongoDB CRUD operations.
This permits retrieving data like this:
User.profile(userId)
I have never had any experience with this ActiveRecord query style, but I know Rails people use it, and I think I saw it in the Play documentation (version 1.2?) for dealing with JPA.
For now it works fine, but I want to be able to run integration tests on my MongoDB.
I can run an "embedded" MongoDB with a library. The problem is that my host/port configuration is more or less hardcoded in the MongoModel class, which is extended by all the model companions.
I want to be able to specify a different host/port when I run integration tests (or any other "profile" I could create in the future).
I understand dependency injection well, having used Spring for many years in Java, and I know the drawbacks of all this static stuff in my application. I saw that there is now a Scala-friendly way to configure a Spring application, but I'm not sure using Spring is appropriate in Scala.
I have read some things about the Cake pattern, and it seems to do what I want: a kind of typesafe, compile-time-checked Spring context.
Should I definitely go with the Cake pattern, or is there another elegant alternative in Scala?
Can I keep using an ActiveRecord style or is it a total anti-pattern for testability?
Thanks

No static references at all: with the Cake pattern you get two different classes for your two namespaces/environments, each overriding the host/port resource in its own way. Create a trait containing your resources, inherit it twice (providing the actual host/port information for each environment), and mix it into the appropriate companion objects (one for prod and one for test). Inside MongoModel, add a self type that is your new trait, and refactor all host/port references in MongoModel to use that self type (your new trait). A minimal sketch is shown below.
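For illustration, here is a minimal sketch of that setup (the trait and object names such as MongoConfig, ProdMongoConfig and TestMongoConfig are made up for this example, and the Salat-specific type bounds from the question are left out):

// Resource trait: declares what every environment must provide
trait MongoConfig {
  def mongoHost: String
  def mongoPort: Int
}

// One concrete configuration per environment
trait ProdMongoConfig extends MongoConfig {
  val mongoHost = "prod-mongo.example.com"
  val mongoPort = 27017
}

trait TestMongoConfig extends MongoConfig {
  val mongoHost = "localhost"
  val mongoPort = 27028 // the embedded MongoDB started by the integration tests
}

// MongoModel declares a self type, so it can only be mixed into something
// that also provides a MongoConfig; all host/port references go through it
abstract class MongoModel[T] { self: MongoConfig =>
  protected def connectionString: String = "mongodb://" + mongoHost + ":" + mongoPort
}

// Production companion object
// object User extends MongoModel[UserModel] with ProdMongoConfig

// Companion used by the integration tests
// object TestUser extends MongoModel[UserModel] with TestMongoConfig

The production code mixes in ProdMongoConfig, while the integration tests use companions built with TestMongoConfig pointing at the embedded MongoDB.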

I'd definitely go with the Cake Pattern.
You can read the following article, which shows an example of how to use the Cake Pattern in a Play2 application:
http://julien.richard-foy.fr/blog/2011/11/26/dependency-injection-in-scala-with-play-2-it-s-free/

Related

Spring data repository and DAO Java Generics

Reading about using Java Generics in the DAO layer, I have a doubt about applying this to Spring Data repositories. I mean, with Spring Data repositories, you have something like this:
public interface OrderRepository extends CrudRepository<Order,OrderPK>{
}
But if I have 10 other entities, I have to create 10 interfaces like the one above to execute CRUD operations and so on, and I think this is not very scalable. Java Generics in a DAO are about creating one interface and one implementation and reusing them for all entities, but with Spring Data repositories I have to create one interface for each entity, so ...
You didn't really state a question, so I just add
Is this really true? And if so, why?
and answer it:
Yes, this is (almost) correct. Almost, because you should not create one repository per entity, but one repository per Aggregate Root. See http://static.olivergierke.de/lectures/ddd-and-spring/
Spring Data repositories offer various features for which Spring Data needs to know what entity it is dealing with. For example, query methods need to know the properties of the entity in order to convert the method name into a JPA-based query. So you have to pass that information to Spring Data at some point, and you also have to indicate which entities should be considered Aggregate Roots. The way you do that is by specifying the interface.
Do you really need that? Well, if all you want is generic CRUD functionality, you can get that straight out of the box with JPA. But if you want query methods, pagination, simple native queries and much more, Spring Data is a nice way to avoid lots of boilerplate code.
(Please keep in mind that I'm biased)

Creating rdbms DDL from scala classes

Is there a straightforward way to generate RDBMS DDL for a set of Scala classes?
I.e. to derive a table DDL for each class (whereby each case class field would translate to a field of the table, with a corresponding RDBMS type).
Or to directly create the database objects in the RDBMS.
I have found some documentation about Ebean being embedded in the Play framework, but I am not sure what side effects enabling Ebean in Play may have, and how much taming Ebean would require to avoid them. I have never even used Ebean before...
I would actually rather use something outside of Play, but if it's simple to accomplish in Play I would dearly like to know a clean way. Thanks in advance!
Is there a straightforward way to generate RDBMS DDL for a set of Scala classes?
Yes.
Ebean
Ebean is the default ORM provided by Play (for Java). You just have to create an entity and enable evolutions (enabled by default). It will create a .sql file in the conf/evolutions/default directory, and when you hit localhost:9000 it will show you an "apply script" prompt. But your tag says you are using Scala, so you can't really use Ebean with Scala. If you do, you will have to
sacrifice the immutability of your Scala class, and use the Java collections API instead of the Scala one.
Using Scala this way will just bring more trouble than using Java directly.
Source
JPA
JPA (using Hibernate as the implementation) is the default way to access and manage an SQL database in a standard Play Java application. It is still possible to use JPA from a Play Scala application, but it is probably not the best way, and it should be considered legacy and deprecated. Source
Anorm (if you want to write the DDL yourself)
Anorm is Not an Object Relational Mapper, so you have to write the DDL manually. Source
Slick
Functional Relational Mapping for Scala; it can also generate the DDL for your table definitions (see the sketch at the end of this answer). Source
Activate
Activate is a framework to persist objects in Scala. Source
Skinny
It is built upon the ScalikeJDBC library, which is a thin but powerful JDBC wrapper. Details1, Details2
Also check RDBMS with scala and Best data access option for play scala.
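As an illustration of the Slick option above, here is a minimal sketch (assuming the Slick 2.x API with the H2 driver; the users table is a made-up example, and note that you still describe the column mapping by hand rather than deriving it automatically from a case class):

import scala.slick.driver.H2Driver.simple._

// Table definition; each column maps to a corresponding RDBMS type
class Users(tag: Tag) extends Table[(Long, String)](tag, "users") {
  def id   = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def *    = (id, name)
}

object PrintDdl extends App {
  val users = TableQuery[Users]
  // Print the CREATE statements without touching a database
  users.ddl.createStatements.foreach(println)

  // Or create the objects directly in the RDBMS:
  // Database.forURL("jdbc:h2:mem:test", driver = "org.h2.Driver") withSession { implicit session =>
  //   users.ddl.create
  // }
}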

ZF models correct use

I am struggling to understand the correct usage of models. Currently I inherit from Db_Table directly and declare all the business logic there. I know that's not the correct way to do this.
One solution would be to use the Doctrine ORM, but that has a learning curve, and all the current components I use (paginator and auth) would need to be rewritten. Doctrine 1 also adds another dozen classes which need to be loaded.
So the cleanest implementation I have seen is to use Data Mapper classes between the so-called model and the DbTable. I haven't implemented this yet, as it seems to head towards writing another ORM. But an example could be something like this: SQL table User
create a class with setters, getters and business logic here: /model/User.php
data mapper: /model/mapper/UserMapper.php; the functionality is basically writing all the update and save actions in here.
the data source: /model/DbTable/User.php extends Db_Table_Abstract
The problems are with relationships between models.
I have found it beneficial to not have my models extend Db_Table, but to use composition instead. That means my model 'has a' Db_Table rather than 'is a' Db_Table.
That way I find it much easier to reference multiple tables in the same model, which is a common requirement. This is enough for a simple project. I am currently developing a more complex application and have used the Data Mapper pattern and have found that it has simplified my code more than I would have believed.
Specifically, I have created a class which provides all access to the database and exposes methods such as getUser(), etc. That way, if the DB changes, or my client wants something daft like storing records in XML, or we split the servers, I only have to rewrite one class.
Again, my models do not extend this class, but have an instance of it assigned as a property during construction.
I would say the 'correct' way depends on the situation. Following the YAGNI and KISS principles, it is not good to over-complicate your model setup unless you really believe that it will benefit you in the long run.
What is the application you are developing? How is your current setup of extending Db_Table holding you back?

EF4 - possible to mock ObjectContext for unit testing?

Can it be done without using TypeMock Isolator? I've found a few suggestions online, such as passing in a metadata-only connection string; however, nothing I've come across besides TypeMock seems to truly allow for a mock ObjectContext that can be injected into services for unit testing. Do I plunk down the $$ for TypeMock, or are there alternatives? Has nobody managed to create anything comparable to TypeMock that is open source?
I'm unit testing EF4 easily without mocking. What I did was create a repository interface, using the code from http://elegantcode.com/2009/12/15/entity-framework-ef4-generic-repository-and-unit-of-work-prototype/ as a basis. I then created an InMemoryRepository<T> class that implemented the IRepository interface, replaced the IObjectSet<T> with a List<T> inside the class, and changed the retrieval methods accordingly.
Thus if you need to do unit testing, pass in the InMemoryRepository rather than the DataRepository.
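That approach is not specific to EF4. As a purely illustrative sketch of the same idea in Scala (Repository and InMemoryRepository are made-up names here, not the types from the linked article):

import scala.collection.mutable.ListBuffer

// Generic repository abstraction shared by production and test implementations
trait Repository[T, Id] {
  def findById(id: Id): Option[T]
  def save(entity: T): Unit
  def all: Seq[T]
}

// Test double backed by an in-memory list instead of a real data store
class InMemoryRepository[T, Id](idOf: T => Id) extends Repository[T, Id] {
  private val items = ListBuffer.empty[T]
  def findById(id: Id): Option[T] = items.find(e => idOf(e) == id)
  def save(entity: T): Unit = {
    items --= items.filter(e => idOf(e) == idOf(entity)) // replace any existing entity with the same id
    items += entity
  }
  def all: Seq[T] = items.toList
}

// Unit tests construct the service with an InMemoryRepository; production code
// passes in the implementation that talks to the real database.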
Put your Linq2Entity query behind an interface, unit test it in isolation against a real database.
Write tests for your business logic with mocks for your query interfaces. Don't let Linq bleed into your business logic!
Don't use the Repository pattern!
Wrap the ObjectContext in a proxy class. Then inject that into your classes.
I don't think the repository pattern is the only answer to the question (it avoids the problem, sure)
I liked this answer; I think it is more appropriate for introducing tests to an existing codebase: Creating Interface for ObjectContext

Sending persisted JDO instances over GWT-RPC

I've just started learning Google Web Toolkit and finished writing the Stock Watcher tutorial app.
Is my thinking correct that if one wants to persist a business object (like a Stock) using JDO and send it back and forth to/from the client over RPC then one has to create two separate classes for that object: One with the JDO annotations for persisting it on the server and another which is serialisable and used over RPC?
I notice the Stock Watcher has separate classes and I can theorise why:
Otherwise the GWT compiler would try to generate JavaScript for everything the persisted class references, like JDO and com.google.blah.users.User, etc.
Also there may be logic in the server-side class which doesn't apply to the client, and vice versa.
I just want to make sure I'm understanding this correctly. I don't want to have to create two versions of all my business object classes which I want to use over RPC if I don't have to.
The short answer is: you don't need to create duplicate classes.
I recommend that you take a look at the following Google Groups discussion on the gwt-contributors list:
http://groups.google.com/group/google-web-toolkit-contributors/browse_thread/thread/3c768d8d33bfb1dc/5a38aa812c0ac52b
Here is an interesting excerpt:
If this is all you're interested in, I described a way to make GAE and GWT-RPC work together "out of the box". Just declare your entities as:
@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "false")
public class MyPojo implements Serializable { }
and everything will work, but you'll have to manually deal with re-attachment when sending objects from the client back to the server.
You can use this option, and you will not need a mirror (DTO) class.
You can also try Gilead (formerly hibernate4gwt), which takes care of some of the details involved in serializing enhanced objects.
Your assessment is correct. JDO replaces instances of Collections with its own implementations, in order to detect when the object graph changes, I suppose. These implementations are not known to the GWT compiler, so it will not be able to serialize them. This often happens for classes that are composed of otherwise GWT-compliant types but carry JDO annotations, especially if some of the object properties are Collections.
For a detailed explanation and a workaround, check out this pretty influential essay on the topic: http://timepedia.blogspot.com/2009/04/google-appengine-and-gwt-now-marriage.html
I finally found a solution. Don't change your object at all, but for the listing do it this way:
List<YourCustomObject> secureList = (List<YourCustomObject>) pm.newQuery(query).execute();
return new ArrayList<YourCustomObject>(secureList);
The actual problem is not in serializing the object... the problem is serializing the Collection class, which is implemented by Google and cannot be serialized by GWT. Copying the results into a plain ArrayList avoids that.
You do not have to create two versions of the domain model.
Here are two tips:
Use a String-encoded key, not the App Engine Key class.
pojo = pm.detachCopy(pojo)
...will remove all the JDO enhancements.
You don't have to create separate instances at all, in fact you're better off not doing it. Your JDO objects should be plain POJOs anyway, and should never contain business logic. That's for your business layer, not your persistent objects themselves.
All you need to do is include the source for the annotations you are using and GWT should compile your class just fine. Also, you want to avoid using libraries that GWT can't compile (like things that use reflection, etc.), but in all the projects I've done this has never been a problem.
I think that a better format for sending objects through GWT is JSON. In this case the server would send a JSON string, which would then have to be parsed on the client. The advantage is that the final JavaScript rendered in the browser is smaller, causing the page to load faster.
Secondly, to send objects through GWT the objects should be serializable, which may not be the case for all objects.
Thirdly, GWT has built-in functions to handle JSON, so there are no issues on the client end.