Is there something like AutoMapper for Scala?

I have been looking for a fluent Scala API for object-to-object mapping, similar to AutoMapper.
Are there such tools in Scala?

I think there is less need for something like AutoMapper in Scala: with idiomatic Scala, models are easier to write and manipulate, and you can easily define automatic flattening/projection using implicit conversions.
For example, here is the Scala equivalent of AutoMapper's flattening example:
// The full model
case class Order(customer: Customer, items: List[OrderLineItem] = List()) {
  def addItem(product: Product, quantity: Int) =
    copy(items = OrderLineItem(product, quantity) :: items)
  def total = items.foldLeft(0.0) { _ + _.total }
}
case class Product(name: String, price: Double)
case class OrderLineItem(product: Product, quantity: Int) {
  def total = quantity * product.price
}
case class Customer(name: String)
case class OrderDto(customerName: String, total: Double)

// The flattening conversion
object Mappings {
  implicit def order2OrderDto(order: Order): OrderDto =
    OrderDto(order.customer.name, order.total)
}
// A working example
import Mappings._

val customer = Customer("George Costanza")
val bosco = Product("Bosco", 4.99)
val order = Order(customer).addItem(bosco, 15)
val dto: OrderDto = order // automatic conversion at compile time!
println(dto) // prints: OrderDto(George Costanza,74.85000000000001)
PS: I should not use Double for money amounts...
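For the record, a variant using BigDecimal (my adjustment, not part of the original example) avoids the floating-point artifact visible in the printed total:

// Sketch: the same model with BigDecimal for money; BigDecimal("4.99")
// is built from a String so the value never passes through a Double
case class Product(name: String, price: BigDecimal)
case class OrderLineItem(product: Product, quantity: Int) {
  def total: BigDecimal = product.price * quantity
}
// Order#total then becomes items.foldLeft(BigDecimal(0)) { _ + _.total },
// and (after changing OrderDto.total to BigDecimal as well) the example
// prints OrderDto(George Costanza,74.85) instead.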

I agree with @paradigmatic: it's true that the code will be much cleaner in Scala, but sometimes you find yourself mapping between case classes that look very similar, and that's just a waste of keystrokes.
I've started working on a project to address the issues, you can find it here: https://github.com/bfil/scala-automapper
It uses macros to generate the mappings for you.
At the moment it can map a case class to a subset of the original case class; it handles Option values and optional fields, as well as other minor things.
I'm still trying to figure out how to design the API to support renaming or mapping specific fields with custom logic; any ideas or input on that would be very helpful.
It can be used for some simple cases right now, and of course if the mapping gets very complex it might just be better to define the mapping manually.
The library also allows manually defining Mapping types between case classes, which can in any case be provided as an implicit parameter to an AutoMapping.map(sourceClass) or sourceClass.mapTo[TargetClass] call.
UPDATE
I've just released a new version that handles Iterables and Maps, and allows passing in dynamic mappings (to support renaming and custom logic, for example).
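A minimal usage sketch based on the method names mentioned above (the exact package and imports may differ between versions, so check the project README):

// Hypothetical source/target pair; the macro can derive the mapping
// because PersonDto's fields are a subset of Person's
case class Person(name: String, age: Int, email: Option[String])
case class PersonDto(name: String, age: Int)

// with the library's implicits in scope:
// val dto = Person("Jane", 42, None).mapTo[PersonDto]
// => PersonDto(Jane,42)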

For complex mappings, one may want to consider Java-based mappers such as:
http://modelmapper.org/user-manual/property-mapping/#conditional-mapping
http://github.com/smooks/smooks/tree/v1.5.1/smooks-examples/model-driven-basic-virtual/
http://orika-mapper.github.io/orika-docs/advanced-mappings.html
Scala objects can be accessed from Java:
http://lampwww.epfl.ch/~michelou/scala/using-scala-from-java.html
http://lampwww.epfl.ch/~michelou/android/java-to-scala.html
Implementing implicit conversions for complex objects would be smoother with declarative mappings than with handcrafted ones.
Found a longer list here:
http://www.javacodegeeks.com/2013/10/java-object-to-object-mapper.html
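If you go the Java-mapper route, one way to keep the Scala call sites clean is to hide the Java mapper behind the same implicit-conversion pattern shown in the first answer. A sketch using ModelMapper (OrderBean and OrderDtoBean are hypothetical JavaBean-style classes, since ModelMapper instantiates destination types reflectively):

import org.modelmapper.ModelMapper
import scala.beans.BeanProperty

class OrderBean { @BeanProperty var customerName: String = "" }
class OrderDtoBean { @BeanProperty var customerName: String = "" }

object JavaMappings {
  private val modelMapper = new ModelMapper()
  // the Java library does the declarative mapping; Scala code only
  // sees an ordinary implicit conversion
  implicit def order2Dto(order: OrderBean): OrderDtoBean =
    modelMapper.map(order, classOf[OrderDtoBean])
}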

Related

What is the best way to build a model/entity in Scala to map with Squeryl?

I'm very new to Scala, Play Framework and Squeryl. I already understand the concepts of val and var, but I'm having a hard time trying to model my entities. As I saw in the Squeryl documentation, sometimes they use var for id and other times val. What is the best approach for id and other values (sometimes they use var/val and other times Option; the latter is only for nullable fields on entities)?
Example 1
class Playlist(var id: Long,
               var name: String,
               var path: String) extends KeyedEntity[Long] {
}
Example 2
class Author(val id: Long,
             val firstName: String,
             val lastName: String,
             val email: Option[String]) {
  def this() = this(0, "", "", Some(""))
}
And why do they sometimes extend KeyedEntity[T] and sometimes not?
I would really appreciate some help!
In Squeryl 0.9.5, all entities needed to extend KeyedEntity[T]; with 0.9.6, however, you can provide the KeyedEntityDef implicitly. See this for an example.
Option[T] is used when the field can contain null values. When the field is null, None is returned.
As for val vs. var, it is exactly as with any other class in Scala: var allows reassignment, whereas val is, more or less, read-only. If you are going to change values, a lot of people simply make the field a var. Alternatively, if you are using a case class, you can use copy to create a new object with updated values, or you can update the value via reflection.
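To illustrate the copy route with an entity like the ones above, here is a small sketch (a hypothetical case-class variant, not code from the Squeryl docs):

case class Author(id: Long,
                  firstName: String,
                  lastName: String,
                  email: Option[String]) // Option models a nullable column

val a = Author(0, "Jane", "Doe", None)
val a2 = a.copy(email = Some("jane@example.com")) // new instance; a is unchanged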

Prevent cycles in implicit conversion

I'm wondering, if I have a scenario similar to the following, how I can prevent a cyclic implicit conversion?
Edit: A bit of context: this is for converting between some classes used as ORM entities and case classes used as DTOs.
class Author(var name: String) {
  def books: List[Book] = List(new Book("title", this)) // get books
}
class Book(var title: String, var author: Author)

case class DTOBook(title: String, author: Option[DTOAuthor])
case class DTOAuthor(name: String, books: List[DTOBook])

implicit def author2Author(author: Author): DTOAuthor = {
  DTOAuthor(author.name, author.books.map(x => x: DTOBook): List[DTOBook])
}
implicit def book2Book(book: Book): DTOBook = {
  DTOBook(book.title, Option(book.author: DTOAuthor))
}

val author: DTOAuthor = new Author("John Brown")
The problem is that your data structure is cyclic: an Author contains Books, which contain an Author, which contains Books, and so on.
So when you convert an Author to a DTOAuthor, something like this happens:
1. author2Author is called.
2. Inside the first author2Author call, author.books must be converted to a List[DTOBook].
3. This means the author inside each Book must be converted to a DTOAuthor.
4. Repeat from step 1.
You can work around this by making the author that is nested within each Book have an empty list of books. To do this, you'll have to remove your reliance on the implicit conversion in one place and manually create the nested DTOAuthor with no books.
implicit def book2Book(book: Book): DTOBook = {
  DTOBook(book.title, Option(DTOAuthor(book.author.name, Nil)))
}
This is not an implicits problem, it's a data structure problem: your data structures are cyclic, which complicates things. You'd have exactly the same problem with a "normal", non-implicit conversion function.
You can abuse mutability for this, but the "correct" way is probably the "credit card transformation" described in https://www.haskell.org/haskellwiki/Tying_the_Knot (remember that Scala is not lazy by default, so you need explicit laziness, by passing around functions). But the best solution is probably to see whether you can avoid having these cycles in the data structures at all. In an immutable data structure they're a recipe for pain; for example, think about what happens if you do
val updatedAuthor = dtoAuthor.copy(name="newName")
updatedAuthor.books.head.author.get.name
The author's name has changed, but the book still thinks its author has the old name!
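To make the "explicit laziness" idea concrete, here is one possible sketch (my reconstruction, with hypothetical lazy DTO variants): the DTOs hold thunks instead of fully built values, so each side of the cycle is only converted when it is actually forced, and no conversion recurses forever up front.

// DTOs that defer their cyclic halves behind () => ... thunks
case class LazyDTOBook(title: String, author: () => LazyDTOAuthor)
case class LazyDTOAuthor(name: String, books: () => List[LazyDTOBook])

def convertAuthor(author: Author): LazyDTOAuthor =
  LazyDTOAuthor(author.name, () => author.books.map(convertBook))

def convertBook(book: Book): LazyDTOBook =
  LazyDTOBook(book.title, () => convertAuthor(book.author))

// forcing one level at a time terminates:
// val dto = convertAuthor(new Author("John Brown"))
// dto.books().head.title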

Pre-process parameters of a case class constructor without repeating the argument list

I have this case class with a lot of parameters:
case class Document(id: String, title: String, ...12 more params..., keywords: Seq[String])
For certain parameters, I need to do some string cleanup (trim, etc) before creating the object.
I know I could add a companion object with an apply function, but the LAST thing I want is to write the list of parameters TWICE in my code (case class constructor and companion object's apply).
Does Scala provide anything to help me on this?
My general recommendations would be:
Your goal (data preprocessing) is the perfect use case for a companion object, so it is maybe the most idiomatic solution despite the boilerplate.
If the number of case class parameters is high, the builder pattern definitely helps, since you do not have to remember the order of the parameters and your IDE can help you with calling the builder member functions. Using named arguments for the case class constructor allows an arbitrary argument order as well, but, to my knowledge, there is no IDE autocompletion for named arguments, which makes a builder class slightly more convenient. However, using a builder class raises the question of how to enforce the specification of certain arguments: the simple solution may cause runtime errors, while the type-safe solution is a bit more verbose. In this regard a case class with default arguments is more elegant.
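For reference, a minimal sketch of the companion-object route from the first recommendation (parameter list abbreviated; this assumes Scala 2.12+ or Scala 3, where a user-defined apply with a matching signature replaces the synthetic one):

case class Document(id: String, title: String, keywords: Seq[String])

object Document {
  // the cleanup happens once, at construction time
  def apply(id: String, title: String, keywords: Seq[String]): Document =
    new Document(id.trim, title.trim, keywords.map(_.trim))
}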
There is also this solution: introduce an additional flag preprocessed with a default argument of false. Whenever you want to use an instance val d: Document, you call d.preprocess(), implemented via the case class copy method (to avoid ever typing all your arguments again):
case class Document(id: String, title: String, keywords: Seq[String], preprocessed: Boolean = false) {
  def preprocess() = if (preprocessed) this else {
    this.copy(title = title.trim, preprocessed = true) // or whatever you want to do
  }
}
But: you cannot prevent a client from initializing preprocessed to true.
Another option would be to make some of your parameters private vals and expose corresponding getters for the preprocessed data:
case class Document(id: String, title: String, private val _keywords: Seq[String]) {
  val keywords = _keywords.map(kw => kw.trim)
}
But: Pattern matching and the default toString implementation will not give you quite what you want...
After changing context for half an hour, I looked at this problem with fresh eyes and came up with this:
case class Document(id: String, title: String, var keywords: Seq[String]) {
  keywords = keywords.map(kw => kw.trim)
}
I simply make the argument mutable by adding var and clean up the data in the class body.
OK, I know: my data is not immutable anymore, and Martin Odersky will probably kill a kitten after seeing this, but hey, I managed to do what I want by adding 3 characters. I call this a win :)

scala programming without vars

The concepts of val and var in Scala are understandable enough, I think.
I wanted to do something like this (Java-like):
trait PersonInfo {
  var name: Option[String] = None
  var address: Option[String] = None
  // plus another 30 vars, for example
}

case class Person() extends PersonInfo

object TestObject {
  def main(args: Array[String]): Unit = {
    val p = new Person()
    p.name = Some("someName")
    p.address = Some("someAddress")
  }
}
so I can change the name, address, etc.
This works well enough, but the thing is, in my program I end up with everything as vars.
As I understand it, vals are "preferred" in Scala. How can vals work in this type of example without having to rewrite all 30+ arguments every time one of them is changed?
That is, I could have
trait PersonInfo {
  val name: Option[String]
  val address: Option[String]
  // plus another 30 vals, for example
}

case class Person(name: Option[String] = None, address: Option[String] = None, ...plus another 30...) extends PersonInfo

object TestObject {
  def main(args: Array[String]): Unit = {
    val p = new Person(Some("someName"), Some("someAddress"), .....)
    // and if I want to change one thing, the address for example
    val p2 = new Person(Some("someName"), Some("someOtherAddress"), .....)
  }
}
Is this the "normal" Scala way of doing things (notwithstanding the 22-parameter limit)?
As can be seen, I'm very new to all this.
At first the basic option of Tony K.:
def withName(n: String) = Person(n, address)
looked promising, but I have quite a few classes that extend PersonInfo. That means in each one I would have to re-implement the defs: lots of typing and cutting and pasting just to do something simple.
If I convert the trait PersonInfo to a normal class and put all the defs in it, then I have the problem of how to return a Person, not a PersonInfo.
Is there a clever Scala way to implement this once in the trait or a superclass and have all subclasses really extend it?
As far as I can see, it all works very well in Scala when the examples are very simple, with 2 or 3 parameters, but when you have dozens it becomes very tedious and unworkable.
The PersonContext idea of weirdcanada is, I think, similar; I'm still thinking about that one. I guess if I had 43 parameters I would need to break them up into multiple temporary classes just to pump the parameters into Person.
The copy option is also interesting: cryptic, but a lot less typing.
Coming from Java, I was hoping for some clever tricks from Scala.
Case classes have a pre-defined copy method which you should use for this.
case class Person(name: String, age: Int)
val mike = Person("Mike", 42)
val newMike = mike.copy(age = 43)
How does this work? copy is just one of the methods (besides equals, hashCode etc) that the compiler writes for you. In this example it is:
def copy(name: String = name, age: Int = age): Person = new Person(name, age)
The values name and age in this method shadow the values in the outer scope. As you can see, default values are provided, so you only need to specify the ones that you want to change. The others default to their values in the current instance.
The reason for the existence of var in Scala is to support mutable state. In some cases, mutable state is truly what you want (e.g. for performance or clarity reasons).
You are correct, though, that there is much evidence and experience behind the encouragement to use immutable state. Things work better on many fronts (concurrency, clarity of reasoning, etc.).
One answer to your question is to provide mutator methods to the class in question that don't actually mutate the state, but instead return a new object with a modified entry:
case class Person(name: String, address: String) {
  def withName(n: String) = Person(n, address)
  ...
}
This particular solution does involve coding potentially long parameter lists, but only within the class itself. Users of it get off easy:
val p = Person("Joe", "N St")
val p2 = p.withName("Sam")
...
If you consider the reasons you'd want to mutate state, things become clearer. If you are reading data from a database, you could have many reasons for mutating an object:
1. The database itself changed, and you want to auto-refresh the state of the object in memory.
2. You want to make an update to the database itself.
3. You want to pass an object around and have it mutated by methods all over the place.
In the first case, immutable state is easy:
val updatedObj = oldObj.refresh
The second is much more complex, and there are many ways to handle it (including mutable state with dirty-field tracking). It pays to look at libraries like Squeryl, where you can write things in a nice DSL (see http://squeryl.org/inserts-updates-delete.html) and avoid direct object mutation altogether.
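For example, an update in Squeryl's DSL looks like this (adapted from the linked page; the people table and its fields are hypothetical):

import org.squeryl.PrimitiveTypeMode._

// declarative update: no object is mutated in application code
update(people)(p =>
  where(p.id === 1234L)
  set(p.address := "42 N St")
)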
The final one is the one you generally want to avoid for reasons of complexity. Such things are hard to parallelize, hard to reason about, and lead to all sorts of bugs where one class has a reference to another, but no guarantees about the stability of it. This kind of usage is the one that screams for immutable state of the form we are talking about.
Scala has adopted many paradigms from functional programming, one of them being a focus on objects with immutable state. This means moving away from getters and setters within your classes and instead opting to do what @Tony K. has suggested above: when you need to change the "state" of an inner object, define a function that will return a new Person object.
Trying to use immutable objects is likely the preferred Scala way.
In regards to the 22 parameter issue, you could create a context class that is passed to the constructor of Person:
case class PersonContext(all: String, of: String, your: String, parameters: Int)
class Person(context: PersonContext) extends PersonInfo { ... }
If you find yourself changing an address often and don't want to have to go through the PersonContext rigamarole, you can define a method:
def addressChanger(person: Person, address: String): Person = {
  val contextWithNewAddress = ...
  Person(contextWithNewAddress)
}
You could take this even further, and define a method on Person:
class Person(context: PersonContext) extends PersonInfo {
  ...
  def newAddress(address: String): Person = {
    addressChanger(this, address)
  }
}
In your code, you just need to remember that when you update your objects, you're often getting new objects in return. Once you get used to that concept, it becomes very natural.

scala: map-like structure that doesn't require casting when fetching a value?

I'm writing a data structure that converts the results of a database query. The raw structure is a Java ResultSet, and it would be converted to a map or class which permits accessing different fields of that data structure by either a named method call or by passing a string into apply(). Clearly, different values may have different types. In order to reduce the burden on the clients of this data structure, my preference is that one not need to cast the values, but that the value fetched still has the correct type.
For example, suppose I'm doing a query that fetches two column values, one an Int, the other a String, where the names of the columns are "a" and "b" respectively. Some ideal syntax might be the following:
val javaResultSet = dbQuery("select a, b from table limit 1")
// with ResultSet, particular values can be accessed like this:
val a = javaResultSet.getInt("a")
val b = javaResultSet.getString("b")
// but this syntax is undesirable.
// since I want to convert this to a single data structure,
// the preferred syntax might look something like this:
val newStructure = toDataStructure[Int, String](javaResultSet)("a", "b")
// that is, I'm willing to state the types during the instantiation
// of such a data structure.
// then,
val a: Int = newStructure("a") // OR
val a: Int = newStructure.a
// in both cases, "val a" does not require asInstanceOf[Int].
I've been trying to determine what sort of data structure might allow this and I could not figure out a way around the casting.
The other requirement is obviously that I would like to define a single data structure used for all db queries. I realize I could easily define a case class or similar per call and that solves the typing issue, but such a solution does not scale well when many db queries are being written. I suspect some people are going to propose using some sort of ORM, but let us assume for my case that it is preferred to maintain the query in the form of a string.
Anyone have any suggestions? Thanks!
To do this without casting, one needs more information about the query, and one needs that information at compile time.
"I suspect some people are going to propose using some sort of ORM, but let us assume for my case that it is preferred to maintain the query in the form of a string."
Your suspicion is right and you will not get around this. If current ORMs or DSLs like Squeryl don't suit your fancy, you can create your own one. But I doubt you will be able to use query strings.
The basic problem is that you don't know how many columns there will be in any given query, so you don't know how many type parameters the data structure should have, and it's not possible to abstract over the number of type parameters.
There is, however, a data structure that exists in different variants for different numbers of type parameters: the tuple (e.g. Tuple2, Tuple3, etc.). You could define parameterized mapping functions for different numbers of parameters that return tuples, like this:
def toDataStructure2[T1, T2](rs: ResultSet)(c1: String, c2: String): (T1, T2) =
  (rs.getObject(c1).asInstanceOf[T1],
   rs.getObject(c2).asInstanceOf[T2])

def toDataStructure3[T1, T2, T3](rs: ResultSet)(c1: String, c2: String, c3: String): (T1, T2, T3) =
  (rs.getObject(c1).asInstanceOf[T1],
   rs.getObject(c2).asInstanceOf[T2],
   rs.getObject(c3).asInstanceOf[T3])
You would have to define these for as many columns as you expect to have in your tables (at most 22).
This depends, of course, on whether using getObject and casting it to a given type is safe.
In your example you could use the resulting tuple as follows:
val (a, b) = toDataStructure2[Int, String](javaResultSet)("a", "b")
If you decide to go the route of heterogeneous collections, there are some very interesting posts on heterogeneously typed lists. One series, for instance, is:
http://jnordenberg.blogspot.com/2008/08/hlist-in-scala.html
http://jnordenberg.blogspot.com/2008/09/hlist-in-scala-revisited-or-scala.html
with an implementation at
http://www.assembla.com/wiki/show/metascala
A second great series of posts starts with
http://apocalisp.wordpress.com/2010/07/06/type-level-programming-in-scala-part-6a-heterogeneous-list%C2%A0basics/
and continues with parts b, c, and d, linked from part a.
Finally, there is a talk by Daniel Spiewak which touches on HOMaps:
http://vimeo.com/13518456
All this to say that perhaps you can build your solution from these ideas. Sorry that I don't have a specific example, but I admit I haven't tried these out yet myself!
Joshua Bloch introduced a heterogeneous collection that can be written in Java. I once adapted it a little; it now works as a value register. It is basically a wrapper around two maps. Here is the code and this is how you can use it. But this is just FYI, since you are interested in a Scala solution.
In Scala I would start by playing with tuples. Tuples are kind of heterogeneous collections. The results can be, but do not have to be, accessed through fields like _1, _2, _3 and so on. But you don't want that; you want names. This is how you can assign names to them:
scala> val tuple = (1, "word")
tuple: (Int, String) = (1,word)

scala> val (a, b) = tuple
a: Int = 1
b: String = word
So as mentioned before I would try to build a ResultSetWrapper around tuples.
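A sketch of what such a wrapper could look like (hypothetical; essentially the toDataStructure2 idea from the answer above, packaged as a class):

import java.sql.ResultSet

class ResultSetWrapper(rs: ResultSet) {
  // the caller states the column types once; the unsafe cast is hidden in one place
  def tuple2[T1, T2](c1: String, c2: String): (T1, T2) =
    (rs.getObject(c1).asInstanceOf[T1], rs.getObject(c2).asInstanceOf[T2])
}

// usage:
// val (a, b) = new ResultSetWrapper(javaResultSet).tuple2[Int, String]("a", "b")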
If you want to "extract the column value by name" from a plain bean instance, you can probably:
1. use reflection and casts, which you (and I) don't like;
2. use a ResultSetToJavaBeanMapper provided by most ORM libraries, which is a little heavy and coupled;
3. write a Scala compiler plugin, which is too complex to control.
So I guess a lightweight ORM with the following features may satisfy you:
- support for raw SQL
- a lightweight, declarative and adaptive ResultSetToJavaBeanMapper
- nothing else.
I made an experimental project based on that idea. Note that it's still an ORM, but I think it may be useful to you, or at least give you some hints.
Usage:
First, declare the model:
// declare the DB schema
trait UserDef extends TableDef {
  var name = property[String]("name", title = Some("姓名"))
  var age1 = property[Int]("age", primary = true)
}

// declare the model; it mixes in properties as {var name = ""}
@BeanInfo class User extends Model with UserDef

// declare an object.
// it mixes in properties as {var name = Property[String]("name")},
// and object User is a Mapper[User]; thus, it can translate a ResultSet to a User instance.
object `package` {
  @BeanInfo implicit object User extends Table[User]("users") with UserDef
}
Then call raw SQL; the implicit Mapper[User] works for you:
val users = SQL("select name, age from users").all[User]
users.foreach{user => println(user.name)}
Or even build a type-safe query:
val users = User.q.where(User.age > 20).where(User.name like "%liu%").all[User]
For more, see the unit tests:
https://github.com/liusong1111/soupy-orm/blob/master/src/test/scala/mapper/SoupyMapperSpec.scala
Project home:
https://github.com/liusong1111/soupy-orm
It uses abstract types and implicits heavily to make the magic happen; you can check the source code of TableDef, Table, and Model for details.
Several million years ago I wrote an example showing how to use Scala's type system to push and pull values from a ResultSet. Check it out; it matches up with what you want to do fairly closely.
implicit val conn = connect("jdbc:h2:f2", "sa", "");
implicit val s: Statement = conn << setup;

val insertPerson = conn prepareStatement "insert into person(type, name) values(?, ?)";

for (name <- names)
  insertPerson << rnd.nextInt(10) << name <<!;

for (person <- query("select * from person", rs => Person(rs, rs, rs)))
  println(person.toXML);

for (person <- "select * from person" <<! (rs => Person(rs, rs, rs)))
  println(person.toXML);
Primitive types are used to guide the Scala compiler into selecting the right functions on the ResultSet.
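In case the link goes stale, here is a reconstruction of the trick (my sketch with hypothetical names, not the original code): each implicit conversion reads the next column off a cursor-carrying wrapper, so the expected type of each constructor argument selects the right getter.

import java.sql.ResultSet
import scala.language.implicitConversions

class Row(rs: ResultSet) {
  private var col = 0
  private def advance() = { col += 1; col }
  def int: Int = rs.getInt(advance())
  def string: String = rs.getString(advance())
}

object Row {
  // implicits in the companion are found automatically for conversions from Row
  implicit def rowToInt(r: Row): Int = r.int
  implicit def rowToString(r: Row): String = r.string
}

case class Person(id: Int, name: String)

// Person(row, row) compiles: the expected parameter types pick the
// conversions, reading column 1 as an Int and column 2 as a String
// val row = new Row(resultSet); val p = Person(row, row)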